Posted to builds@beam.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2020/04/22 12:37:43 UTC

Build failed in Jenkins: beam_PerformanceTests_HadoopFormat #1453

See <https://builds.apache.org/job/beam_PerformanceTests_HadoopFormat/1453/display/redirect>

Changes:


------------------------------------------
[...truncated 351.93 KB...]
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:311)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.RuntimeException: org.postgresql.util.PSQLException: The connection attempt failed.
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.createConnection(DBInputFormat.java:205)
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.setConf(DBInputFormat.java:164)
    	... 18 more
    Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
    	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:257)
    	at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    	at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
    	at org.postgresql.Driver.makeConnection(Driver.java:452)
    	at org.postgresql.Driver.connect(Driver.java:254)
    	at java.sql.DriverManager.getConnection(DriverManager.java:664)
    	at java.sql.DriverManager.getConnection(DriverManager.java:247)
    	at org.apache.hadoop.mapreduce.lib.db.DBConfiguration.getConnection(DBConfiguration.java:154)
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.createConnection(DBInputFormat.java:198)
    	... 19 more
    Caused by: java.net.SocketTimeoutException: connect timed out
    	at java.net.PlainSocketImpl.socketConnect(Native Method)
    	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    	at java.net.Socket.connect(Socket.java:589)
    	at org.postgresql.core.PGStream.<init>(PGStream.java:69)
    	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:156)
    	... 27 more

    Apr 22, 2020 12:35:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2020-04-22T12:35:19.258Z: java.lang.RuntimeException: java.lang.RuntimeException: org.postgresql.util.PSQLException: The connection attempt failed.
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.setConf(DBInputFormat.java:171)
    	at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO$HadoopInputFormatBoundedSource.createInputFormatInstance(HadoopFormatIO.java:722)
    	at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO$HadoopInputFormatBoundedSource.computeSplitsIfNecessary(HadoopFormatIO.java:678)
    	at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO$HadoopInputFormatBoundedSource.split(HadoopFormatIO.java:640)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.splitAndValidate(WorkerCustomSources.java:284)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitTyped(WorkerCustomSources.java:206)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitWithApiLimit(WorkerCustomSources.java:190)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplit(WorkerCustomSources.java:169)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSourceOperationExecutor.execute(WorkerCustomSourceOperationExecutor.java:78)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:417)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:386)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:311)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: java.lang.RuntimeException: org.postgresql.util.PSQLException: The connection attempt failed.
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.createConnection(DBInputFormat.java:205)
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.setConf(DBInputFormat.java:164)
    	... 18 more
    Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
    	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:257)
    	at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
    	at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
    	at org.postgresql.Driver.makeConnection(Driver.java:452)
    	at org.postgresql.Driver.connect(Driver.java:254)
    	at java.sql.DriverManager.getConnection(DriverManager.java:664)
    	at java.sql.DriverManager.getConnection(DriverManager.java:247)
    	at org.apache.hadoop.mapreduce.lib.db.DBConfiguration.getConnection(DBConfiguration.java:154)
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.createConnection(DBInputFormat.java:198)
    	... 19 more
    Caused by: java.net.SocketTimeoutException: connect timed out
    	at java.net.PlainSocketImpl.socketConnect(Native Method)
    	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    	at java.net.Socket.connect(Socket.java:589)
    	at org.postgresql.core.PGStream.<init>(PGStream.java:69)
    	at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:156)
    	... 27 more
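
    The root cause of the two SEVERE entries above is a JDBC connect timeout while DBInputFormat opens its
    PostgreSQL connection to compute splits. For reference, a minimal, self-contained sketch of that kind of
    connection attempt is below; the endpoint, credentials, and the "connectTimeout" driver property are
    placeholders/assumptions rather than values taken from this job, and it assumes the PostgreSQL JDBC driver
    is on the classpath.

    // Illustrative only: a plain JDBC connection attempt of the kind that timed out above.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.Properties;

    public class ConnectSketch {
      public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "postgres");       // placeholder credentials
        props.setProperty("password", "changeme");
        props.setProperty("connectTimeout", "10");   // seconds; fail fast instead of hanging (driver option, assumed)
        String url = "jdbc:postgresql://example-host:5432/postgres"; // placeholder endpoint
        try (Connection conn = DriverManager.getConnection(url, props)) {
          System.out.println("connected: " + !conn.isClosed());
        }
      }
    }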

    Apr 22, 2020 12:35:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2020-04-22T12:35:29.904Z: java.io.IOException: Got SQLException
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.getSplits(DBInputFormat.java:285)
    	at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO$HadoopInputFormatBoundedSource.computeSplitsIfNecessary(HadoopFormatIO.java:679)
    	at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO$HadoopInputFormatBoundedSource.split(HadoopFormatIO.java:640)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.splitAndValidate(WorkerCustomSources.java:284)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitTyped(WorkerCustomSources.java:206)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitWithApiLimit(WorkerCustomSources.java:190)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplit(WorkerCustomSources.java:169)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSourceOperationExecutor.execute(WorkerCustomSourceOperationExecutor.java:78)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:417)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:386)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:311)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: org.postgresql.util.PSQLException: ERROR: relation "beamtest_hadoopformatioit_2020_04_22_12_28_39_270" does not exist
      Position: 22
    	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
    	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
    	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
    	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
    	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
    	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
    	at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
    	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
    	at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:224)
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.getSplits(DBInputFormat.java:256)
    	... 17 more

    Apr 22, 2020 12:35:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2020-04-22T12:35:31.160Z: java.io.IOException: Got SQLException
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.getSplits(DBInputFormat.java:285)
    	at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO$HadoopInputFormatBoundedSource.computeSplitsIfNecessary(HadoopFormatIO.java:679)
    	at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIO$HadoopInputFormatBoundedSource.split(HadoopFormatIO.java:640)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.splitAndValidate(WorkerCustomSources.java:284)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitTyped(WorkerCustomSources.java:206)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplitWithApiLimit(WorkerCustomSources.java:190)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSources.performSplit(WorkerCustomSources.java:169)
    	at org.apache.beam.runners.dataflow.worker.WorkerCustomSourceOperationExecutor.execute(WorkerCustomSourceOperationExecutor.java:78)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.executeWork(BatchDataflowWorker.java:417)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.doWork(BatchDataflowWorker.java:386)
    	at org.apache.beam.runners.dataflow.worker.BatchDataflowWorker.getAndPerformWork(BatchDataflowWorker.java:311)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.doWork(DataflowBatchWorkerHarness.java:140)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:120)
    	at org.apache.beam.runners.dataflow.worker.DataflowBatchWorkerHarness$WorkerThread.call(DataflowBatchWorkerHarness.java:107)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Caused by: org.postgresql.util.PSQLException: ERROR: relation "beamtest_hadoopformatioit_2020_04_22_12_28_39_270" does not exist
      Position: 22
    	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
    	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
    	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
    	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
    	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
    	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
    	at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
    	at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
    	at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:224)
    	at org.apache.hadoop.mapreduce.lib.db.DBInputFormat.getSplits(DBInputFormat.java:256)
    	... 17 more

    Apr 22, 2020 12:35:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2020-04-22T12:35:31.197Z: Finished operation Read using Hadoop InputFormat/Read(HadoopInputFormatBoundedSource)+Collect read time+Get values only/Values/Map+Values as string+Calculate hashcode/WithKeys/AddKeys/Map+Calculate hashcode/Combine.perKey(Hashing)/GroupByKey+Calculate hashcode/Combine.perKey(Hashing)/Combine.GroupedValues/Partial+Calculate hashcode/Combine.perKey(Hashing)/GroupByKey/Reify+Calculate hashcode/Combine.perKey(Hashing)/GroupByKey/Write
    Apr 22, 2020 12:35:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    SEVERE: 2020-04-22T12:35:31.312Z: Workflow failed. Causes: S05:Read using Hadoop InputFormat/Read(HadoopInputFormatBoundedSource)+Collect read time+Get values only/Values/Map+Values as string+Calculate hashcode/WithKeys/AddKeys/Map+Calculate hashcode/Combine.perKey(Hashing)/GroupByKey+Calculate hashcode/Combine.perKey(Hashing)/Combine.GroupedValues/Partial+Calculate hashcode/Combine.perKey(Hashing)/GroupByKey/Reify+Calculate hashcode/Combine.perKey(Hashing)/GroupByKey/Write failed., Internal Issue (a5f9dfc8e03ccb8f): 63963027:24514
    Apr 22, 2020 12:35:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2020-04-22T12:35:31.448Z: Cleaning up.
    Apr 22, 2020 12:35:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2020-04-22T12:35:31.556Z: Stopping worker pool...
    Apr 22, 2020 12:37:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2020-04-22T12:37:28.215Z: Autoscaling: Resized worker pool from 5 to 0.
    Apr 22, 2020 12:37:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2020-04-22T12:37:28.269Z: Worker pool stopped.
    Apr 22, 2020 12:37:33 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2020-04-22_05_33_40-17346139990843994430 failed with status FAILED.

org.apache.beam.sdk.io.hadoop.format.HadoopFormatIOIT > writeAndReadUsingHadoopFormat STANDARD_OUT
    Load test results for test (ID): 5130c9f8-7b0e-4662-a8b2-4eb95e1924fc and timestamp: 2020-04-22T12:37:33.346000000Z:
                     Metric:                    Value:
                   read_time                       0.0

org.apache.beam.sdk.io.hadoop.format.HadoopFormatIOIT > writeAndReadUsingHadoopFormat FAILED
    java.lang.IllegalStateException: Unable to fetch table size
        at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIOIT.lambda$getWriteSuppliers$0(HadoopFormatIOIT.java:231)
        at java.util.Optional.orElseThrow(Optional.java:290)
        at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIOIT.lambda$getWriteSuppliers$1(HadoopFormatIOIT.java:231)
        at org.apache.beam.sdk.testutils.metrics.MetricsReader.lambda$readAll$0(MetricsReader.java:107)
        at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
        at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1556)
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
        at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
        at org.apache.beam.sdk.testutils.metrics.MetricsReader.readAll(MetricsReader.java:107)
        at org.apache.beam.sdk.testutils.metrics.IOITMetrics.publish(IOITMetrics.java:55)
        at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIOIT.collectAndPublishMetrics(HadoopFormatIOIT.java:217)
        at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIOIT.writeAndReadUsingHadoopFormat(HadoopFormatIOIT.java:201)

org.apache.beam.sdk.io.hadoop.format.HadoopFormatIOIT STANDARD_ERROR
    Apr 22, 2020 12:37:33 PM org.apache.beam.sdk.io.common.IOITHelper executeWithRetry
    WARNING: Attempt #1 of 3 failed: ERROR: table "beamtest_hadoopformatioit_2020_04_22_12_28_39_270" does not exist.
    Apr 22, 2020 12:37:33 PM org.apache.beam.sdk.io.common.IOITHelper executeWithRetry
    WARNING: Retrying in 2000 ms.
    Apr 22, 2020 12:37:35 PM org.apache.beam.sdk.io.common.IOITHelper executeWithRetry
    WARNING: Attempt #2 of 3 failed: ERROR: table "beamtest_hadoopformatioit_2020_04_22_12_28_39_270" does not exist.
    Apr 22, 2020 12:37:35 PM org.apache.beam.sdk.io.common.IOITHelper executeWithRetry
    WARNING: Retrying in 4000 ms.
    Apr 22, 2020 12:37:39 PM org.apache.beam.sdk.io.common.IOITHelper executeWithRetry
    WARNING: Attempt #3 of 3 failed: ERROR: table "beamtest_hadoopformatioit_2020_04_22_12_28_39_270" does not exist.
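
    The WARNING lines above show IOITHelper.executeWithRetry making three attempts, with the delay doubling from
    2000 ms to 4000 ms. A generic retry-with-backoff helper of that shape might look roughly like the sketch
    below; this is an illustrative assumption, not the actual IOITHelper implementation.

    // Illustrative only: retry with exponential backoff, mirroring the attempt/delay pattern in the log above.
    import java.util.concurrent.Callable;

    public class RetrySketch {
      static <T> T executeWithRetry(int attempts, long initialDelayMillis, Callable<T> action) throws Exception {
        long delay = initialDelayMillis;
        Exception last = null;
        for (int attempt = 1; attempt <= attempts; attempt++) {
          try {
            return action.call();
          } catch (Exception e) {
            last = e;
            System.err.printf("Attempt #%d of %d failed: %s%n", attempt, attempts, e.getMessage());
            if (attempt < attempts) {
              System.err.printf("Retrying in %d ms.%n", delay);
              Thread.sleep(delay);
              delay *= 2; // exponential backoff
            }
          }
        }
        throw last; // all attempts exhausted
      }

      public static void main(String[] args) throws Exception {
        int[] calls = {0};
        String result = executeWithRetry(3, 2000, () -> {
          if (++calls[0] < 3) {
            throw new IllegalStateException("simulated transient failure");
          }
          return "succeeded on attempt " + calls[0];
        });
        System.out.println(result);
      }
    }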

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:io:hadoop-format:integrationTest FAILED

org.apache.beam.sdk.io.hadoop.format.HadoopFormatIOIT > classMethod FAILED
    org.postgresql.util.PSQLException: ERROR: table "beamtest_hadoopformatioit_2020_04_22_12_28_39_270" does not exist
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
        at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
        at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
        at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
        at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
        at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
        at org.postgresql.jdbc.PgStatement.executeUpdate(PgStatement.java:244)
        at org.apache.beam.sdk.io.common.DatabaseTestHelper.deleteTable(DatabaseTestHelper.java:65)
        at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIOIT.deleteTable(HadoopFormatIOIT.java:163)
        at org.apache.beam.sdk.io.common.IOITHelper.executeWithRetry(IOITHelper.java:86)
        at org.apache.beam.sdk.io.common.IOITHelper.executeWithRetry(IOITHelper.java:66)
        at org.apache.beam.sdk.io.hadoop.format.HadoopFormatIOIT.tearDown(HadoopFormatIOIT.java:159)
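
    The classMethod failure above is tearDown trying to drop a table that was never created because the write
    pipeline failed earlier. A teardown that tolerates a missing table can issue DROP TABLE IF EXISTS, as in the
    hedged sketch below; the connection details and table name are placeholders, and this is not the
    DatabaseTestHelper code the test uses.

    // Illustrative only: a teardown that is a no-op when the table was never created.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TeardownSketch {
      public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://example-host:5432/postgres"; // placeholder endpoint
        try (Connection conn = DriverManager.getConnection(url, "postgres", "changeme");
            Statement stmt = conn.createStatement()) {
          // IF EXISTS avoids a PSQLException when the table is already absent
          // (e.g. because the write pipeline failed before creating it).
          stmt.executeUpdate("DROP TABLE IF EXISTS beamtest_example_table");
        }
      }
    }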

2 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://builds.apache.org/job/beam_PerformanceTests_HadoopFormat/ws/src/sdks/java/io/hadoop-format/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://builds.apache.org/job/beam_PerformanceTests_HadoopFormat/ws/src/sdks/java/io/hadoop-format/build/reports/tests/integrationTest>
:sdks:java:io:hadoop-format:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 9 mins 4.016 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:io:hadoop-format:integrationTest'.
> There were failing tests. See the report at: file://<https://builds.apache.org/job/beam_PerformanceTests_HadoopFormat/ws/src/sdks/java/io/hadoop-format/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.2.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 9m 54s
91 actionable tasks: 59 executed, 32 from cache

Publishing build scan...
https://gradle.com/s/mcii5ypqqppjq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Jenkins build is back to normal : beam_PerformanceTests_HadoopFormat #1454

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/beam_PerformanceTests_HadoopFormat/1454/display/redirect?page=changes>

