Posted to builds@beam.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2020/10/15 03:01:15 UTC

beam_PreCommit_Python_Cron - Build # 3365 - Aborted!

Check console output at https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/3365/ to view the results.

Jenkins build is back to normal : beam_PreCommit_Python_Cron #3369

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/3369/display/redirect?page=changes>


---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_PreCommit_Python_Cron #3368

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/3368/display/redirect?page=changes>

Changes:

[Chad Dombrova] Update mypy to 0.782

[Chad Dombrova] [BEAM-7746] Add type checking to transforms

[Chad Dombrova] fixes

[piotr.szuberski] [BEAM-5551 BEAM-5595 BEAM-6090 BEAM-6091 BEAM-6092 BEAM-6093] Update

[noreply] [BEAM-10587] Support Maps in BigQuery (#12389)

[noreply] [BEAM-9547] Allow wrapping multiple return values. (#13104)


------------------------------------------
[...truncated 1.47 MB...]
      }
    },
    {
      "kind": "ParallelWrite",
      "name": "s10",
      "properties": {
        "display_data": [],
        "encoding": {
          "@type": "kind:windowed_value",
          "component_encodings": [
            {
              "@type": "kind:bytes"
            },
            {
              "@type": "kind:global_window"
            }
          ],
          "is_wrapper": true
        },
        "format": "pubsub",
        "parallel_input": {
          "@type": "OutputReference",
          "output_name": "None",
          "step_name": "s9"
        },
        "pubsub_topic": "projects/apache-beam-testing/topics/wc_topic_output6be791a6-cb5a-4473-9f83-265d4998f5da",
        "user_name": "WriteToPubSub/Write/NativeWrite"
      }
    }
  ],
  "type": "JOB_TYPE_STREAMING"
}
INFO:apache_beam.runners.dataflow.internal.apiclient:Create job: <Job
 createTime: '2020-10-15T18:34:48.739072Z'
 currentStateTime: '1970-01-01T00:00:00Z'
 id: '2020-10-15_11_34_47-16300349462787766432'
 location: 'us-central1'
 name: 'beamapp-jenkins-1015183440-580727'
 projectId: 'apache-beam-testing'
 stageStates: []
 startTime: '2020-10-15T18:34:48.739072Z'
 steps: []
 tempFiles: []
 type: TypeValueValuesEnum(JOB_TYPE_STREAMING, 2)>
INFO:apache_beam.runners.dataflow.internal.apiclient:Created job with id: [2020-10-15_11_34_47-16300349462787766432]
INFO:apache_beam.runners.dataflow.internal.apiclient:Submitted job: 2020-10-15_11_34_47-16300349462787766432
INFO:apache_beam.runners.dataflow.internal.apiclient:To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2020-10-15_11_34_47-16300349462787766432?project=apache-beam-testing
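
For context, the ParallelWrite step named WriteToPubSub/Write/NativeWrite in the job graph above comes from the streaming wordcount integration test, whose pipeline has roughly the following shape. This is a minimal sketch assuming the standard Beam streaming wordcount example; the subscription/topic names, window size, and options are placeholders, not the exact test code (which also passes runner, project, region, and staging options).

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

options = PipelineOptions(streaming=True)  # placeholder; the IT adds Dataflow-specific flags

with beam.Pipeline(options=options) as p:
    _ = (p
         | 'ReadFromPubSub' >> beam.io.ReadFromPubSub(
             subscription='projects/<project>/subscriptions/<input-subscription>')
         | 'decode' >> beam.Map(lambda b: b.decode('utf-8'))
         | 'split' >> beam.FlatMap(lambda line: line.split())
         | 'pair_with_one' >> beam.Map(lambda word: (word, 1))
         | 'WindowInto' >> beam.WindowInto(window.FixedWindows(15))
         | 'group' >> beam.GroupByKey()
         | 'count' >> beam.Map(lambda kv: (kv[0], sum(kv[1])))
         | 'format' >> beam.Map(lambda kv: '%s: %d' % kv)
         | 'encode' >> beam.Map(lambda s: s.encode('utf-8'))
         | 'WriteToPubSub' >> beam.io.WriteToPubSub(
             topic='projects/<project>/topics/<output-topic>'))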

> Task :sdks:python:test-suites:tox:py38:testPy38Cython
collected 3968 items / 3906 deselected / 3 skipped / 59 selected

apache_beam/coders/fast_coders_test.py ..............................    [ 48%]
apache_beam/coders/slow_coders_test.py ..............................    [ 96%]
apache_beam/io/gcp/tests/bigquery_matcher_test.py s                      [ 98%]
apache_beam/runners/portability/stager_test.py .                         [100%]

=============================== warnings summary ===============================
apache_beam/io/vcfio.py:49
  <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/ws/src/sdks/python/test-suites/tox/py38/build/srcs/sdks/python/apache_beam/io/vcfio.py>:49: UserWarning: VCF IO will support Python 3 after migration to Nucleus, see: BEAM-5628.
    warnings.warn(

target/.tox-py38-cython/py38-cython/lib/python3.8/site-packages/tenacity/_asyncio.py:42
  <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/ws/src/sdks/python/test-suites/tox/py38/build/srcs/sdks/python/target/.tox-py38-cython/py38-cython/lib/python3.8/site-packages/tenacity/_asyncio.py>:42: DeprecationWarning: "@coroutine" decorator is deprecated since Python 3.8, use "async def" instead
    def call(self, fn, *args, **kwargs):

-- Docs: https://docs.pytest.org/en/latest/warnings.html
- generated xml file: <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/ws/src/sdks/python/test-suites/tox/py38/build/srcs/sdks/python/pytest_py38-cython_no_xdist.xml> -
====== 61 passed, 4 skipped, 3906 deselected, 2 warnings in 8.89 seconds =======
py38-cython run-test-post: commands[0] | bash <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/ws/src/sdks/python/test-suites/tox/py38/build/srcs/sdks/python/scripts/run_tox_cleanup.sh>
___________________________________ summary ____________________________________
  py38-cython: commands succeeded
  congratulations :)
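
The DeprecationWarning reported above from tenacity/_asyncio.py concerns the pre-3.8 generator-based coroutine style. A minimal illustration of the migration the warning suggests (illustrative only, not the actual tenacity code):

import asyncio

# Deprecated since Python 3.8: generator-based coroutines via @asyncio.coroutine.
@asyncio.coroutine
def call_old(fn, *args, **kwargs):
    result = yield from fn(*args, **kwargs)
    return result

# Replacement suggested by the warning: native coroutines with async def / await.
async def call_new(fn, *args, **kwargs):
    return await fn(*args, **kwargs)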

> Task :sdks:python:test-suites:dataflow:py37:preCommitIT_streaming_V2
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2020-10-15_11_34_37-5769165017427102767 is in state JOB_STATE_RUNNING
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:37.518Z: JOB_MESSAGE_WARNING: Autoscaling is enabled for Dataflow Streaming Engine. Workers will scale between 1 and 100 unless maxNumWorkers is specified.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:37.518Z: JOB_MESSAGE_DETAILED: Autoscaling was automatically enabled for job 2020-10-15_11_34_37-5769165017427102767.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:37.518Z: JOB_MESSAGE_DETAILED: Autoscaling is enabled for job 2020-10-15_11_34_37-5769165017427102767. The number of workers will be between 1 and 100.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:42.253Z: JOB_MESSAGE_BASIC: Worker configuration: n1-standard-2 in us-central1-c.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.206Z: JOB_MESSAGE_DETAILED: Expanding SplittableParDo operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.246Z: JOB_MESSAGE_DETAILED: Expanding CollectionToSingleton operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.318Z: JOB_MESSAGE_DETAILED: Expanding CoGroupByKey operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.343Z: JOB_MESSAGE_DEBUG: Combiner lifting skipped for step group: GroupByKey not followed by a combiner.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.396Z: JOB_MESSAGE_DETAILED: Expanding SplittableProcessKeyed operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.429Z: JOB_MESSAGE_DETAILED: Expanding GroupByKey operations into streaming Read/Write steps
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.483Z: JOB_MESSAGE_DETAILED: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.559Z: JOB_MESSAGE_DEBUG: Annotating graph with Autotuner information.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.597Z: JOB_MESSAGE_DETAILED: Fusing adjacent ParDo, Read, Write, and Flatten operations
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.621Z: JOB_MESSAGE_DETAILED: Fusing consumer decode into ReadFromPubSub/Read
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.655Z: JOB_MESSAGE_DETAILED: Fusing consumer split into decode
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.676Z: JOB_MESSAGE_DETAILED: Fusing consumer pair_with_one into split
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.707Z: JOB_MESSAGE_DETAILED: Fusing consumer WindowInto(WindowIntoFn) into pair_with_one
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.740Z: JOB_MESSAGE_DETAILED: Fusing consumer group/WriteStream into WindowInto(WindowIntoFn)
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.773Z: JOB_MESSAGE_DETAILED: Fusing consumer group/MergeBuckets into group/ReadStream
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.810Z: JOB_MESSAGE_DETAILED: Fusing consumer count into group/MergeBuckets
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.847Z: JOB_MESSAGE_DETAILED: Fusing consumer format into count
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.878Z: JOB_MESSAGE_DETAILED: Fusing consumer encode into format
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.909Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToPubSub/Write/NativeWrite into encode
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.937Z: JOB_MESSAGE_BASIC: The pubsub read for: projects/apache-beam-testing/subscriptions/wc_subscription_input76bffe99-dad9-4b25-85bb-1a06584ed34a is configured to compute input data watermarks based on custom timestamp attribute . Cloud Dataflow has created an additional tracking subscription to do this, which will be cleaned up automatically. For details, see: https://cloud.google.com/dataflow/model/pubsub-io#timestamps-ids
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:45.977Z: JOB_MESSAGE_DEBUG: Workflow config is missing a default resource spec.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:46.009Z: JOB_MESSAGE_DEBUG: Adding StepResource setup and teardown to workflow graph.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:46.035Z: JOB_MESSAGE_DEBUG: Adding workflow start and stop steps.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:46.078Z: JOB_MESSAGE_DEBUG: Assigning stage ids.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:47.622Z: JOB_MESSAGE_DETAILED: Pub/Sub resources set up for topic 'projects/apache-beam-testing/topics/wc_topic_input76bffe99-dad9-4b25-85bb-1a06584ed34a'.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:48.726Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:48.761Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:48.801Z: JOB_MESSAGE_BASIC: Starting 1 workers in us-central1-c...
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:35:06.428Z: JOB_MESSAGE_BASIC: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:35:11.654Z: JOB_MESSAGE_DETAILED: Autoscaling: Raised the number of workers to 1 so that the pipeline can catch up with its backlog and keep up with its input rate.
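
The JOB_MESSAGE_BASIC above about computing input data watermarks from a custom timestamp attribute corresponds to the timestamp_attribute parameter of ReadFromPubSub. A hedged sketch of that usage; the attribute name 'ts' and the subscription are placeholders, not taken from the test:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
    _ = (p
         | beam.io.ReadFromPubSub(
             subscription='projects/<project>/subscriptions/<input-subscription>',
             timestamp_attribute='ts')  # element timestamps and watermarks come from this attribute
         | beam.Map(print))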

> Task :sdks:python:test-suites:dataflow:py36:preCommitIT_streaming_V2
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2020-10-15_11_34_47-16300349462787766432 is in state JOB_STATE_RUNNING
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:47.131Z: JOB_MESSAGE_DETAILED: Autoscaling is enabled for job 2020-10-15_11_34_47-16300349462787766432. The number of workers will be between 1 and 100.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:47.131Z: JOB_MESSAGE_DETAILED: Autoscaling was automatically enabled for job 2020-10-15_11_34_47-16300349462787766432.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:47.131Z: JOB_MESSAGE_WARNING: Autoscaling is enabled for Dataflow Streaming Engine. Workers will scale between 1 and 100 unless maxNumWorkers is specified.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:56.844Z: JOB_MESSAGE_BASIC: Worker configuration: n1-standard-2 in us-central1-c.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:58.752Z: JOB_MESSAGE_DETAILED: Expanding SplittableParDo operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:58.786Z: JOB_MESSAGE_DETAILED: Expanding CollectionToSingleton operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:58.928Z: JOB_MESSAGE_DETAILED: Expanding CoGroupByKey operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:58.961Z: JOB_MESSAGE_DEBUG: Combiner lifting skipped for step group: GroupByKey not followed by a combiner.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.010Z: JOB_MESSAGE_DETAILED: Expanding SplittableProcessKeyed operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.050Z: JOB_MESSAGE_DETAILED: Expanding GroupByKey operations into streaming Read/Write steps
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.119Z: JOB_MESSAGE_DETAILED: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.184Z: JOB_MESSAGE_DEBUG: Annotating graph with Autotuner information.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.228Z: JOB_MESSAGE_DETAILED: Fusing adjacent ParDo, Read, Write, and Flatten operations
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.261Z: JOB_MESSAGE_DETAILED: Fusing consumer decode into ReadFromPubSub/Read
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.293Z: JOB_MESSAGE_DETAILED: Fusing consumer split into decode
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.326Z: JOB_MESSAGE_DETAILED: Fusing consumer pair_with_one into split
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.359Z: JOB_MESSAGE_DETAILED: Fusing consumer WindowInto(WindowIntoFn) into pair_with_one
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.382Z: JOB_MESSAGE_DETAILED: Fusing consumer group/WriteStream into WindowInto(WindowIntoFn)
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.402Z: JOB_MESSAGE_DETAILED: Fusing consumer group/MergeBuckets into group/ReadStream
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.434Z: JOB_MESSAGE_DETAILED: Fusing consumer count into group/MergeBuckets
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.467Z: JOB_MESSAGE_DETAILED: Fusing consumer format into count
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.502Z: JOB_MESSAGE_DETAILED: Fusing consumer encode into format
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.534Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToPubSub/Write/NativeWrite into encode
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.575Z: JOB_MESSAGE_BASIC: The pubsub read for: projects/apache-beam-testing/subscriptions/wc_subscription_input6be791a6-cb5a-4473-9f83-265d4998f5da is configured to compute input data watermarks based on custom timestamp attribute . Cloud Dataflow has created an additional tracking subscription to do this, which will be cleaned up automatically. For details, see: https://cloud.google.com/dataflow/model/pubsub-io#timestamps-ids
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.614Z: JOB_MESSAGE_DEBUG: Workflow config is missing a default resource spec.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.645Z: JOB_MESSAGE_DEBUG: Adding StepResource setup and teardown to workflow graph.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.678Z: JOB_MESSAGE_DEBUG: Adding workflow start and stop steps.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:34:59.712Z: JOB_MESSAGE_DEBUG: Assigning stage ids.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:35:01.516Z: JOB_MESSAGE_DETAILED: Pub/Sub resources set up for topic 'projects/apache-beam-testing/topics/wc_topic_input6be791a6-cb5a-4473-9f83-265d4998f5da'.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:35:02.794Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:35:02.896Z: JOB_MESSAGE_BASIC: Starting 1 workers in us-central1-c...
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:35:02.935Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:35:25.245Z: JOB_MESSAGE_BASIC: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:35:28.120Z: JOB_MESSAGE_DETAILED: Autoscaling: Raised the number of workers to 1 so that the pipeline can catch up with its backlog and keep up with its input rate.
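
On the JOB_MESSAGE_DEBUG "Combiner lifting skipped for step group" above: the wordcount pipeline counts with GroupByKey followed by a Map(sum), so there is no combiner for the service to lift ahead of the shuffle. A small sketch of the combiner-based alternative that would be liftable (illustrative only, not a suggested change to the test):

import apache_beam as beam

with beam.Pipeline() as p:
    _ = (p
         | beam.Create([('a', 1), ('a', 1), ('b', 1)])
         | beam.CombinePerKey(sum)   # combiner-liftable: partial sums can run before the shuffle
         | beam.Map(print))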

> Task :sdks:python:test-suites:dataflow:py37:preCommitIT_streaming_V2
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:36:02.302Z: JOB_MESSAGE_DETAILED: Workers have started successfully.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:36:02.339Z: JOB_MESSAGE_DETAILED: Workers have started successfully.

> Task :sdks:python:test-suites:dataflow:py36:preCommitIT_streaming_V2
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:36:12.927Z: JOB_MESSAGE_DETAILED: Workers have started successfully.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T18:36:12.966Z: JOB_MESSAGE_DETAILED: Workers have started successfully.

> Task :sdks:python:test-suites:dataflow:py37:preCommitIT_streaming_V2
WARNING:apache_beam.runners.dataflow.dataflow_runner:Timing out on waiting for job 2020-10-15_11_34_37-5769165017427102767 after 363 seconds
DEBUG:google.auth._default:Checking None for explicit credentials as part of auth process...
DEBUG:google.auth._default:Checking Cloud SDK credentials as part of auth process...
DEBUG:google.auth._default:Cloud SDK credentials not found on disk; not using them
DEBUG:google.auth._default:Checking for App Engine runtime as part of auth process...
DEBUG:google.auth._default:No App Engine library was found so cannot authentication via App Engine Identity Credentials.
DEBUG:google.auth.transport._http_client:Making request: GET http://169.254.169.254
DEBUG:google.auth.transport._http_client:Making request: GET http://metadata.google.internal/computeMetadata/v1/project/project-id
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): metadata.google.internal:80
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/default/?recursive=true HTTP/1.1" 200 144
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token HTTP/1.1" 200 221
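
The "Timing out on waiting for job ... after 363 seconds" warning above appears to be the bounded wait the streaming IT uses rather than a failure (the test still reports ok below). A hedged sketch of that pattern; the placeholder pipeline and timeout value stand in for the real test wiring:

import apache_beam as beam

p = beam.Pipeline()  # placeholder pipeline standing in for the wordcount IT pipeline
_ = p | beam.Create([b'hello']) | beam.Map(print)

result = p.run()
# duration is given in milliseconds; ~6 minutes roughly matches the ~360 s waits logged here.
result.wait_until_finish(duration=6 * 60 * 1000)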

> Task :sdks:python:test-suites:dataflow:py36:preCommitIT_streaming_V2
WARNING:apache_beam.runners.dataflow.dataflow_runner:Timing out on waiting for job 2020-10-15_11_34_47-16300349462787766432 after 361 seconds
DEBUG:google.auth._default:Checking None for explicit credentials as part of auth process...
DEBUG:google.auth._default:Checking Cloud SDK credentials as part of auth process...
DEBUG:google.auth._default:Cloud SDK credentials not found on disk; not using them
DEBUG:google.auth._default:Checking for App Engine runtime as part of auth process...
DEBUG:google.auth._default:No App Engine library was found so cannot authentication via App Engine Identity Credentials.
DEBUG:google.auth.transport._http_client:Making request: GET http://169.254.169.254
DEBUG:google.auth.transport._http_client:Making request: GET http://metadata.google.internal/computeMetadata/v1/project/project-id
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): metadata.google.internal:80
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/default/?recursive=true HTTP/1.1" 200 144
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token HTTP/1.1" 200 221
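
The google.auth DEBUG lines above show the Application Default Credentials chain: explicit credentials, the Cloud SDK, App Engine, and finally the GCE metadata server at metadata.google.internal, which is where these workers obtain a token for the default service account. A minimal sketch of that flow (illustrative; it needs an environment with credentials and is not part of the test code):

import google.auth
from google.auth.transport.requests import Request

# Walks the same chain logged above; on a GCE/Dataflow worker this ends up
# asking the metadata server for the default service account's access token.
credentials, project_id = google.auth.default()
credentials.refresh(Request())
print(project_id, credentials.valid)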

> Task :sdks:python:test-suites:dataflow:py37:preCommitIT_streaming_V2
Worker logs: https://console.cloud.google.com/dataflow/jobs/us-central1/2020-10-15_11_34_37-5769165017427102767?project=apache-beam-testing
test_streaming_wordcount_it (apache_beam.examples.streaming_wordcount_it_test.StreamingWordCountIT) ... ok

----------------------------------------------------------------------
XML: nosetests-preCommitIT-df-py37.xml
----------------------------------------------------------------------
XML: <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/ws/src/sdks/python/nosetests.xml>
----------------------------------------------------------------------
Ran 1 test in 520.844s

OK

> Task :sdks:python:test-suites:dataflow:py36:preCommitIT_streaming_V2
Worker logs: https://console.cloud.google.com/dataflow/jobs/us-central1/2020-10-15_11_34_47-16300349462787766432?project=apache-beam-testing
test_streaming_wordcount_it (apache_beam.examples.streaming_wordcount_it_test.StreamingWordCountIT) ... ok

----------------------------------------------------------------------
XML: nosetests-preCommitIT-df-py36.xml
----------------------------------------------------------------------
XML: <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/ws/src/sdks/python/nosetests.xml>
----------------------------------------------------------------------
Ran 1 test in 528.670s

OK

> Task :sdks:python:test-suites:dataflow:preCommitIT_V2

FAILURE: Build completed with 2 failures.

1: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':sdks:python:test-suites:tox:py36:testPy36Cloud'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
==============================================================================

2: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':sdks:python:test-suites:tox:py38:testPy38CloudCoverage'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
==============================================================================

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.6.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 38m 26s
82 actionable tasks: 61 executed, 21 from cache

Publishing build scan...
https://gradle.com/s/zvtzxt64f4aii

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_PreCommit_Python_Cron #3367

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/3367/display/redirect>

Changes:


------------------------------------------
[...truncated 1.46 MB...]
                {
                  "@type": "FastPrimitivesCoder$QlpoOTFBWSZTWYQR6NMAAEDXwH8QgCEJAEBAv279AmAAIABqEqnqGgaABpoyNAGVHpGgMgDQBk0oR6IeoECBiqqU5NY23ndshzT2UPUOGrg42YPi9VyA8lbwwPJgtghxs5Qq1aWwExCDeMa0RHC2QigTCdizz1nx+LuSKcKEhCCPRpg=",
                  "component_encodings": [
                    {
                      "@type": "FastPrimitivesCoder$QlpoOTFBWSZTWYQR6NMAAEDXwH8QgCEJAEBAv279AmAAIABqEqnqGgaABpoyNAGVHpGgMgDQBk0oR6IeoECBiqqU5NY23ndshzT2UPUOGrg42YPi9VyA8lbwwPJgtghxs5Qq1aWwExCDeMa0RHC2QigTCdizz1nx+LuSKcKEhCCPRpg=",
                      "component_encodings": [],
                      "pipeline_proto_coder_id": "ref_Coder_FastPrimitivesCoder_3"
                    },
                    {
                      "@type": "FastPrimitivesCoder$QlpoOTFBWSZTWYQR6NMAAEDXwH8QgCEJAEBAv279AmAAIABqEqnqGgaABpoyNAGVHpGgMgDQBk0oR6IeoECBiqqU5NY23ndshzT2UPUOGrg42YPi9VyA8lbwwPJgtghxs5Qq1aWwExCDeMa0RHC2QigTCdizz1nx+LuSKcKEhCCPRpg=",
                      "component_encodings": [],
                      "pipeline_proto_coder_id": "ref_Coder_FastPrimitivesCoder_3"
                    }
                  ],
                  "is_pair_like": true,
                  "pipeline_proto_coder_id": "ref_Coder_FastPrimitivesCoder_3"
                },
                {
                  "@type": "kind:interval_window"
                }
              ],
              "is_wrapper": true
            },
            "output_name": "None",
            "user_name": "format.out"
          }
        ],
        "parallel_input": {
          "@type": "OutputReference",
          "output_name": "None",
          "step_name": "s7"
        },
        "serialized_fn": "ref_AppliedPTransform_format_10",
        "user_name": "format"
      }
    },
    {
      "kind": "ParallelDo",
      "name": "s9",
      "properties": {
        "display_data": [
          {
            "key": "fn",
            "label": "Transform Function",
            "namespace": "apache_beam.transforms.core.ParDo",
            "shortValue": "CallableWrapperDoFn",
            "type": "STRING",
            "value": "apache_beam.transforms.core.CallableWrapperDoFn"
          },
          {
            "key": "fn",
            "label": "Transform Function",
            "namespace": "apache_beam.transforms.core.CallableWrapperDoFn",
            "type": "STRING",
            "value": "<lambda>"
          }
        ],
        "non_parallel_inputs": {},
        "output_info": [
          {
            "encoding": {
              "@type": "kind:windowed_value",
              "component_encodings": [
                {
                  "@type": "kind:bytes"
                },
                {
                  "@type": "kind:interval_window"
                }
              ],
              "is_wrapper": true
            },
            "output_name": "None",
            "user_name": "encode.out"
          }
        ],
        "parallel_input": {
          "@type": "OutputReference",
          "output_name": "None",
          "step_name": "s8"
        },
        "serialized_fn": "ref_AppliedPTransform_encode_11",
        "user_name": "encode"
      }
    },
    {
      "kind": "ParallelWrite",
      "name": "s10",
      "properties": {
        "display_data": [],
        "encoding": {
          "@type": "kind:windowed_value",
          "component_encodings": [
            {
              "@type": "kind:bytes"
            },
            {
              "@type": "kind:global_window"
            }
          ],
          "is_wrapper": true
        },
        "format": "pubsub",
        "parallel_input": {
          "@type": "OutputReference",
          "output_name": "None",
          "step_name": "s9"
        },
        "pubsub_topic": "projects/apache-beam-testing/topics/wc_topic_outputfba06e15-02bb-4a19-8375-f27036fdb870",
        "user_name": "WriteToPubSub/Write/NativeWrite"
      }
    }
  ],
  "type": "JOB_TYPE_STREAMING"
}
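
The "is_pair_like" coder with two FastPrimitivesCoder component encodings in the job graph above is roughly what the Python SDK picks for an untyped key/value pair. A hedged way to inspect that locally (exact coder names and output vary by SDK version):

from apache_beam import coders
from apache_beam.typehints import typehints

pair_coder = coders.registry.get_coder(typehints.Tuple[typehints.Any, typehints.Any])
print(pair_coder)                        # a tuple coder built from two fallback coders
print(pair_coder.encode(('word', 1)))    # the bytes for one key/value element
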
INFO:apache_beam.runners.dataflow.internal.apiclient:Create job: <Job
 createTime: '2020-10-15T12:30:01.712028Z'
 currentStateTime: '1970-01-01T00:00:00Z'
 id: '2020-10-15_05_30_00-2139190545619703801'
 location: 'us-central1'
 name: 'beamapp-jenkins-1015122953-352449'
 projectId: 'apache-beam-testing'
 stageStates: []
 startTime: '2020-10-15T12:30:01.712028Z'
 steps: []
 tempFiles: []
 type: TypeValueValuesEnum(JOB_TYPE_STREAMING, 2)>
INFO:apache_beam.runners.dataflow.internal.apiclient:Created job with id: [2020-10-15_05_30_00-2139190545619703801]
INFO:apache_beam.runners.dataflow.internal.apiclient:Submitted job: 2020-10-15_05_30_00-2139190545619703801
INFO:apache_beam.runners.dataflow.internal.apiclient:To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2020-10-15_05_30_00-2139190545619703801?project=apache-beam-testing
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2020-10-15_05_30_00-2139190545619703801 is in state JOB_STATE_RUNNING
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:00.298Z: JOB_MESSAGE_DETAILED: Autoscaling was automatically enabled for job 2020-10-15_05_30_00-2139190545619703801.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:00.298Z: JOB_MESSAGE_DETAILED: Autoscaling is enabled for job 2020-10-15_05_30_00-2139190545619703801. The number of workers will be between 1 and 100.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:00.298Z: JOB_MESSAGE_WARNING: Autoscaling is enabled for Dataflow Streaming Engine. Workers will scale between 1 and 100 unless maxNumWorkers is specified.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:11.369Z: JOB_MESSAGE_BASIC: Worker configuration: n1-standard-2 in us-central1-c.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:12.837Z: JOB_MESSAGE_DETAILED: Expanding SplittableParDo operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:12.876Z: JOB_MESSAGE_DETAILED: Expanding CollectionToSingleton operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:12.941Z: JOB_MESSAGE_DETAILED: Expanding CoGroupByKey operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:12.978Z: JOB_MESSAGE_DEBUG: Combiner lifting skipped for step group: GroupByKey not followed by a combiner.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.013Z: JOB_MESSAGE_DETAILED: Expanding SplittableProcessKeyed operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.066Z: JOB_MESSAGE_DETAILED: Expanding GroupByKey operations into streaming Read/Write steps
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.136Z: JOB_MESSAGE_DETAILED: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.205Z: JOB_MESSAGE_DEBUG: Annotating graph with Autotuner information.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.237Z: JOB_MESSAGE_DETAILED: Fusing adjacent ParDo, Read, Write, and Flatten operations
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.270Z: JOB_MESSAGE_DETAILED: Fusing consumer decode into ReadFromPubSub/Read
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.306Z: JOB_MESSAGE_DETAILED: Fusing consumer split into decode
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.328Z: JOB_MESSAGE_DETAILED: Fusing consumer pair_with_one into split
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.348Z: JOB_MESSAGE_DETAILED: Fusing consumer WindowInto(WindowIntoFn) into pair_with_one
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.371Z: JOB_MESSAGE_DETAILED: Fusing consumer group/WriteStream into WindowInto(WindowIntoFn)
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.407Z: JOB_MESSAGE_DETAILED: Fusing consumer group/MergeBuckets into group/ReadStream
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.429Z: JOB_MESSAGE_DETAILED: Fusing consumer count into group/MergeBuckets
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.463Z: JOB_MESSAGE_DETAILED: Fusing consumer format into count
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.488Z: JOB_MESSAGE_DETAILED: Fusing consumer encode into format
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.518Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToPubSub/Write/NativeWrite into encode
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.561Z: JOB_MESSAGE_BASIC: The pubsub read for: projects/apache-beam-testing/subscriptions/wc_subscription_inputfba06e15-02bb-4a19-8375-f27036fdb870 is configured to compute input data watermarks based on custom timestamp attribute . Cloud Dataflow has created an additional tracking subscription to do this, which will be cleaned up automatically. For details, see: https://cloud.google.com/dataflow/model/pubsub-io#timestamps-ids
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.599Z: JOB_MESSAGE_DEBUG: Workflow config is missing a default resource spec.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.628Z: JOB_MESSAGE_DEBUG: Adding StepResource setup and teardown to workflow graph.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.664Z: JOB_MESSAGE_DEBUG: Adding workflow start and stop steps.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:13.697Z: JOB_MESSAGE_DEBUG: Assigning stage ids.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:14.657Z: JOB_MESSAGE_DETAILED: Pub/Sub resources set up for topic 'projects/apache-beam-testing/topics/wc_topic_inputfba06e15-02bb-4a19-8375-f27036fdb870'.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:15.947Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:16.009Z: JOB_MESSAGE_BASIC: Starting 1 workers in us-central1-c...
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:16.168Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:41.643Z: JOB_MESSAGE_BASIC: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:30:44.303Z: JOB_MESSAGE_DETAILED: Autoscaling: Raised the number of workers to 1 so that the pipeline can catch up with its backlog and keep up with its input rate.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:31:24.356Z: JOB_MESSAGE_DETAILED: Workers have started successfully.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T12:31:24.454Z: JOB_MESSAGE_DETAILED: Workers have started successfully.

> Task :sdks:python:test-suites:dataflow:py37:preCommitIT_streaming_V2
WARNING:apache_beam.runners.dataflow.dataflow_runner:Timing out on waiting for job 2020-10-15_05_28_23-4143253432510592603 after 360 seconds
DEBUG:google.auth._default:Checking None for explicit credentials as part of auth process...
DEBUG:google.auth._default:Checking Cloud SDK credentials as part of auth process...
DEBUG:google.auth._default:Cloud SDK credentials not found on disk; not using them
DEBUG:google.auth._default:Checking for App Engine runtime as part of auth process...
DEBUG:google.auth._default:No App Engine library was found so cannot authentication via App Engine Identity Credentials.
DEBUG:google.auth.transport._http_client:Making request: GET http://169.254.169.254
DEBUG:google.auth.transport._http_client:Making request: GET http://metadata.google.internal/computeMetadata/v1/project/project-id
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): metadata.google.internal:80
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/default/?recursive=true HTTP/1.1" 200 144
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token HTTP/1.1" 200 221

> Task :sdks:python:test-suites:dataflow:py36:preCommitIT_streaming_V2
WARNING:apache_beam.runners.dataflow.dataflow_runner:Timing out on waiting for job 2020-10-15_05_30_00-2139190545619703801 after 361 seconds
DEBUG:google.auth._default:Checking None for explicit credentials as part of auth process...
DEBUG:google.auth._default:Checking Cloud SDK credentials as part of auth process...
DEBUG:google.auth._default:Cloud SDK credentials not found on disk; not using them
DEBUG:google.auth._default:Checking for App Engine runtime as part of auth process...
DEBUG:google.auth._default:No App Engine library was found so cannot authentication via App Engine Identity Credentials.
DEBUG:google.auth.transport._http_client:Making request: GET http://169.254.169.254
DEBUG:google.auth.transport._http_client:Making request: GET http://metadata.google.internal/computeMetadata/v1/project/project-id
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): metadata.google.internal:80
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/default/?recursive=true HTTP/1.1" 200 144
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token HTTP/1.1" 200 221

> Task :sdks:python:test-suites:dataflow:py37:preCommitIT_streaming_V2
Worker logs: https://console.cloud.google.com/dataflow/jobs/us-central1/2020-10-15_05_28_23-4143253432510592603?project=apache-beam-testing
test_streaming_wordcount_it (apache_beam.examples.streaming_wordcount_it_test.StreamingWordCountIT) ... ok

----------------------------------------------------------------------
XML: nosetests-preCommitIT-df-py37.xml
----------------------------------------------------------------------
XML: <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/ws/src/sdks/python/nosetests.xml>
----------------------------------------------------------------------
Ran 1 test in 628.089s

OK

> Task :sdks:python:test-suites:dataflow:py36:preCommitIT_streaming_V2
Worker logs: https://console.cloud.google.com/dataflow/jobs/us-central1/2020-10-15_05_30_00-2139190545619703801?project=apache-beam-testing
test_streaming_wordcount_it (apache_beam.examples.streaming_wordcount_it_test.StreamingWordCountIT) ... ok

----------------------------------------------------------------------
XML: nosetests-preCommitIT-df-py36.xml
----------------------------------------------------------------------
XML: <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/ws/src/sdks/python/nosetests.xml>
----------------------------------------------------------------------
Ran 1 test in 583.604s

OK

> Task :sdks:python:test-suites:dataflow:preCommitIT_V2

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:python:test-suites:tox:py38:testPy38CloudCoverage'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.6.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 39m 4s
82 actionable tasks: 60 executed, 22 from cache

Publishing build scan...
https://gradle.com/s/xbm2ol52rosno

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_PreCommit_Python_Cron #3366

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/3366/display/redirect?page=changes>

Changes:

[Kenneth Knowles] Disable unsupported categories for Dataflow streaming

[noreply] [BEAM-10967] adding validate runner for Dataflow runner v2 to Java SDK

[noreply] [BEAM-10959] Fix circle buffer. (#13123)


------------------------------------------
[...truncated 1.45 MB...]
                    }
                  ],
                  "is_pair_like": true,
                  "pipeline_proto_coder_id": "ref_Coder_FastPrimitivesCoder_3"
                },
                {
                  "@type": "kind:interval_window"
                }
              ],
              "is_wrapper": true
            },
            "output_name": "None",
            "user_name": "format.out"
          }
        ],
        "parallel_input": {
          "@type": "OutputReference",
          "output_name": "None",
          "step_name": "s7"
        },
        "serialized_fn": "ref_AppliedPTransform_format_10",
        "user_name": "format"
      }
    },
    {
      "kind": "ParallelDo",
      "name": "s9",
      "properties": {
        "display_data": [
          {
            "key": "fn",
            "label": "Transform Function",
            "namespace": "apache_beam.transforms.core.ParDo",
            "shortValue": "CallableWrapperDoFn",
            "type": "STRING",
            "value": "apache_beam.transforms.core.CallableWrapperDoFn"
          },
          {
            "key": "fn",
            "label": "Transform Function",
            "namespace": "apache_beam.transforms.core.CallableWrapperDoFn",
            "type": "STRING",
            "value": "<lambda>"
          }
        ],
        "non_parallel_inputs": {},
        "output_info": [
          {
            "encoding": {
              "@type": "kind:windowed_value",
              "component_encodings": [
                {
                  "@type": "kind:bytes"
                },
                {
                  "@type": "kind:interval_window"
                }
              ],
              "is_wrapper": true
            },
            "output_name": "None",
            "user_name": "encode.out"
          }
        ],
        "parallel_input": {
          "@type": "OutputReference",
          "output_name": "None",
          "step_name": "s8"
        },
        "serialized_fn": "ref_AppliedPTransform_encode_11",
        "user_name": "encode"
      }
    },
    {
      "kind": "ParallelWrite",
      "name": "s10",
      "properties": {
        "display_data": [],
        "encoding": {
          "@type": "kind:windowed_value",
          "component_encodings": [
            {
              "@type": "kind:bytes"
            },
            {
              "@type": "kind:global_window"
            }
          ],
          "is_wrapper": true
        },
        "format": "pubsub",
        "parallel_input": {
          "@type": "OutputReference",
          "output_name": "None",
          "step_name": "s9"
        },
        "pubsub_topic": "projects/apache-beam-testing/topics/wc_topic_output3a195d8d-1789-425f-a6b1-2c9c93c184be",
        "user_name": "WriteToPubSub/Write/NativeWrite"
      }
    }
  ],
  "type": "JOB_TYPE_STREAMING"
}
INFO:apache_beam.runners.dataflow.internal.apiclient:Create job: <Job
 createTime: '2020-10-15T06:29:03.343172Z'
 currentStateTime: '1970-01-01T00:00:00Z'
 id: '2020-10-14_23_29_02-16467146955608389949'
 location: 'us-central1'
 name: 'beamapp-jenkins-1015062854-547453'
 projectId: 'apache-beam-testing'
 stageStates: []
 startTime: '2020-10-15T06:29:03.343172Z'
 steps: []
 tempFiles: []
 type: TypeValueValuesEnum(JOB_TYPE_STREAMING, 2)>
INFO:apache_beam.runners.dataflow.internal.apiclient:Created job with id: [2020-10-14_23_29_02-16467146955608389949]
INFO:apache_beam.runners.dataflow.internal.apiclient:Submitted job: 2020-10-14_23_29_02-16467146955608389949
INFO:apache_beam.runners.dataflow.internal.apiclient:To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2020-10-14_23_29_02-16467146955608389949?project=apache-beam-testing
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2020-10-14_23_29_02-16467146955608389949 is in state JOB_STATE_RUNNING
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:02.030Z: JOB_MESSAGE_DETAILED: Autoscaling was automatically enabled for job 2020-10-14_23_29_02-16467146955608389949.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:02.030Z: JOB_MESSAGE_WARNING: Autoscaling is enabled for Dataflow Streaming Engine. Workers will scale between 1 and 100 unless maxNumWorkers is specified.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:02.030Z: JOB_MESSAGE_DETAILED: Autoscaling is enabled for job 2020-10-14_23_29_02-16467146955608389949. The number of workers will be between 1 and 100.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:07.137Z: JOB_MESSAGE_BASIC: Worker configuration: n1-standard-2 in us-central1-c.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:10.691Z: JOB_MESSAGE_DETAILED: Expanding SplittableParDo operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:10.714Z: JOB_MESSAGE_DETAILED: Expanding CollectionToSingleton operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:10.825Z: JOB_MESSAGE_DETAILED: Expanding CoGroupByKey operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:10.863Z: JOB_MESSAGE_DEBUG: Combiner lifting skipped for step group: GroupByKey not followed by a combiner.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:10.900Z: JOB_MESSAGE_DETAILED: Expanding SplittableProcessKeyed operations into optimizable parts.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:10.930Z: JOB_MESSAGE_DETAILED: Expanding GroupByKey operations into streaming Read/Write steps
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:10.978Z: JOB_MESSAGE_DETAILED: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.032Z: JOB_MESSAGE_DEBUG: Annotating graph with Autotuner information.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.068Z: JOB_MESSAGE_DETAILED: Fusing adjacent ParDo, Read, Write, and Flatten operations
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.103Z: JOB_MESSAGE_DETAILED: Fusing consumer decode into ReadFromPubSub/Read
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.136Z: JOB_MESSAGE_DETAILED: Fusing consumer split into decode
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.172Z: JOB_MESSAGE_DETAILED: Fusing consumer pair_with_one into split
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.206Z: JOB_MESSAGE_DETAILED: Fusing consumer WindowInto(WindowIntoFn) into pair_with_one
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.239Z: JOB_MESSAGE_DETAILED: Fusing consumer group/WriteStream into WindowInto(WindowIntoFn)
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.273Z: JOB_MESSAGE_DETAILED: Fusing consumer group/MergeBuckets into group/ReadStream
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.308Z: JOB_MESSAGE_DETAILED: Fusing consumer count into group/MergeBuckets
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.343Z: JOB_MESSAGE_DETAILED: Fusing consumer format into count
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.378Z: JOB_MESSAGE_DETAILED: Fusing consumer encode into format
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.413Z: JOB_MESSAGE_DETAILED: Fusing consumer WriteToPubSub/Write/NativeWrite into encode
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.458Z: JOB_MESSAGE_BASIC: The pubsub read for: projects/apache-beam-testing/subscriptions/wc_subscription_input3a195d8d-1789-425f-a6b1-2c9c93c184be is configured to compute input data watermarks based on custom timestamp attribute . Cloud Dataflow has created an additional tracking subscription to do this, which will be cleaned up automatically. For details, see: https://cloud.google.com/dataflow/model/pubsub-io#timestamps-ids
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.493Z: JOB_MESSAGE_DEBUG: Workflow config is missing a default resource spec.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.526Z: JOB_MESSAGE_DEBUG: Adding StepResource setup and teardown to workflow graph.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.562Z: JOB_MESSAGE_DEBUG: Adding workflow start and stop steps.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:11.595Z: JOB_MESSAGE_DEBUG: Assigning stage ids.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:12.358Z: JOB_MESSAGE_DETAILED: Pub/Sub resources set up for topic 'projects/apache-beam-testing/topics/wc_topic_input3a195d8d-1789-425f-a6b1-2c9c93c184be'.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:13.478Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:13.511Z: JOB_MESSAGE_BASIC: Starting 1 workers in us-central1-c...
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:13.538Z: JOB_MESSAGE_DEBUG: Starting worker pool setup.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:23.059Z: JOB_MESSAGE_BASIC: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete

> Task :sdks:python:test-suites:dataflow:py36:preCommitIT_streaming_V2
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:18.130Z: JOB_MESSAGE_DETAILED: Autoscaling: Raised the number of workers to 1 so that the pipeline can catch up with its backlog and keep up with its input rate.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:21.648Z: JOB_MESSAGE_BASIC: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
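
The JOB_MESSAGE_BASIC above about Dataflow-created metric descriptors concerns user-defined metrics. A minimal, hedged example of defining such a counter in a DoFn; the namespace and metric names are placeholders, not the test's code:

import apache_beam as beam
from apache_beam.metrics import Metrics

class CountWords(beam.DoFn):
    def __init__(self):
        # User counter; on Dataflow this surfaces as a user metric like the ones described above.
        self.word_counter = Metrics.counter('wordcount_example', 'words_seen')

    def process(self, element):
        self.word_counter.inc()
        yield element

with beam.Pipeline() as p:
    _ = p | beam.Create(['a', 'b']) | beam.ParDo(CountWords()) | beam.Map(print)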

> Task :sdks:python:test-suites:dataflow:py37:preCommitIT_streaming_V2
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:29:37.947Z: JOB_MESSAGE_DETAILED: Autoscaling: Raised the number of workers to 1 so that the pipeline can catch up with its backlog and keep up with its input rate.

> Task :sdks:python:test-suites:dataflow:py36:preCommitIT_streaming_V2
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:30:00.148Z: JOB_MESSAGE_DETAILED: Workers have started successfully.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:30:00.182Z: JOB_MESSAGE_DETAILED: Workers have started successfully.

> Task :sdks:python:test-suites:dataflow:py37:preCommitIT_streaming_V2
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:30:16.922Z: JOB_MESSAGE_DETAILED: Workers have started successfully.
INFO:apache_beam.runners.dataflow.dataflow_runner:2020-10-15T06:30:16.947Z: JOB_MESSAGE_DETAILED: Workers have started successfully.

> Task :sdks:python:test-suites:dataflow:py36:preCommitIT_streaming_V2
WARNING:apache_beam.runners.dataflow.dataflow_runner:Timing out on waiting for job 2020-10-14_23_28_46-15513940426167409367 after 361 seconds
DEBUG:google.auth._default:Checking None for explicit credentials as part of auth process...
DEBUG:google.auth._default:Checking Cloud SDK credentials as part of auth process...
DEBUG:google.auth._default:Cloud SDK credentials not found on disk; not using them
DEBUG:google.auth._default:Checking for App Engine runtime as part of auth process...
DEBUG:google.auth._default:No App Engine library was found so cannot authentication via App Engine Identity Credentials.
DEBUG:google.auth.transport._http_client:Making request: GET http://169.254.169.254
DEBUG:google.auth.transport._http_client:Making request: GET http://metadata.google.internal/computeMetadata/v1/project/project-id
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): metadata.google.internal:80
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/default/?recursive=true HTTP/1.1" 200 144
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token HTTP/1.1" 200 221

> Task :sdks:python:test-suites:dataflow:py37:preCommitIT_streaming_V2
WARNING:apache_beam.runners.dataflow.dataflow_runner:Timing out on waiting for job 2020-10-14_23_29_02-16467146955608389949 after 360 seconds
DEBUG:google.auth._default:Checking None for explicit credentials as part of auth process...
DEBUG:google.auth._default:Checking Cloud SDK credentials as part of auth process...
DEBUG:google.auth._default:Cloud SDK credentials not found on disk; not using them
DEBUG:google.auth._default:Checking for App Engine runtime as part of auth process...
DEBUG:google.auth._default:No App Engine library was found so cannot authentication via App Engine Identity Credentials.
DEBUG:google.auth.transport._http_client:Making request: GET http://169.254.169.254
DEBUG:google.auth.transport._http_client:Making request: GET http://metadata.google.internal/computeMetadata/v1/project/project-id
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): metadata.google.internal:80
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/default/?recursive=true HTTP/1.1" 200 144
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token
DEBUG:urllib3.connectionpool:http://metadata.google.internal:80 "GET /computeMetadata/v1/instance/service-accounts/844138762903-compute@developer.gserviceaccount.com/token HTTP/1.1" 200 221

> Task :sdks:python:test-suites:dataflow:py36:preCommitIT_streaming_V2
Worker logs: https://console.cloud.google.com/dataflow/jobs/us-central1/2020-10-14_23_28_46-15513940426167409367?project=apache-beam-testing
test_streaming_wordcount_it (apache_beam.examples.streaming_wordcount_it_test.StreamingWordCountIT) ... ok

----------------------------------------------------------------------
XML: nosetests-preCommitIT-df-py36.xml
----------------------------------------------------------------------
XML: <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/ws/src/sdks/python/nosetests.xml>
----------------------------------------------------------------------
Ran 1 test in 490.860s

OK

> Task :sdks:python:test-suites:dataflow:py37:preCommitIT_streaming_V2
Worker logs: https://console.cloud.google.com/dataflow/jobs/us-central1/2020-10-14_23_29_02-16467146955608389949?project=apache-beam-testing
test_streaming_wordcount_it (apache_beam.examples.streaming_wordcount_it_test.StreamingWordCountIT) ... ok

----------------------------------------------------------------------
XML: nosetests-preCommitIT-df-py37.xml
----------------------------------------------------------------------
XML: <https://ci-beam.apache.org/job/beam_PreCommit_Python_Cron/ws/src/sdks/python/nosetests.xml>
----------------------------------------------------------------------
Ran 1 test in 498.960s

OK

> Task :sdks:python:test-suites:dataflow:preCommitIT_V2

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:python:test-suites:tox:py36:testPy36Cloud'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.6.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 36m 13s
82 actionable tasks: 61 executed, 21 from cache

Publishing build scan...
https://gradle.com/s/2rhjp3edxihwu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
