Posted to builds@beam.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2020/07/06 06:06:36 UTC

Build failed in Jenkins: beam_PreCommit_Python2_PVR_Flink_Cron #1100

See <https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/1100/display/redirect>

Changes:


------------------------------------------
[...truncated 32.99 KB...]
Resolving github.com/pierrec/lz4: commit='ed8d4cc3b461464e69798080a0092bd028910298', urls=[https://github.com/pierrec/lz4.git, git@github.com:pierrec/lz4.git]
Resolving github.com/pierrec/xxHash: commit='a0006b13c722f7f12368c00a3d3c2ae8a999a0c6', urls=[https://github.com/pierrec/xxHash.git, git@github.com:pierrec/xxHash.git]
Resolving github.com/pkg/errors: commit='30136e27e2ac8d167177e8a583aa4c3fea5be833', urls=[https://github.com/pkg/errors.git, git@github.com:pkg/errors.git]
Resolving github.com/pkg/sftp: commit='22e9c1ccc02fc1b9fa3264572e49109b68a86947', urls=[https://github.com/pkg/sftp.git, git@github.com:pkg/sftp.git]
Resolving github.com/prometheus/client_golang: commit='9bb6ab929dcbe1c8393cd9ef70387cb69811bd1c', urls=[https://github.com/prometheus/client_golang.git, git@github.com:prometheus/client_golang.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving github.com/prometheus/procfs: commit='cb4147076ac75738c9a7d279075a253c0cc5acbd', urls=[https://github.com/prometheus/procfs.git, git@github.com:prometheus/procfs.git]
Resolving github.com/rcrowley/go-metrics: commit='8732c616f52954686704c8645fe1a9d59e9df7c1', urls=[https://github.com/rcrowley/go-metrics.git, git@github.com:rcrowley/go-metrics.git]
Resolving github.com/cpuguy83/go-md2man: commit='dc9f53734905c233adfc09fd4f063dce63ce3daf', urls=[https://github.com/cpuguy83/go-md2man.git, git@github.com:cpuguy83/go-md2man.git]
Resolving cached github.com/cpuguy83/go-md2man: commit='dc9f53734905c233adfc09fd4f063dce63ce3daf', urls=[https://github.com/cpuguy83/go-md2man.git, git@github.com:cpuguy83/go-md2man.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving github.com/spf13/afero: commit='bb8f1927f2a9d3ab41c9340aa034f6b803f4359c', urls=[https://github.com/spf13/afero.git, git@github.com:spf13/afero.git]
Resolving github.com/spf13/cast: commit='acbeb36b902d72a7a4c18e8f3241075e7ab763e4', urls=[https://github.com/spf13/cast.git, git@github.com:spf13/cast.git]
Resolving github.com/spf13/cobra: commit='93959269ad99e80983c9ba742a7e01203a4c0e4f', urls=[https://github.com/spf13/cobra.git, git@github.com:spf13/cobra.git]
Resolving github.com/spf13/jwalterweatherman: commit='7c0cea34c8ece3fbeb2b27ab9b59511d360fb394', urls=[https://github.com/spf13/jwalterweatherman.git, git@github.com:spf13/jwalterweatherman.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving github.com/spf13/viper: commit='aafc9e6bc7b7bb53ddaa75a5ef49a17d6e654be5', urls=[https://github.com/spf13/viper.git, git@github.com:spf13/viper.git]
Resolving github.com/stathat/go: commit='74669b9f388d9d788c97399a0824adbfee78400e', urls=[https://github.com/stathat/go.git, git@github.com:stathat/go.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving github.com/xordataexchange/crypt: commit='b2862e3d0a775f18c7cfe02273500ae307b61218', urls=[https://github.com/xordataexchange/crypt.git, git@github.com:xordataexchange/crypt.git]
Resolving go.opencensus.io: commit='aa2b39d1618ef56ba156f27cfcdae9042f68f0bc', urls=[https://github.com/census-instrumentation/opencensus-go]
Resolving golang.org/x/crypto: commit='d9133f5469342136e669e85192a26056b587f503', urls=[https://go.googlesource.com/crypto]
Resolving golang.org/x/debug: commit='95515998a8a4bd7448134b2cb5971dbeb12e0b77', urls=[https://go.googlesource.com/debug]

> Task :sdks:java:core:shadowJar

> Task :sdks:python:test-suites:portable:py2:setupVirtualenv
Collecting pluggy<1,>=0.3.0
  Using cached pluggy-0.13.1-py2.py3-none-any.whl (18 kB)
Collecting py<2,>=1.4.17
  Using cached py-1.9.0-py2.py3-none-any.whl (99 kB)
Collecting toml>=0.9.4
  Using cached toml-0.10.1-py2.py3-none-any.whl (19 kB)
Requirement already satisfied, skipping upgrade: setuptools>=30.0.0 in <https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src/build/gradleenv/1866363813/lib/python2.7/site-packages> (from tox==3.11.1) (44.1.1)
Collecting grpcio>=1.14.2
  Using cached grpcio-1.30.0-cp27-cp27mu-manylinux2010_x86_64.whl (3.0 MB)
Collecting protobuf>=3.5.0.post1
  Using cached protobuf-3.12.2-cp27-cp27mu-manylinux1_x86_64.whl (1.3 MB)
Collecting importlib-resources>=1.0; python_version < "3.7"
  Using cached importlib_resources-3.0.0-py2.py3-none-any.whl (23 kB)
Collecting pathlib2<3,>=2.3.3; python_version < "3.4" and sys_platform != "win32"
  Using cached pathlib2-2.3.5-py2.py3-none-any.whl (18 kB)
Collecting appdirs<2,>=1.4.3
  Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting importlib-metadata<2,>=0.12; python_version < "3.8"
  Using cached importlib_metadata-1.7.0-py2.py3-none-any.whl (31 kB)
Collecting distlib<1,>=0.3.0
  Using cached distlib-0.3.1-py2.py3-none-any.whl (335 kB)
Collecting enum34>=1.0.4; python_version < "3.4"
  Using cached enum34-1.1.10-py2-none-any.whl (11 kB)
Collecting futures>=2.2.0; python_version < "3.2"
  Using cached futures-3.3.0-py2-none-any.whl (16 kB)
Collecting singledispatch; python_version < "3.4"
  Using cached singledispatch-3.4.0.3-py2.py3-none-any.whl (12 kB)
Collecting contextlib2; python_version < "3"
  Using cached contextlib2-0.6.0.post1-py2.py3-none-any.whl (9.8 kB)
Collecting zipp>=0.4; python_version < "3.8"
  Using cached zipp-1.2.0-py2.py3-none-any.whl (4.8 kB)
Collecting typing; python_version < "3.5"
  Using cached typing-3.7.4.1-py2-none-any.whl (26 kB)
Processing /home/jenkins/.cache/pip/wheels/58/2c/26/52406f7d1f19bcc47a6fbd1037a5f293492f5cf1d58c539edb/scandir-1.10.0-cp27-cp27mu-linux_x86_64.whl
Collecting configparser>=3.5; python_version < "3"
  Using cached configparser-4.0.2-py2.py3-none-any.whl (22 kB)
Installing collected packages: six, filelock, singledispatch, scandir, pathlib2, contextlib2, zipp, typing, importlib-resources, appdirs, configparser, importlib-metadata, distlib, virtualenv, pluggy, py, toml, tox, enum34, futures, grpcio, protobuf, grpcio-tools, future, mypy-protobuf

> Task :sdks:java:extensions:google-cloud-platform-core:compileJava FROM-CACHE
> Task :sdks:java:extensions:google-cloud-platform-core:classes UP-TO-DATE
> Task :runners:core-construction-java:compileJava FROM-CACHE
> Task :runners:core-construction-java:classes UP-TO-DATE
> Task :sdks:java:extensions:google-cloud-platform-core:jar

> Task :sdks:java:container:pullLicenses
> Task :buildSrc:spotlessGroovy
> Task :buildSrc:spotlessGroovyCheck
> Task :buildSrc:spotlessGroovyGradle
> Task :buildSrc:spotlessGroovyGradleCheck
> Task :buildSrc:spotlessCheck
> Task :buildSrc:pluginUnderTestMetadata
> Task :buildSrc:compileTestJava NO-SOURCE
> Task :buildSrc:compileTestGroovy NO-SOURCE
> Task :buildSrc:processTestResources NO-SOURCE
> Task :buildSrc:testClasses UP-TO-DATE
> Task :buildSrc:test NO-SOURCE
> Task :buildSrc:validateTaskProperties
> Task :buildSrc:check
> Task :buildSrc:build
Configuration on demand is an incubating feature.

> Task :sdks:java:fn-execution:compileJava FROM-CACHE
> Task :sdks:java:fn-execution:classes UP-TO-DATE
> Task :vendor:sdks-java-extensions-protobuf:compileJava FROM-CACHE
> Task :vendor:sdks-java-extensions-protobuf:classes UP-TO-DATE
> Task :sdks:java:fn-execution:jar
> Task :sdks:java:extensions:protobuf:extractIncludeProto
> Task :sdks:java:extensions:protobuf:generateProto NO-SOURCE
> Task :runners:core-construction-java:jar
> Task :sdks:java:extensions:protobuf:compileJava FROM-CACHE
> Task :sdks:java:extensions:protobuf:classes UP-TO-DATE
> Task :sdks:java:extensions:protobuf:jar
> Task :vendor:sdks-java-extensions-protobuf:shadowJar
> Task :runners:core-java:compileJava FROM-CACHE
> Task :runners:core-java:classes UP-TO-DATE
> Task :runners:core-java:jar
> Task :sdks:java:harness:compileJava FROM-CACHE
> Task :sdks:java:harness:classes UP-TO-DATE
> Task :sdks:java:harness:jar

> Task :sdks:go:resolveBuildDependencies
Resolving golang.org/x/net: commit='2fb46b16b8dda405028c50f7c7f0f9dd1fa6bfb1', urls=[https://go.googlesource.com/net]
Resolving golang.org/x/oauth2: commit='a032972e28060ca4f5644acffae3dfc268cc09db', urls=[https://go.googlesource.com/oauth2]
Resolving golang.org/x/sync: commit='fd80eb99c8f653c847d294a001bdf2a3a6f768f5', urls=[https://go.googlesource.com/sync]
Resolving golang.org/x/sys: commit='37707fdb30a5b38865cfb95e5aab41707daec7fd', urls=[https://go.googlesource.com/sys]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]

> Task :sdks:python:test-suites:portable:py2:setupVirtualenv
Successfully installed appdirs-1.4.4 configparser-4.0.2 contextlib2-0.6.0.post1 distlib-0.3.1 enum34-1.1.10 filelock-3.0.12 future-0.18.2 futures-3.3.0 grpcio-1.30.0 grpcio-tools-1.14.2 importlib-metadata-1.7.0 importlib-resources-3.0.0 mypy-protobuf-1.18 pathlib2-2.3.5 pluggy-0.13.1 protobuf-3.12.2 py-1.9.0 scandir-1.10.0 singledispatch-3.4.0.3 six-1.15.0 toml-0.10.1 tox-3.11.1 typing-3.7.4.1 virtualenv-20.0.25 zipp-1.2.0

> Task :sdks:java:container:pullLicenses

> Configure project :sdks:java:container
Found go 1.12 in /usr/bin/go, use it.

> Configure project :sdks:go
Found go 1.12 in /usr/bin/go, use it.

> Configure project :sdks:go:container
Found go 1.12 in /usr/bin/go, use it.

> Configure project :sdks:go:examples
Found go 1.12 in /usr/bin/go, use it.

> Configure project :sdks:go:test
Found go 1.12 in /usr/bin/go, use it.

> Configure project :sdks:python:container
Found go 1.12 in /usr/bin/go, use it.
The message received from the daemon indicates that the daemon has disappeared.
Build request sent: Build{id=0b71288d-79f9-4f82-b500-60fdac7e3507, currentDir=<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src>}
Attempting to read last messages from the daemon log...
Daemon pid: 24457
  log file: /home/jenkins/.gradle/daemon/5.2.1/daemon-24457.out.log
----- Last  20 lines from daemon log file - daemon-24457.out.log -----
06:05:00.356 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Lock acquired on daemon addresses registry.
06:05:00.358 [DEBUG] [org.gradle.cache.internal.DefaultFileLockManager] Releasing lock on daemon addresses registry.
06:05:00.358 [DEBUG] [org.gradle.launcher.daemon.server.DaemonStateCoordinator] resetting idle timer
06:05:00.358 [DEBUG] [org.gradle.launcher.daemon.server.DaemonStateCoordinator] daemon is running. Sleeping until state changes.
06:05:00.359 [INFO] [org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy] Daemon is about to start building Build{id=0b71288d-79f9-4f82-b500-60fdac7e3507, currentDir=<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src>}. Dispatching build started information...
06:05:00.359 [DEBUG] [org.gradle.launcher.daemon.server.SynchronizedDispatchConnection] thread 128: dispatching class org.gradle.launcher.daemon.protocol.BuildStarted
06:05:00.360 [DEBUG] [org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment] Configuring env variables: {PATH=/home/jenkins/tools/java/latest1.8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games, RUN_DISPLAY_URL=https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/1100/display/redirect, HUDSON_HOME=/home/jenkins/jenkins-home, RUN_CHANGES_DISPLAY_URL=https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/1100/display/redirect?page=changes, JOB_URL=https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/, HUDSON_COOKIE=84d4c93a-9b41-4365-8c9d-e7b2b73a2aaf, SLACK_WEBHOOK_URL=****, MAIL=/var/mail/jenkins, JENKINS_SERVER_COOKIE=14925adb602d4986, LOGNAME=jenkins, PWD=<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src,> JENKINS_URL=https://ci-beam.apache.org/, SHELL=/bin/bash, BUILD_TAG=jenkins-beam_PreCommit_Python2_PVR_Flink_Cron-1100, ROOT_BUILD_CAUSE=TIMERTRIGGER, BUILD_CAUSE_TIMERTRIGGER=true, OLDPWD=<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src,> GIT_CHECKOUT_DIR=src, JENKINS_HOME=/home/jenkins/jenkins-home, sha1=master, NODE_NAME=apache-beam-jenkins-7, BUILD_DISPLAY_NAME=#1100, JOB_DISPLAY_URL=https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/display/redirect, GIT_BRANCH=origin/master, SHLVL=1, GIT_PREVIOUS_COMMIT=4cfd4bf73ffb239ec8c5efaab5ca6c9671e56e31, JAVA_HOME=/home/jenkins/tools/java/latest1.8, BUILD_ID=1100, LANG=en_US.UTF-8, XDG_SESSION_ID=16, JOB_NAME=beam_PreCommit_Python2_PVR_Flink_Cron, SPARK_LOCAL_IP=127.0.0.1, BUILD_CAUSE=TIMERTRIGGER, GIT_PREVIOUS_SUCCESSFUL_COMMIT=4cfd4bf73ffb239ec8c5efaab5ca6c9671e56e31, NODE_LABELS=apache-beam-jenkins-7 beam, HUDSON_URL=https://ci-beam.apache.org/, WORKSPACE=<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/,> ROOT_BUILD_CAUSE_TIMERTRIGGER=true, _=/usr/bin/java, GIT_COMMIT=4cfd4bf73ffb239ec8c5efaab5ca6c9671e56e31, COVERALLS_REPO_TOKEN=****, EXECUTOR_NUMBER=0, HUDSON_SERVER_COOKIE=14925adb602d4986, SSH_CLIENT=3.94.149.17 47544 22, JOB_BASE_NAME=beam_PreCommit_Python2_PVR_Flink_Cron, USER=jenkins, SSH_CONNECTION=3.94.149.17 47544 10.128.0.175 22, BUILD_NUMBER=1100, BUILD_URL=https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/1100/, GIT_URL=https://github.com/apache/beam.git, XDG_RUNTIME_DIR=/run/user/1017, HOME=/home/jenkins}
06:05:00.361 [DEBUG] [org.gradle.launcher.daemon.server.exec.LogToClient] About to start relaying all logs to the client via the connection.
06:05:00.361 [INFO] [org.gradle.launcher.daemon.server.exec.LogToClient] The client will now receive all logging from the daemon (pid: 24457). The daemon log file: /home/jenkins/.gradle/daemon/5.2.1/daemon-24457.out.log
06:05:00.362 [INFO] [org.gradle.launcher.daemon.server.exec.LogAndCheckHealth] Starting 2nd build in daemon [uptime: 2 mins 31.348 secs, performance: 100%]
06:05:00.363 [DEBUG] [org.gradle.launcher.daemon.server.exec.ExecuteBuild] The daemon has started executing the build.
06:05:00.363 [DEBUG] [org.gradle.launcher.daemon.server.exec.ExecuteBuild] Executing build with daemon context: DefaultDaemonContext[uid=02e763de-3fcd-46ae-8b7e-811cbf1861ba,javaHome=/usr/lib/jvm/java-8-openjdk-amd64,daemonRegistryDir=/home/jenkins/.gradle/daemon,pid=24457,idleTimeout=10800000,priority=NORMAL,daemonOpts=-Xss10240k,-Dfile.encoding=UTF-8,-Duser.country=US,-Duser.language=en,-Duser.variant]
Configuration on demand is an incubating feature.
Found go 1.12 in /usr/bin/go, use it.
Found go 1.12 in /usr/bin/go, use it.
Found go 1.12 in /usr/bin/go, use it.
Found go 1.12 in /usr/bin/go, use it.
Found go 1.12 in /usr/bin/go, use it.
Found go 1.12 in /usr/bin/go, use it.
Daemon vm is shutting down... The daemon has exited normally or was terminated in response to a user interrupt.
----- End of the daemon log -----


FAILURE: Build failed with an exception.


* What went wrong:
> Task :sdks:java:container:generateLicenseReport
Gradle build daemon disappeared unexpectedly (it may have been killed or may have crashed)

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

> Task :sdks:java:harness:shadowJar
> Task :sdks:java:container:pullLicenses FAILED
> Task :runners:java-fn-execution:compileJava FROM-CACHE
> Task :runners:java-fn-execution:classes UP-TO-DATE
> Task :runners:java-fn-execution:jar
> Task :sdks:java:expansion-service:compileJava FROM-CACHE
> Task :sdks:java:expansion-service:classes UP-TO-DATE
> Task :sdks:java:expansion-service:jar
> Task :runners:java-job-service:compileJava FROM-CACHE
> Task :runners:java-job-service:classes UP-TO-DATE
> Task :sdks:java:io:kafka:compileJava FROM-CACHE
> Task :sdks:java:io:kafka:classes UP-TO-DATE
> Task :runners:java-job-service:jar
> Task :sdks:java:io:kafka:jar
> Task :runners:flink:1.10:compileJava FROM-CACHE
> Task :runners:flink:1.10:classes
> Task :sdks:java:io:google-cloud-platform:compileJava FROM-CACHE
> Task :sdks:java:io:google-cloud-platform:classes UP-TO-DATE
> Task :runners:flink:1.10:jar
> Task :runners:flink:1.10:job-server:compileJava NO-SOURCE
> Task :runners:flink:1.10:job-server:classes UP-TO-DATE
> Task :sdks:java:io:google-cloud-platform:jar
> Task :sdks:java:container:copyDockerfileDependencies
> Task :runners:flink:1.10:job-server:shadowJar

> Task :sdks:go:resolveBuildDependencies
Resolving google.golang.org/api: commit='386d4e5f4f92f86e6aec85985761bba4b938a2d5', urls=[https://code.googlesource.com/google-api-go-client]
Resolving google.golang.org/genproto: commit='2b5a72b8730b0b16380010cfe5286c42108d88e7', urls=[https://github.com/google/go-genproto]
Resolving google.golang.org/grpc: commit='7646b5360d049a7ca31e9133315db43456f39e2e', urls=[https://github.com/grpc/grpc-go]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]
Resolving cached github.com/coreos/etcd: commit='11214aa33bf5a47d3d9d8dafe0f6b97237dfe921', urls=[https://github.com/coreos/etcd.git, git@github.com:coreos/etcd.git]

> Task :sdks:go:installDependencies
> Task :sdks:go:buildLinuxAmd64
> Task :sdks:go:goBuild

> Task :sdks:java:container:resolveBuildDependencies
Resolving ./github.com/apache/beam/sdks/go@<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src/sdks/go>

> Task :sdks:java:container:installDependencies
> Task :sdks:java:container:buildLinuxAmd64
> Task :sdks:java:container:goBuild

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:container:pullLicenses'.
> Process 'command './sdks/java/container/license_scripts/license_script.sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.2.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 1m 53s
73 actionable tasks: 53 executed, 19 from cache, 1 up-to-date

Publishing build scan...
https://gradle.com/s/rhdiseylnfh2w

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Jenkins build is back to normal : beam_PreCommit_Python2_PVR_Flink_Cron #1104

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/1104/display/redirect?page=changes>



Build failed in Jenkins: beam_PreCommit_Python2_PVR_Flink_Cron #1103

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/1103/display/redirect?page=changes>

Changes:

[nielm] Remove usage of jdk-internal NotNull annotation

[srohde] Remove passthrough_pcollection_output_ids and

[srohde] yield none, pvalue for single output transforms

[Robert Burke] [BEAM-9615] Add Row coder functions.

[Robert Burke] !fixup go version restrictions on shift.

[noreply] !fixup address comments & go1.12 compiler error

[ajamato] [BEAM-10317] Python - Update BigQueryIO to tag BigQuery Jobs with the

[heejong] fix formatting for pull request post-commit status table

[ajamato] [BEAM-10317] Java - Update BigQueryIO to tag BigQuery Jobs with the

[ajamato] Add requests to python deps

[noreply] [BEAM-9785] Add Python 3.8 postcommit tests (#11788)

[noreply] Merge pull request #12161 from [BEAM-10378] Adding Azure IO FileSystem

[noreply] Use bz2 to compress pickler output for better compression. (#12057)


------------------------------------------
[...truncated 2.42 MB...]
Caused by: java.lang.RuntimeException: Failed to finish remote bundle
	at org.apache.beam.runners.flink.translation.wrappers.streaming.ExecutableStageDoFnOperator$SdkHarnessDoFnRunner.finishBundle(ExecutableStageDoFnOperator.java:769)
	at org.apache.beam.runners.flink.metrics.DoFnRunnerWithMetricsUpdate.finishBundle(DoFnRunnerWithMetricsUpdate.java:89)
	at org.apache.beam.runners.core.SimplePushbackSideInputDoFnRunner.finishBundle(SimplePushbackSideInputDoFnRunner.java:124)
	at org.apache.beam.runners.flink.translation.wrappers.streaming.DoFnOperator.invokeFinishBundle(DoFnOperator.java:836)
	at org.apache.beam.runners.flink.translation.wrappers.streaming.ExecutableStageDoFnOperator.close(ExecutableStageDoFnOperator.java:489)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.closeAllOperators(StreamTask.java:618)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$afterInvoke$1(StreamTask.java:498)
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:496)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:477)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:708)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:533)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error received from SDK harness for instruction 4: Traceback (most recent call last):
  File "apache_beam/runners/worker/sdk_worker.py", line 253, in _execute
    response = task()
  File "apache_beam/runners/worker/sdk_worker.py", line 310, in <lambda>
    lambda: self.create_worker().do_instruction(request), request)
  File "apache_beam/runners/worker/sdk_worker.py", line 480, in do_instruction
    getattr(request, request_type), request.instruction_id)
  File "apache_beam/runners/worker/sdk_worker.py", line 515, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "apache_beam/runners/worker/bundle_processor.py", line 978, in process_bundle
    element.data)
  File "apache_beam/runners/worker/bundle_processor.py", line 218, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 332, in output
    cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 195, in receive
    self.consumer.process(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 764, in process
    delayed_application = self.dofn_runner.process_with_sized_restriction(o)
  File "apache_beam/runners/common.py", line 975, in process_with_sized_restriction
    watermark_estimator=watermark_estimator)
  File "apache_beam/runners/common.py", line 712, in invoke_process
    windowed_value, additional_args, additional_kwargs)
  File "apache_beam/runners/common.py", line 819, in _invoke_process_per_window
    self.threadsafe_restriction_tracker.check_done()
  File "apache_beam/runners/sdf_utils.py", line 115, in check_done
    return self._restriction_tracker.check_done()
  File "apache_beam/io/restriction_trackers.py", line 101, in check_done
    self._range.stop))
ValueError: OffsetRestrictionTracker is not done since work in range [0, 6) has not been claimed.

	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
	at org.apache.beam.sdk.util.MoreFutures.get(MoreFutures.java:57)
	at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$BundleProcessor$ActiveBundle.close(SdkHarnessClient.java:458)
	at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory$1.close(DefaultJobBundleFactory.java:547)
	at org.apache.beam.runners.flink.translation.wrappers.streaming.ExecutableStageDoFnOperator$SdkHarnessDoFnRunner.finishBundle(ExecutableStageDoFnOperator.java:763)
	... 12 more
Caused by: java.lang.RuntimeException: Error received from SDK harness for instruction 4: Traceback (most recent call last):
  File "apache_beam/runners/worker/sdk_worker.py", line 253, in _execute
    response = task()
  File "apache_beam/runners/worker/sdk_worker.py", line 310, in <lambda>
    lambda: self.create_worker().do_instruction(request), request)
  File "apache_beam/runners/worker/sdk_worker.py", line 480, in do_instruction
    getattr(request, request_type), request.instruction_id)
  File "apache_beam/runners/worker/sdk_worker.py", line 515, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "apache_beam/runners/worker/bundle_processor.py", line 978, in process_bundle
    element.data)
  File "apache_beam/runners/worker/bundle_processor.py", line 218, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 332, in output
    cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 195, in receive
    self.consumer.process(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 764, in process
    delayed_application = self.dofn_runner.process_with_sized_restriction(o)
  File "apache_beam/runners/common.py", line 975, in process_with_sized_restriction
    watermark_estimator=watermark_estimator)
  File "apache_beam/runners/common.py", line 712, in invoke_process
    windowed_value, additional_args, additional_kwargs)
  File "apache_beam/runners/common.py", line 819, in _invoke_process_per_window
    self.threadsafe_restriction_tracker.check_done()
  File "apache_beam/runners/sdf_utils.py", line 115, in check_done
    return self._restriction_tracker.check_done()
  File "apache_beam/io/restriction_trackers.py", line 101, in check_done
    self._range.stop))
ValueError: OffsetRestrictionTracker is not done since work in range [0, 6) has not been claimed.

	at org.apache.beam.runners.fnexecution.control.FnApiControlClient$ResponseStreamObserver.onNext(FnApiControlClient.java:177)
	at org.apache.beam.runners.fnexecution.control.FnApiControlClient$ResponseStreamObserver.onNext(FnApiControlClient.java:157)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:251)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailableInternal(ServerCallImpl.java:309)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:292)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:782)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
ERROR:root:java.lang.RuntimeException: Error received from SDK harness for instruction 4: Traceback (most recent call last):
  File "apache_beam/runners/worker/sdk_worker.py", line 253, in _execute
    response = task()
  File "apache_beam/runners/worker/sdk_worker.py", line 310, in <lambda>
    lambda: self.create_worker().do_instruction(request), request)
  File "apache_beam/runners/worker/sdk_worker.py", line 480, in do_instruction
    getattr(request, request_type), request.instruction_id)
  File "apache_beam/runners/worker/sdk_worker.py", line 515, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "apache_beam/runners/worker/bundle_processor.py", line 978, in process_bundle
    element.data)
  File "apache_beam/runners/worker/bundle_processor.py", line 218, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 332, in output
    cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 195, in receive
    self.consumer.process(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 764, in process
    delayed_application = self.dofn_runner.process_with_sized_restriction(o)
  File "apache_beam/runners/common.py", line 975, in process_with_sized_restriction
    watermark_estimator=watermark_estimator)
  File "apache_beam/runners/common.py", line 712, in invoke_process
    windowed_value, additional_args, additional_kwargs)
  File "apache_beam/runners/common.py", line 819, in _invoke_process_per_window
    self.threadsafe_restriction_tracker.check_done()
  File "apache_beam/runners/sdf_utils.py", line 115, in check_done
    return self._restriction_tracker.check_done()
  File "apache_beam/io/restriction_trackers.py", line 101, in check_done
    self._range.stop))
ValueError: OffsetRestrictionTracker is not done since work in range [0, 6) has not been claimed.

INFO:apache_beam.runners.portability.portable_runner:Job state changed to FAILED
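
The ValueError above is the splittable DoFn restriction contract check: check_done() raises because no position in the [0, 6) restriction was claimed before the bundle finished. The sketch below is illustrative only (it is not code from this build, and the FixedRangeProvider/ClaimAllOffsets names are hypothetical); it shows the shape of a DoFn that satisfies the contract by claiming each offset with try_claim before emitting output.

    import apache_beam as beam
    from apache_beam.io.restriction_trackers import OffsetRange, OffsetRestrictionTracker
    from apache_beam.transforms.core import RestrictionProvider

    class FixedRangeProvider(RestrictionProvider):
        # Hypothetical provider mirroring the [0, 6) range from the error above.
        def initial_restriction(self, element):
            return OffsetRange(0, 6)

        def create_tracker(self, restriction):
            return OffsetRestrictionTracker(restriction)

        def restriction_size(self, element, restriction):
            return restriction.size()

    class ClaimAllOffsets(beam.DoFn):
        def process(self, element,
                    tracker=beam.DoFn.RestrictionParam(FixedRangeProvider())):
            restriction = tracker.current_restriction()
            for offset in range(restriction.start, restriction.stop):
                # Each position must be claimed (or the remainder deferred);
                # returning without claiming the full range is what triggers
                # the check_done() ValueError seen in the log.
                if not tracker.try_claim(offset):
                    return
                yield (element, offset)
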
.sssINFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:44795
WARNING:root:Make sure that locally built Python SDK docker image has Python 2.7 interpreter.
INFO:root:Using Python SDK docker image: apache/beam_python2.7_sdk:2.24.0.dev. If the image is not available at local, we will try to pull from hub.docker.com
INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
  with Pipeline() as p:
    p.apply(..)
This ensures that the pipeline finishes before this program exits.
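
The INFO message above is the runner's hint to use the pipeline as a context manager so the program blocks until the job finishes and the LOOPBACK worker it started is not torn down mid-run. A minimal sketch of that usage follows; the runner, job_endpoint and environment_type values are assumptions for illustration, not options taken from this build.

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    # Assumed options: a local Flink job server on port 8099 with the
    # LOOPBACK environment, as in the portable-runner tests logged here.
    options = PipelineOptions([
        '--runner=PortableRunner',
        '--job_endpoint=localhost:8099',
        '--environment_type=LOOPBACK',
    ])

    # Leaving the with-block waits for the pipeline to reach a terminal state.
    with beam.Pipeline(options=options) as p:
        _ = (p
             | beam.Create(['a', 'b', 'c'])
             | beam.Map(lambda x: x.upper()))
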
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STOPPED
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STARTING
INFO:apache_beam.runners.portability.portable_runner:Job state changed to RUNNING
[flink-runner-job-invoker] WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - Log file environment variable 'log.file' is not set.
[flink-runner-job-invoker] WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'Key: 'web.log.path' , default: null (fallback keys: [{key=jobmanager.web.log.path, isDeprecated=true}])'.
[[5]{Create, Map(<lambda at fn_runner_test.py:489>), WindowInto(WindowIntoFn), Map(<lambda at fn_runner_test.py:492>)} (2/2)] WARN org.apache.flink.metrics.MetricGroup - The operator name [5]{Create, Map(<lambda at fn_runner_test.py:489>), WindowInto(WindowIntoFn), Map(<lambda at fn_runner_test.py:492>)} exceeded the 80 characters length limit and was truncated.
INFO:apache_beam.runners.worker.statecache:Creating state cache with size 0
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure control channel for localhost:45859.
INFO:apache_beam.runners.worker.sdk_worker:Control channel established.
INFO:apache_beam.runners.worker.sdk_worker:Initializing SDKHarness with unbounded number of workers.
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure state channel for localhost:34941.
INFO:apache_beam.runners.worker.sdk_worker:State channel established.
[[5]{Create, Map(<lambda at fn_runner_test.py:489>), WindowInto(WindowIntoFn), Map(<lambda at fn_runner_test.py:492>)} (1/2)] WARN org.apache.flink.metrics.MetricGroup - The operator name [5]{Create, Map(<lambda at fn_runner_test.py:489>), WindowInto(WindowIntoFn), Map(<lambda at fn_runner_test.py:492>)} exceeded the 80 characters length limit and was truncated.
INFO:apache_beam.runners.worker.data_plane:Creating client data channel for localhost:44853
[assert_that/Group/GroupByKey -> [3]assert_that/{Group, Unkey, Match} (1/2)] WARN org.apache.beam.sdk.fn.data.BeamFnDataGrpcMultiplexer - Hanged up for unknown endpoint.
INFO:apache_beam.runners.worker.sdk_worker:No more requests from control plane
INFO:apache_beam.runners.worker.sdk_worker:SDK Harness waiting for in-flight requests to complete
INFO:apache_beam.runners.worker.data_plane:Closing all cached grpc data channels.
INFO:apache_beam.runners.worker.sdk_worker:Closing all cached gRPC state handlers.
INFO:apache_beam.runners.worker.sdk_worker:Done consuming work.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to DONE
.INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:45027
WARNING:root:Make sure that locally built Python SDK docker image has Python 2.7 interpreter.
INFO:root:Using Python SDK docker image: apache/beam_python2.7_sdk:2.24.0.dev. If the image is not available at local, we will try to pull from hub.docker.com
INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
  with Pipeline() as p:
    p.apply(..)
This ensures that the pipeline finishes before this program exits.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STOPPED
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STARTING
INFO:apache_beam.runners.portability.portable_runner:Job state changed to RUNNING
[flink-runner-job-invoker] WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - Log file environment variable 'log.file' is not set.
[flink-runner-job-invoker] WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'Key: 'web.log.path' , default: null (fallback keys: [{key=jobmanager.web.log.path, isDeprecated=true}])'.
INFO:apache_beam.runners.worker.statecache:Creating state cache with size 0
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure control channel for localhost:41431.
INFO:apache_beam.runners.worker.sdk_worker:Control channel established.
INFO:apache_beam.runners.worker.sdk_worker:Initializing SDKHarness with unbounded number of workers.
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure state channel for localhost:44011.
INFO:apache_beam.runners.worker.sdk_worker:State channel established.
INFO:apache_beam.runners.worker.data_plane:Creating client data channel for localhost:40955
INFO:apache_beam.runners.worker.sdk_worker:No more requests from control plane
INFO:apache_beam.runners.worker.sdk_worker:SDK Harness waiting for in-flight requests to complete
INFO:apache_beam.runners.worker.data_plane:Closing all cached grpc data channels.
INFO:apache_beam.runners.worker.sdk_worker:Closing all cached gRPC state handlers.
[assert_that/Group/GroupByKey -> [3]assert_that/{Group, Unkey, Match} (1/2)] WARN org.apache.beam.sdk.fn.data.BeamFnDataGrpcMultiplexer - Hanged up for unknown endpoint.
[grpc-default-executor-0] WARN org.apache.beam.sdk.fn.data.BeamFnDataGrpcMultiplexer - Hanged up for unknown endpoint.
Jul 07, 2020 12:38:10 AM org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.SerializingExecutor run
SEVERE: Exception while executing runnable org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed@798e0b90
java.lang.IllegalStateException: call already closed
	at org.apache.beam.vendor.grpc.v1p26p0.com.google.common.base.Preconditions.checkState(Preconditions.java:511)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl.closeInternal(ServerCallImpl.java:209)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl.close(ServerCallImpl.java:202)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onCompleted(ServerCalls.java:371)
	at org.apache.beam.runners.fnexecution.state.GrpcStateService$Inbound.onCompleted(GrpcStateService.java:150)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onHalfClose(ServerCalls.java:262)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.Contexts$ContextualizedServerCallListener.onHalfClose(Contexts.java:86)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:331)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:817)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

INFO:apache_beam.runners.worker.sdk_worker:Done consuming work.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to DONE
.INFO:__main__:removing conf dir: /tmp/flinktest-confxa0vkn

----------------------------------------------------------------------
Ran 82 tests in 503.384s

OK (skipped=15)
Exception in thread Thread-76 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
  File "apache_beam/runners/worker/data_plane.py", line 184, in run
  File "apache_beam/runners/worker/sdk_worker.py", line 437, in shutdown_inactive_bundle_processors
<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'time'
Exception in thread Thread-1232 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
  File "apache_beam/runners/worker/data_plane.py", line 184, in run
  File "apache_beam/runners/worker/sdk_worker.py", line 437, in shutdown_inactive_bundle_processors
<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'time'

FAILURE: Build failed with an exception.

* Where:
Script '<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src/sdks/python/test-suites/portable/common.gradle>' line: 57

* What went wrong:
Execution failed for task ':sdks:python:test-suites:portable:py2:flinkCompatibilityMatrixBatchLOOPBACK'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.2.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 31m 9s
78 actionable tasks: 58 executed, 19 from cache, 1 up-to-date

Publishing build scan...
https://gradle.com/s/sffkjwv4rgnpe

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_PreCommit_Python2_PVR_Flink_Cron #1102

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/1102/display/redirect>

Changes:


------------------------------------------
[...truncated 2.53 MB...]
	... 4 more
Caused by: java.lang.RuntimeException: Failed to finish remote bundle
	at org.apache.beam.runners.flink.translation.wrappers.streaming.ExecutableStageDoFnOperator$SdkHarnessDoFnRunner.finishBundle(ExecutableStageDoFnOperator.java:769)
	at org.apache.beam.runners.flink.metrics.DoFnRunnerWithMetricsUpdate.finishBundle(DoFnRunnerWithMetricsUpdate.java:89)
	at org.apache.beam.runners.core.SimplePushbackSideInputDoFnRunner.finishBundle(SimplePushbackSideInputDoFnRunner.java:124)
	at org.apache.beam.runners.flink.translation.wrappers.streaming.DoFnOperator.invokeFinishBundle(DoFnOperator.java:836)
	at org.apache.beam.runners.flink.translation.wrappers.streaming.ExecutableStageDoFnOperator.close(ExecutableStageDoFnOperator.java:489)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.closeAllOperators(StreamTask.java:618)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$afterInvoke$1(StreamTask.java:498)
	at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.afterInvoke(StreamTask.java:496)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:477)
	at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:708)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:533)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error received from SDK harness for instruction 4: Traceback (most recent call last):
  File "apache_beam/runners/worker/sdk_worker.py", line 253, in _execute
    response = task()
  File "apache_beam/runners/worker/sdk_worker.py", line 310, in <lambda>
    lambda: self.create_worker().do_instruction(request), request)
  File "apache_beam/runners/worker/sdk_worker.py", line 480, in do_instruction
    getattr(request, request_type), request.instruction_id)
  File "apache_beam/runners/worker/sdk_worker.py", line 515, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "apache_beam/runners/worker/bundle_processor.py", line 978, in process_bundle
    element.data)
  File "apache_beam/runners/worker/bundle_processor.py", line 218, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 332, in output
    cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 195, in receive
    self.consumer.process(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 764, in process
    delayed_application = self.dofn_runner.process_with_sized_restriction(o)
  File "apache_beam/runners/common.py", line 975, in process_with_sized_restriction
    watermark_estimator=watermark_estimator)
  File "apache_beam/runners/common.py", line 712, in invoke_process
    windowed_value, additional_args, additional_kwargs)
  File "apache_beam/runners/common.py", line 819, in _invoke_process_per_window
    self.threadsafe_restriction_tracker.check_done()
  File "apache_beam/runners/sdf_utils.py", line 115, in check_done
    return self._restriction_tracker.check_done()
  File "apache_beam/io/restriction_trackers.py", line 101, in check_done
    self._range.stop))
ValueError: OffsetRestrictionTracker is not done since work in range [0, 6) has not been claimed.

	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
	at org.apache.beam.sdk.util.MoreFutures.get(MoreFutures.java:57)
	at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$BundleProcessor$ActiveBundle.close(SdkHarnessClient.java:458)
	at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory$1.close(DefaultJobBundleFactory.java:547)
	at org.apache.beam.runners.flink.translation.wrappers.streaming.ExecutableStageDoFnOperator$SdkHarnessDoFnRunner.finishBundle(ExecutableStageDoFnOperator.java:763)
	... 12 more
Caused by: java.lang.RuntimeException: Error received from SDK harness for instruction 4: Traceback (most recent call last):
  File "apache_beam/runners/worker/sdk_worker.py", line 253, in _execute
    response = task()
  File "apache_beam/runners/worker/sdk_worker.py", line 310, in <lambda>
    lambda: self.create_worker().do_instruction(request), request)
  File "apache_beam/runners/worker/sdk_worker.py", line 480, in do_instruction
    getattr(request, request_type), request.instruction_id)
  File "apache_beam/runners/worker/sdk_worker.py", line 515, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "apache_beam/runners/worker/bundle_processor.py", line 978, in process_bundle
    element.data)
  File "apache_beam/runners/worker/bundle_processor.py", line 218, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 332, in output
    cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 195, in receive
    self.consumer.process(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 764, in process
    delayed_application = self.dofn_runner.process_with_sized_restriction(o)
  File "apache_beam/runners/common.py", line 975, in process_with_sized_restriction
    watermark_estimator=watermark_estimator)
  File "apache_beam/runners/common.py", line 712, in invoke_process
    windowed_value, additional_args, additional_kwargs)
  File "apache_beam/runners/common.py", line 819, in _invoke_process_per_window
    self.threadsafe_restriction_tracker.check_done()
  File "apache_beam/runners/sdf_utils.py", line 115, in check_done
    return self._restriction_tracker.check_done()
  File "apache_beam/io/restriction_trackers.py", line 101, in check_done
    self._range.stop))
ValueError: OffsetRestrictionTracker is not done since work in range [0, 6) has not been claimed.

	at org.apache.beam.runners.fnexecution.control.FnApiControlClient$ResponseStreamObserver.onNext(FnApiControlClient.java:177)
	at org.apache.beam.runners.fnexecution.control.FnApiControlClient$ResponseStreamObserver.onNext(FnApiControlClient.java:157)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:251)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailableInternal(ServerCallImpl.java:309)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:292)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:782)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
ERROR:root:java.lang.RuntimeException: Error received from SDK harness for instruction 4: Traceback (most recent call last):
  File "apache_beam/runners/worker/sdk_worker.py", line 253, in _execute
    response = task()
  File "apache_beam/runners/worker/sdk_worker.py", line 310, in <lambda>
    lambda: self.create_worker().do_instruction(request), request)
  File "apache_beam/runners/worker/sdk_worker.py", line 480, in do_instruction
    getattr(request, request_type), request.instruction_id)
  File "apache_beam/runners/worker/sdk_worker.py", line 515, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "apache_beam/runners/worker/bundle_processor.py", line 978, in process_bundle
    element.data)
  File "apache_beam/runners/worker/bundle_processor.py", line 218, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 332, in output
    cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 195, in receive
    self.consumer.process(windowed_value)
  File "apache_beam/runners/worker/operations.py", line 764, in process
    delayed_application = self.dofn_runner.process_with_sized_restriction(o)
  File "apache_beam/runners/common.py", line 975, in process_with_sized_restriction
    watermark_estimator=watermark_estimator)
  File "apache_beam/runners/common.py", line 712, in invoke_process
    windowed_value, additional_args, additional_kwargs)
  File "apache_beam/runners/common.py", line 819, in _invoke_process_per_window
    self.threadsafe_restriction_tracker.check_done()
  File "apache_beam/runners/sdf_utils.py", line 115, in check_done
    return self._restriction_tracker.check_done()
  File "apache_beam/io/restriction_trackers.py", line 101, in check_done
    self._range.stop))
ValueError: OffsetRestrictionTracker is not done since work in range [0, 6) has not been claimed.

INFO:apache_beam.runners.portability.portable_runner:Job state changed to FAILED
.sssINFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:42159
WARNING:root:Make sure that locally built Python SDK docker image has Python 2.7 interpreter.
INFO:root:Using Python SDK docker image: apache/beam_python2.7_sdk:2.24.0.dev. If the image is not available at local, we will try to pull from hub.docker.com
INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
  with Pipeline() as p:
    p.apply(..)
This ensures that the pipeline finishes before this program exits.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STOPPED
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STARTING
INFO:apache_beam.runners.portability.portable_runner:Job state changed to RUNNING
[flink-runner-job-invoker] WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - Log file environment variable 'log.file' is not set.
[flink-runner-job-invoker] WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'Key: 'web.log.path' , default: null (fallback keys: [{key=jobmanager.web.log.path, isDeprecated=true}])'.
[[5]{Create, Map(<lambda at fn_runner_test.py:489>), WindowInto(WindowIntoFn), Map(<lambda at fn_runner_test.py:492>)} (1/2)] WARN org.apache.flink.metrics.MetricGroup - The operator name [5]{Create, Map(<lambda at fn_runner_test.py:489>), WindowInto(WindowIntoFn), Map(<lambda at fn_runner_test.py:492>)} exceeded the 80 characters length limit and was truncated.
[[5]{Create, Map(<lambda at fn_runner_test.py:489>), WindowInto(WindowIntoFn), Map(<lambda at fn_runner_test.py:492>)} (2/2)] WARN org.apache.flink.metrics.MetricGroup - The operator name [5]{Create, Map(<lambda at fn_runner_test.py:489>), WindowInto(WindowIntoFn), Map(<lambda at fn_runner_test.py:492>)} exceeded the 80 characters length limit and was truncated.
INFO:apache_beam.runners.worker.statecache:Creating state cache with size 0
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure control channel for localhost:37011.
INFO:apache_beam.runners.worker.sdk_worker:Control channel established.
INFO:apache_beam.runners.worker.sdk_worker:Initializing SDKHarness with unbounded number of workers.
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure state channel for localhost:33045.
INFO:apache_beam.runners.worker.sdk_worker:State channel established.
INFO:apache_beam.runners.worker.data_plane:Creating client data channel for localhost:46853
[assert_that/Group/GroupByKey -> [3]assert_that/{Group, Unkey, Match} (1/2)] WARN org.apache.beam.sdk.fn.data.BeamFnDataGrpcMultiplexer - Hanged up for unknown endpoint.
ERROR:apache_beam.runners.worker.data_plane:Failed to read inputs in the data plane.
Traceback (most recent call last):
  File "apache_beam/runners/worker/data_plane.py", line 528, in _read_inputs
    for elements in elements_iterator:
  File "<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py",> line 413, in next
    return self._next()
  File "<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py",> line 706, in _next
    raise self
_MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.CANCELLED
	details = "Multiplexer hanging up"
	debug_error_string = "{"created":"@1594060260.483832317","description":"Error received from peer ipv4:127.0.0.1:46853","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Multiplexer hanging up","grpc_status":1}"
>
Exception in thread read_grpc_client_inputs:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "apache_beam/runners/worker/data_plane.py", line 545, in <lambda>
    target=lambda: self._read_inputs(elements_iterator),
  File "apache_beam/runners/worker/data_plane.py", line 528, in _read_inputs
    for elements in elements_iterator:
  File "<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py",> line 413, in next
    return self._next()
  File "<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src/build/gradleenv/1866363813/local/lib/python2.7/site-packages/grpc/_channel.py",> line 706, in _next
    raise self
_MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.CANCELLED
	details = "Multiplexer hanging up"
	debug_error_string = "{"created":"@1594060260.483832317","description":"Error received from peer ipv4:127.0.0.1:46853","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Multiplexer hanging up","grpc_status":1}"
>

INFO:apache_beam.runners.worker.sdk_worker:No more requests from control plane
INFO:apache_beam.runners.worker.sdk_worker:SDK Harness waiting for in-flight requests to complete
INFO:apache_beam.runners.worker.data_plane:Closing all cached grpc data channels.
INFO:apache_beam.runners.worker.sdk_worker:Closing all cached gRPC state handlers.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to DONE
INFO:apache_beam.runners.worker.sdk_worker:Done consuming work.
.INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:34159
WARNING:root:Make sure that locally built Python SDK docker image has Python 2.7 interpreter.
INFO:root:Using Python SDK docker image: apache/beam_python2.7_sdk:2.24.0.dev. If the image is not available at local, we will try to pull from hub.docker.com
INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
  with Pipeline() as p:
    p.apply(..)
This ensures that the pipeline finishes before this program exits.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STOPPED
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STARTING
INFO:apache_beam.runners.portability.portable_runner:Job state changed to RUNNING
[flink-runner-job-invoker] WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - Log file environment variable 'log.file' is not set.
[flink-runner-job-invoker] WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'Key: 'web.log.path' , default: null (fallback keys: [{key=jobmanager.web.log.path, isDeprecated=true}])'.
INFO:apache_beam.runners.worker.statecache:Creating state cache with size 0
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure control channel for localhost:37351.
INFO:apache_beam.runners.worker.sdk_worker:Control channel established.
INFO:apache_beam.runners.worker.sdk_worker:Initializing SDKHarness with unbounded number of workers.
INFO:apache_beam.runners.worker.sdk_worker:Creating insecure state channel for localhost:41939.
INFO:apache_beam.runners.worker.sdk_worker:State channel established.
INFO:apache_beam.runners.worker.data_plane:Creating client data channel for localhost:46077
[assert_that/Group/GroupByKey -> [3]assert_that/{Group, Unkey, Match} (1/2)] WARN org.apache.beam.sdk.fn.data.BeamFnDataGrpcMultiplexer - Hanged up for unknown endpoint.
INFO:apache_beam.runners.worker.sdk_worker:No more requests from control plane
INFO:apache_beam.runners.worker.sdk_worker:SDK Harness waiting for in-flight requests to complete
INFO:apache_beam.runners.worker.data_plane:Closing all cached grpc data channels.
INFO:apache_beam.runners.worker.sdk_worker:Closing all cached gRPC state handlers.
INFO:apache_beam.runners.worker.sdk_worker:Done consuming work.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to DONE
.INFO:__main__:removing conf dir: /tmp/flinktest-confowGKSp

----------------------------------------------------------------------
Ran 82 tests in 154.694s

OK (skipped=15)

FAILURE: Build failed with an exception.

* Where:
Script '<https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/ws/src/sdks/python/test-suites/portable/common.gradle>' line: 57

* What went wrong:
Execution failed for task ':sdks:python:test-suites:portable:py2:flinkCompatibilityMatrixStreamingLOOPBACK'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.2.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 23m 51s
78 actionable tasks: 58 executed, 19 from cache, 1 up-to-date

Publishing build scan...
https://gradle.com/s/yizzgtyxuvaxq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


beam_PreCommit_Python2_PVR_Flink_Cron - Build # 1101 - Aborted!

Posted by Apache Jenkins Server <je...@builds.apache.org>.
beam_PreCommit_Python2_PVR_Flink_Cron - Build # 1101 - Aborted:

Check console output at https://ci-beam.apache.org/job/beam_PreCommit_Python2_PVR_Flink_Cron/1101/ to view the results.