Posted to builds@beam.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2023/08/31 18:58:04 UTC

Build failed in Jenkins: beam_Inference_Python_Benchmarks_Dataflow #357

See <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/357/display/redirect>

Changes:


------------------------------------------
[...truncated 176.02 KB...]
Collecting nvidia-cublas-cu11==11.10.3.66 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cublas_cu11-11.10.3.66-py3-none-manylinux1_x86_64.whl (317.1 MB)
Collecting nvidia-cufft-cu11==10.9.0.58 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cufft_cu11-10.9.0.58-py3-none-manylinux1_x86_64.whl (168.4 MB)
Collecting nvidia-curand-cu11==10.2.10.91 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_curand_cu11-10.2.10.91-py3-none-manylinux1_x86_64.whl (54.6 MB)
Collecting nvidia-cusolver-cu11==11.4.0.1 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cusolver_cu11-11.4.0.1-2-py3-none-manylinux1_x86_64.whl (102.6 MB)
Collecting nvidia-cusparse-cu11==11.7.4.91 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_cusparse_cu11-11.7.4.91-py3-none-manylinux1_x86_64.whl (173.2 MB)
Collecting nvidia-nccl-cu11==2.14.3 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_nccl_cu11-2.14.3-py3-none-manylinux1_x86_64.whl (177.1 MB)
Collecting nvidia-nvtx-cu11==11.7.91 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached nvidia_nvtx_cu11-11.7.91-py3-none-manylinux1_x86_64.whl (98 kB)
Collecting triton==2.0.0 (from torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached triton-2.0.0-1-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (63.2 MB)
Requirement already satisfied: setuptools in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from nvidia-cublas-cu11==11.10.3.66->torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18)) (68.1.2)
Requirement already satisfied: wheel in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from nvidia-cublas-cu11==11.10.3.66->torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18)) (0.41.2)
Collecting cmake (from triton==2.0.0->torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Obtaining dependency information for cmake from https://files.pythonhosted.org/packages/2e/51/3a4672a819b4532a378bfefad8f886cfe71057556e0d4eefb64523fd370a/cmake-3.27.2-py2.py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata
  Using cached cmake-3.27.2-py2.py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (6.7 kB)
Collecting lit (from triton==2.0.0->torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached lit-16.0.6-py3-none-any.whl
Requirement already satisfied: numpy in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from torchvision>=0.8.2->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (1.24.4)
Requirement already satisfied: requests in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from torchvision>=0.8.2->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (2.31.0)
Collecting huggingface-hub<1.0,>=0.15.1 (from transformers>=4.18.0->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Obtaining dependency information for huggingface-hub<1.0,>=0.15.1 from https://files.pythonhosted.org/packages/7f/c4/adcbe9a696c135578cabcbdd7331332daad4d49b7c43688bc2d36b3a47d2/huggingface_hub-0.16.4-py3-none-any.whl.metadata
  Using cached huggingface_hub-0.16.4-py3-none-any.whl.metadata (12 kB)
Requirement already satisfied: packaging>=20.0 in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from transformers>=4.18.0->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (23.1)
Requirement already satisfied: pyyaml>=5.1 in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from transformers>=4.18.0->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (6.0.1)
Requirement already satisfied: regex!=2019.12.17 in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from transformers>=4.18.0->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 21)) (2023.8.8)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers>=4.18.0->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Using cached tokenizers-0.13.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)
Collecting safetensors>=0.3.1 (from transformers>=4.18.0->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Obtaining dependency information for safetensors>=0.3.1 from https://files.pythonhosted.org/packages/21/12/d95158b4fdd0422faf019038be0be874d7bf3d9f9bd0b1b529f73853cec2/safetensors-0.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached safetensors-0.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.7 kB)
Collecting tqdm>=4.27 (from transformers>=4.18.0->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Obtaining dependency information for tqdm>=4.27 from https://files.pythonhosted.org/packages/00/e5/f12a80907d0884e6dff9c16d0c0114d81b8cd07dc3ae54c5e962cc83037e/tqdm-4.66.1-py3-none-any.whl.metadata
  Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting fsspec (from huggingface-hub<1.0,>=0.15.1->transformers>=4.18.0->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 21))
  Obtaining dependency information for fsspec from https://files.pythonhosted.org/packages/e3/bd/4c0a4619494188a9db5d77e2100ab7d544a42e76b2447869d8e124e981d8/fsspec-2023.6.0-py3-none-any.whl.metadata
  Using cached fsspec-2023.6.0-py3-none-any.whl.metadata (6.7 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Obtaining dependency information for MarkupSafe>=2.0 from https://files.pythonhosted.org/packages/de/e2/32c14301bb023986dff527a49325b6259cab4ebb4633f69de54af312fc45/MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Requirement already satisfied: charset-normalizer<4,>=2 in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from requests->torchvision>=0.8.2->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (3.2.0)
Requirement already satisfied: idna<4,>=2.5 in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from requests->torchvision>=0.8.2->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from requests->torchvision>=0.8.2->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/lib/python3.8/site-packages> (from requests->torchvision>=0.8.2->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 19)) (2023.7.22)
Collecting mpmath>=0.19 (from sympy->torch>=1.7.1->-r apache_beam/ml/inference/torch_tests_requirements.txt (line 18))
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached Pillow-10.0.0-cp38-cp38-manylinux_2_28_x86_64.whl (3.4 MB)
Using cached transformers-4.32.1-py3-none-any.whl (7.5 MB)
Using cached huggingface_hub-0.16.4-py3-none-any.whl (268 kB)
Using cached safetensors-0.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Using cached MarkupSafe-2.1.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Using cached cmake-3.27.2-py2.py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (26.1 MB)
Using cached fsspec-2023.6.0-py3-none-any.whl (163 kB)
Installing collected packages: tokenizers, safetensors, mpmath, lit, cmake, tqdm, sympy, pillow, nvidia-nvtx-cu11, nvidia-nccl-cu11, nvidia-cusparse-cu11, nvidia-curand-cu11, nvidia-cufft-cu11, nvidia-cuda-runtime-cu11, nvidia-cuda-nvrtc-cu11, nvidia-cuda-cupti-cu11, nvidia-cublas-cu11, networkx, MarkupSafe, fsspec, nvidia-cusolver-cu11, nvidia-cudnn-cu11, jinja2, huggingface-hub, transformers, triton, torch, torchvision
Successfully installed MarkupSafe-2.1.3 cmake-3.27.2 fsspec-2023.6.0 huggingface-hub-0.16.4 jinja2-3.1.2 lit-16.0.6 mpmath-1.3.0 networkx-3.1 nvidia-cublas-cu11-11.10.3.66 nvidia-cuda-cupti-cu11-11.7.101 nvidia-cuda-nvrtc-cu11-11.7.99 nvidia-cuda-runtime-cu11-11.7.99 nvidia-cudnn-cu11-8.5.0.96 nvidia-cufft-cu11-10.9.0.58 nvidia-curand-cu11-10.2.10.91 nvidia-cusolver-cu11-11.4.0.1 nvidia-cusparse-cu11-11.7.4.91 nvidia-nccl-cu11-2.14.3 nvidia-nvtx-cu11-11.7.91 pillow-10.0.0 safetensors-0.3.3 sympy-1.12 tokenizers-0.13.3 torch-2.0.1 torchvision-0.15.2 tqdm-4.66.1 transformers-4.32.1 triton-2.0.0
INFO:root:Device is set to CPU
INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds.
INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds.
INFO:apache_beam.runners.portability.stager:Executing command: ['<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/build/gradleenv/1329484227/bin/python>', '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', '/tmp/tmp2je3w2mk/tmp_requirements.txt', '--exists-action', 'i', '--no-deps', '--implementation', 'cp', '--abi', 'cp38', '--platform', 'manylinux2014_x86_64']
INFO:apache_beam.runners.portability.stager:Copying Beam SDK "<https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/ws/src/sdks/python/build/apache-beam.tar.gz>" to staging location.
INFO:apache_beam.runners.dataflow.dataflow_runner:Pipeline has additional dependencies to be installed in SDK worker container, consider using the SDK container image pre-building workflow to avoid repetitive installations. Learn more on https://cloud.google.com/dataflow/docs/guides/using-custom-containers#prebuild
INFO:root:Using provided Python SDK container image: gcr.io/cloud-dataflow/v1beta3/beam_python3.8_sdk:beam-master-20230717
INFO:root:Python SDK container image set to "gcr.io/cloud-dataflow/v1beta3/beam_python3.8_sdk:beam-master-20230717" for Docker environment
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function pack_combiners at 0x7f7bb172cf70> ====================
INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function sort_stages at 0x7f7bb172b790> ====================
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/requirements.txt...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/requirements.txt in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/mock-2.0.0-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/mock-2.0.0-py2.py3-none-any.whl in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/seaborn-0.12.2-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/seaborn-0.12.2-py3-none-any.whl in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/PyHamcrest-1.10.1-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/PyHamcrest-1.10.1-py3-none-any.whl in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/transformers-4.32.1-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/transformers-4.32.1-py3-none-any.whl in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/inflection-0.5.1-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/inflection-0.5.1-py2.py3-none-any.whl in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/beautifulsoup4-4.12.2-py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/beautifulsoup4-4.12.2-py3-none-any.whl in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/parameterized-0.7.5-py2.py3-none-any.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/parameterized-0.7.5-py2.py3-none-any.whl in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/torch-2.0.1-cp38-cp38-manylinux1_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/torch-2.0.1-cp38-cp38-manylinux1_x86_64.whl in 44 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/torchvision-0.15.2-cp38-cp38-manylinux1_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/torchvision-0.15.2-cp38-cp38-manylinux1_x86_64.whl in 1 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/Pillow-10.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/Pillow-10.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/matplotlib-3.7.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/matplotlib-3.7.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/scikit_learn-1.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl in 1 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/dataflow_python_sdk.tar...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/dataflow_python_sdk.tar in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/pipeline.pb...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://temp-storage-for-perf-tests/loadtests/benchmark-tests-pytorch-imagenet-python0831155053.1693506843.820663/pipeline.pb in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Create job: <Job
 clientRequestId: '20230831183403821850-4872'
 createTime: '2023-08-31T18:34:55.089195Z'
 currentStateTime: '1970-01-01T00:00:00Z'
 id: '2023-08-31_11_34_54-251862262020413173'
 location: 'us-central1'
 name: 'benchmark-tests-pytorch-imagenet-python0831155053'
 projectId: 'apache-beam-testing'
 stageStates: []
 startTime: '2023-08-31T18:34:55.089195Z'
 steps: []
 tempFiles: []
 type: TypeValueValuesEnum(JOB_TYPE_BATCH, 1)>
INFO:apache_beam.runners.dataflow.internal.apiclient:Created job with id: [2023-08-31_11_34_54-251862262020413173]
INFO:apache_beam.runners.dataflow.internal.apiclient:Submitted job: 2023-08-31_11_34_54-251862262020413173
INFO:apache_beam.runners.dataflow.internal.apiclient:To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2023-08-31_11_34_54-251862262020413173?project=apache-beam-testing
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2023-08-31_11_34_54-251862262020413173 is in state JOB_STATE_PENDING
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:34:55.533Z: JOB_MESSAGE_BASIC: The pipeline is using shuffle service with a (boot) persistent disk size / type other than the default. If that configuration was intended solely to speed up the non-service shuffle, consider removing it to reduce costs as those disks are unused by the shuffle service.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:34:58.399Z: JOB_MESSAGE_BASIC: Worker configuration: n1-standard-2 in us-central1-b.
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:35:00.814Z: JOB_MESSAGE_BASIC: Executing operation ReadImageNames/Read/Impulse+ReadImageNames/Read/Map(<lambda at iobase.py:911>)+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/SplitWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:35:00.835Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/DoOnce/Impulse+WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda at core.py:3736>)+WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode)+WriteOutputToGCS/Write/WriteImpl/InitializeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:35:00.908Z: JOB_MESSAGE_BASIC: Starting 75 workers in us-central1-b...
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2023-08-31_11_34_54-251862262020413173 is in state JOB_STATE_RUNNING
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:35:12.866Z: JOB_MESSAGE_BASIC: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:15.998Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/DoOnce/Impulse+WriteOutputToGCS/Write/WriteImpl/DoOnce/FlatMap(<lambda at core.py:3736>)+WriteOutputToGCS/Write/WriteImpl/DoOnce/Map(decode)+WriteOutputToGCS/Write/WriteImpl/InitializeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:16.120Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/WriteBundles/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:16.137Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:16.156Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/PreFinalize/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:16.169Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/WriteBundles/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:16.187Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:16.201Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/PreFinalize/View-python_side_input0
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:16.407Z: JOB_MESSAGE_BASIC: Finished operation ReadImageNames/Read/Impulse+ReadImageNames/Read/Map(<lambda at iobase.py:911>)+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/PairWithRestriction+ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/SplitWithSizing
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:16.509Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/GroupByKey/Create
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:17.542Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/GroupByKey/Create
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:42:17.720Z: JOB_MESSAGE_BASIC: Executing operation ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/ProcessElementAndRestrictionWithSizing+FilterEmptyLines+PyTorchRunInference/BeamML_RunInference_Preprocess-0+PyTorchRunInference/BatchElements/ParDo(_GlobalWindowsBatchingDoFn)+PyTorchRunInference/BeamML_RunInference+PyTorchRunInference/BeamML_RunInference_Postprocess-0+WriteOutputToGCS/Write/WriteImpl/Map(<lambda at iobase.py:1143>)+WriteOutputToGCS/Write/WriteImpl/WindowInto(WindowIntoFn)+WriteOutputToGCS/Write/WriteImpl/GroupByKey/Write
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:21.682Z: JOB_MESSAGE_BASIC: Finished operation ref_AppliedPTransform_ReadImageNames-Read-SDFBoundedSourceReader-ParDo-SDFBoundedSourceDoFn-_7/ProcessElementAndRestrictionWithSizing+FilterEmptyLines+PyTorchRunInference/BeamML_RunInference_Preprocess-0+PyTorchRunInference/BatchElements/ParDo(_GlobalWindowsBatchingDoFn)+PyTorchRunInference/BeamML_RunInference+PyTorchRunInference/BeamML_RunInference_Postprocess-0+WriteOutputToGCS/Write/WriteImpl/Map(<lambda at iobase.py:1143>)+WriteOutputToGCS/Write/WriteImpl/WindowInto(WindowIntoFn)+WriteOutputToGCS/Write/WriteImpl/GroupByKey/Write
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:21.732Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/GroupByKey/Close
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:21.813Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/GroupByKey/Close
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:21.908Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/GroupByKey/Read+WriteOutputToGCS/Write/WriteImpl/WriteBundles
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:24.884Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/GroupByKey/Read+WriteOutputToGCS/Write/WriteImpl/WriteBundles
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:24.956Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input1
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:24.975Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/PreFinalize/View-python_side_input1
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:25.007Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input1
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:25.023Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/PreFinalize/View-python_side_input1
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:25.107Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/PreFinalize
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:26.956Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/PreFinalize
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:27.048Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input2
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:27.102Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/FinalizeWrite/View-python_side_input2
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:27.201Z: JOB_MESSAGE_BASIC: Executing operation WriteOutputToGCS/Write/WriteImpl/FinalizeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:28.627Z: JOB_MESSAGE_BASIC: Finished operation WriteOutputToGCS/Write/WriteImpl/FinalizeWrite
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:55:28.807Z: JOB_MESSAGE_BASIC: Stopping worker pool...
INFO:apache_beam.runners.dataflow.dataflow_runner:2023-08-31T18:57:50.960Z: JOB_MESSAGE_BASIC: Worker pool stopped.
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2023-08-31_11_34_54-251862262020413173 is in state JOB_STATE_DONE
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Load test results for test: 0a566ceecd9244999969986af607af59 and timestamp: 1693508281.5192473:
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_num_inferences Value: 50000
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_count_inference_request_batch_byte_size Value: 5049
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_max_inference_request_batch_byte_size Value: 7055
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_min_inference_request_batch_byte_size Value: 102
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_sum_inference_request_batch_byte_size Value: 4475311
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_mean_inference_request_batch_byte_size Value: 886
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_count_inference_batch_latency_micro_secs Value: 5049
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_max_inference_batch_latency_micro_secs Value: 69289429
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_min_inference_batch_latency_micro_secs Value: 415110
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_sum_inference_batch_latency_micro_secs Value: 31889881288
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_mean_inference_batch_latency_micro_secs Value: 6316078
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_count_inference_request_batch_size Value: 5049
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_max_inference_request_batch_size Value: 80
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_min_inference_request_batch_size Value: 1
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_sum_inference_request_batch_size Value: 50000
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_mean_inference_request_batch_size Value: 9
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_count_load_model_latency_milli_secs Value: 150
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_max_load_model_latency_milli_secs Value: 537336
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_min_load_model_latency_milli_secs Value: 101307
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_sum_load_model_latency_milli_secs Value: 19011527
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_mean_load_model_latency_milli_secs Value: 126743
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_count_model_byte_size Value: 150
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_max_model_byte_size Value: 604639232
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_min_model_byte_size Value: 559144960
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_sum_model_byte_size Value: 84366962688
INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: BeamML_PyTorch_pytorchruninference/beamml_runinference_mean_model_byte_size Value: 562446417
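The metrics above are Beam RunInference distribution metrics (each reported as a count/max/min/sum/mean family). As a quick consistency check on the values copied from this log, each reported mean equals the floored integer quotient of the reported sum over the reported count; the sketch below verifies that (the dictionary keys are shorthand labels, not Beam metric names):

```python
# Sanity-check the RunInference distribution metrics logged above:
# for each distribution, the reported mean should be the floored
# integer mean of the reported sum over the reported count.
metrics = {
    "inference_batch_latency_micro_secs": dict(count=5049, sum=31889881288, mean=6316078),
    "inference_request_batch_size": dict(count=5049, sum=50000, mean=9),
    "load_model_latency_milli_secs": dict(count=150, sum=19011527, mean=126743),
    "model_byte_size": dict(count=150, sum=84366962688, mean=562446417),
}

for name, m in metrics.items():
    # Integer (floor) division reproduces the logged mean exactly.
    assert m["sum"] // m["count"] == m["mean"], name

print("all distribution means consistent")
```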

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.6.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 27m 31s
15 actionable tasks: 5 executed, 10 up-to-date

Publishing build scan...
https://ge.apache.org/s/6yia2uqx4sffs

FATAL: command execution failed
hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@2689ac79:apache-beam-jenkins-13": Remote call on apache-beam-jenkins-13 failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:993)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:285)
	at com.sun.proxy.$Proxy138.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1215)
	at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1207)
	at hudson.Launcher$ProcStarter.join(Launcher.java:524)
	at hudson.plugins.gradle.Gradle.perform(Gradle.java:321)
	at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:814)
	at hudson.model.Build$BuildExecution.build(Build.java:199)
	at hudson.model.Build$BuildExecution.doRun(Build.java:164)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:522)
	at hudson.model.Run.execute(Run.java:1896)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)
Caused by: java.io.IOException
	at hudson.remoting.Channel.close(Channel.java:1470)
	at hudson.remoting.Channel.close(Channel.java:1447)
	at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:902)
	at hudson.slaves.SlaveComputer.access$100(SlaveComputer.java:111)
	at hudson.slaves.SlaveComputer$2.run(SlaveComputer.java:782)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
FATAL: Channel "hudson.remoting.Channel@2689ac79:apache-beam-jenkins-13": Remote call on apache-beam-jenkins-13 failed. The channel is closing down or has closed down
java.io.IOException
	at hudson.remoting.Channel.close(Channel.java:1470)
	at hudson.remoting.Channel.close(Channel.java:1447)
	at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:902)
	at hudson.slaves.SlaveComputer.access$100(SlaveComputer.java:111)
	at hudson.slaves.SlaveComputer$2.run(SlaveComputer.java:782)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@2689ac79:apache-beam-jenkins-13": Remote call on apache-beam-jenkins-13 failed. The channel is closing down or has closed down
	at hudson.remoting.Channel.call(Channel.java:993)
	at hudson.Launcher$RemoteLauncher.kill(Launcher.java:1150)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
	at hudson.model.Run.execute(Run.java:1896)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:44)
	at hudson.model.ResourceController.execute(ResourceController.java:101)
	at hudson.model.Executor.run(Executor.java:442)

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Jenkins build is back to normal : beam_Inference_Python_Benchmarks_Dataflow #358

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow/358/display/redirect>

