Posted to issues@beam.apache.org by "Janek Bevendorff (Jira)" <ji...@apache.org> on 2022/01/24 09:08:00 UTC
[jira] [Comment Edited] (BEAM-12792) Beam worker only installs --extra_package once
[ https://issues.apache.org/jira/browse/BEAM-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17480934#comment-17480934 ]
Janek Bevendorff edited comment on BEAM-12792 at 1/24/22, 9:07 AM:
-------------------------------------------------------------------
There are two modes. One deployment mode is an ad-hoc application cluster, which is deployed explicitly for one job. The other is a session cluster, which can host multiple jobs (I guess that's the more common choice).
A session cluster can be deployed either as a static deployment (manually or via the third-party Flink K8S operator CRD; examples here: https://github.com/spotify/flink-on-k8s-operator/tree/master/examples/beam) or as a dynamic one via Flink's built-in "native" K8S mode (https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/). Neither binds the cluster lifecycle explicitly to a job. The native mode spawns new pods on demand and despawns them again afterwards (after a configured timeout), but if you submit two jobs in short succession, nothing prevents Flink from reusing an already-running container from the previous job.
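For illustration, a static session cluster deployed via the operator CRD might look roughly like this. This is a minimal sketch modelled on the linked examples, not a verified manifest: the apiVersion and field names depend on the operator version and may differ. Note there is no spec.job section, so the cluster's lifecycle is independent of any single submitted job.

```yaml
# Hypothetical minimal session cluster for the flink-on-k8s-operator.
# All names and values here are illustrative assumptions.
apiVersion: flinkoperator.k8s.io/v1beta1
kind: FlinkCluster
metadata:
  name: beam-session-cluster
spec:
  image:
    name: flink:1.13
  jobManager:
    resources:
      limits:
        memory: "1Gi"
  taskManager:
    replicas: 2
    resources:
      limits:
        memory: "2Gi"
```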
> Beam worker only installs --extra_package once
> ----------------------------------------------
>
> Key: BEAM-12792
> URL: https://issues.apache.org/jira/browse/BEAM-12792
> Project: Beam
> Issue Type: Bug
> Components: sdk-py-harness
> Affects Versions: 2.27.0, 2.28.0, 2.29.0, 2.30.0, 2.31.0
> Environment: Kubernetes 1.20 on Ubuntu 18.04.
> Reporter: Jens Wiren
> Priority: P1
> Labels: FlinkRunner, beam
>
> I'm running TFX pipelines on a Flink cluster using Beam in k8s. However, extra Python packages passed to the Flink runner (or rather, to the Beam worker side-car) are only installed once per deployment cycle. Example:
> # Flink is deployed and is up and running
> # A TFX pipeline starts and submits a job to Flink along with a Python whl of custom code and Beam ops.
> # The Beam worker installs the package and the pipeline finishes successfully.
> # A new TFX pipeline is built in which a new Beam fn is introduced; the pipeline is started and the new whl is submitted as in step 2.
> # This time, the new package is not installed in the Beam worker, causing the job to fail on a reference that does not exist in the worker, since it never installed the new package.
>
> I started using the Flink runner with Beam version 2.27, and this has been an issue ever since.
--
This message was sent by Atlassian Jira
(v8.20.1#820001)