Posted to commits@airflow.apache.org by GitBox <gi...@apache.org> on 2022/01/11 10:23:53 UTC

[GitHub] [airflow] hterik commented on a change in pull request #20806: Don't retry kubernetes pod start forever.

hterik commented on a change in pull request #20806:
URL: https://github.com/apache/airflow/pull/20806#discussion_r782007176



##########
File path: airflow/executors/kubernetes_executor.py
##########
@@ -599,6 +608,9 @@ def sync(self) -> None:
         for _ in range(self.kube_config.worker_pods_creation_batch_size):
             try:
                 task = self.task_queue.get_nowait()
+                if datetime.utcnow() < task.next_allowed_retry:
+                    continue

Review comment:
       Not sure how this will interact with the `self.event_scheduler.run(blocking=False)` call on line 646 below. Will the sync loop keep spinning very fast as long as the queue has items, or is there some other delay built in here?
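For context on the concern above: Airflow's `EventScheduler` builds on the stdlib `sched` module (an assumption here, based on its usage), and `scheduler.run(blocking=False)` only executes events that are already due. It returns the delay until the next event without sleeping, so unless the caller sleeps itself, a surrounding loop will spin. A minimal sketch:

```python
import sched
import time

s = sched.scheduler(time.time, time.sleep)
s.enter(5, 1, print, ("fire",))

# run(blocking=False) executes only events whose deadline has passed and
# returns the time remaining until the next scheduled event, without
# blocking. The caller is responsible for any sleep between iterations.
delay = s.run(blocking=False)
print(delay)  # roughly 5.0 here, since the only event is not yet due
```

This is why the question matters: if `sync()` re-enters immediately whenever the task queue is non-empty, the `continue` above yields no delay on its own.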

##########
File path: airflow/executors/kubernetes_executor.py
##########
@@ -50,8 +50,17 @@
 from airflow.utils.session import provide_session
 from airflow.utils.state import State
 
-# TaskInstance key, command, configuration, pod_template_file
-KubernetesJobType = Tuple[TaskInstanceKey, CommandType, Any, Optional[str]]
+
+class KubernetesJobType(NamedTuple):
+    key: TaskInstanceKey
+    command: CommandType
+    config: Any
+    pod_template_file: Optional[str]
+    next_allowed_retry: datetime
+    retry_count: int
+
+
+RETRY_BACKOFF_SECONDS = [1, 2, 5, 10, 10, 30, 60, 60]

Review comment:
       Are these numbers reasonable, or should the backoff continue for longer?
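For illustration, one plausible reading of the schedule above is that `retry_count` indexes into `RETRY_BACKOFF_SECONDS`, clamped to the last entry so retries beyond the table keep the final 60 s interval. The helper below is a hypothetical sketch of that reading, not code from the PR:

```python
from datetime import datetime, timedelta

# Backoff schedule quoted from the diff above.
RETRY_BACKOFF_SECONDS = [1, 2, 5, 10, 10, 30, 60, 60]


def next_allowed_retry(retry_count: int, now: datetime) -> datetime:
    """Hypothetical helper: map a retry count to the earliest next attempt.

    Indexes into RETRY_BACKOFF_SECONDS, clamping at the last entry so the
    backoff plateaus at 60 seconds rather than raising IndexError.
    """
    idx = min(retry_count, len(RETRY_BACKOFF_SECONDS) - 1)
    return now + timedelta(seconds=RETRY_BACKOFF_SECONDS[idx])
```

Under this reading, the total wait before giving up on the tabulated steps is 178 seconds, which frames the reviewer's question: should the plateau (or the table) extend further?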




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@airflow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org