Posted to commits@airflow.apache.org by GitBox <gi...@apache.org> on 2022/02/06 13:28:04 UTC

[GitHub] [airflow] potiuk commented on issue #21087: KubernetesJobWatcher failing on HTTP 410 errors, jobs stuck in scheduled state

potiuk commented on issue #21087:
URL: https://github.com/apache/airflow/issues/21087#issuecomment-1030832535


   I think this has a similar root cause to #12644. @dimberman @jedcunningham @kaxil - or maybe someone else who has more experience with K8S deployments in "real life" - this "Resource too old" error is returned by K8S when a watch refers to a resource version that is too old, i.e. too many changes have happened to the resource since that version for the API server to replay them.
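   For anyone less familiar with the watch API, here is a minimal sketch (not Airflow's actual KubernetesJobWatcher code; the namespace and starting resource version are made up) of where this HTTP 410 error comes from when watching pods with the Kubernetes Python client:
   
   ```python
   # Minimal illustration (hypothetical namespace/resource_version) of where the
   # HTTP 410 "resource version too old" error comes from when watching pods.
   from kubernetes import client, config, watch
   from kubernetes.client.rest import ApiException

   config.load_incluster_config()  # or config.load_kube_config() outside the cluster
   v1 = client.CoreV1Api()

   resource_version = "12345"  # a stale version the API server no longer remembers
   w = watch.Watch()
   try:
       # The API server keeps only a limited history of changes; if resource_version
       # is older than that window, the watch fails with HTTP 410 Gone.
       for event in w.stream(v1.list_namespaced_pod, namespace="airflow",
                             resource_version=resource_version):
           resource_version = event["object"].metadata.resource_version
           print(event["type"], event["object"].metadata.name)
   except ApiException as exc:
       if exc.status == 410:
           print("Resource version too old - need to re-list and restart the watch")
       else:
           raise
   ```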
   
   But I am just wondering - does it really happen, IMHO, because we deploy some changes "incrementally" too frequently (and too many times) in the chart/deployment? Or maybe because we do NOT deploy the "full" deployment where we should?
   
   I am not too experienced with long-running K8S deployments, but it looks to me like this could be solved by identifying which resources are affected and implementing some full "re-deployment" from time to time - see the sketch below.
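   To make that "start from a clean state from time to time" idea concrete on the watcher side, here is a rough sketch (the function name, namespace and retry policy are illustrative, not what Airflow currently does) of dropping the stale resource version on a 410, re-listing, and resuming the watch:
   
   ```python
   # Rough sketch of a self-healing watch loop: on HTTP 410 forget the stale
   # resource_version, re-list to get a fresh one, then resume watching.
   import time

   from kubernetes import client, config, watch
   from kubernetes.client.rest import ApiException


   def watch_pods_forever(namespace: str = "airflow") -> None:
       config.load_incluster_config()
       v1 = client.CoreV1Api()
       resource_version = None
       while True:
           if resource_version is None:
               # Full re-list - the watcher's equivalent of a "full re-deployment":
               # start again from the current state of the cluster.
               pods = v1.list_namespaced_pod(namespace=namespace)
               resource_version = pods.metadata.resource_version
           w = watch.Watch()
           try:
               for event in w.stream(v1.list_namespaced_pod, namespace=namespace,
                                     resource_version=resource_version):
                   resource_version = event["object"].metadata.resource_version
           except ApiException as exc:
               if exc.status == 410:
                   resource_version = None  # stale - force a fresh list next iteration
               else:
                   raise
           finally:
               w.stop()
           time.sleep(1)  # small back-off before reconnecting
   ```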
   
   It might be that this is outside of our control as well, but I've seen some other people complaining about this recently, so maybe someone who has more insight there could take a look?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@airflow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org