Posted to commits@airflow.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2019/08/13 22:13:00 UTC

[jira] [Commented] (AIRFLOW-4526) KubernetesPodOperator gets stuck in Running state when get_logs is set to True and there is a long gap without logs from pod

    [ https://issues.apache.org/jira/browse/AIRFLOW-4526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906667#comment-16906667 ] 

ASF GitHub Bot commented on AIRFLOW-4526:
-----------------------------------------

thealmightygrant commented on pull request #5813: [AIRFLOW-4526] KubernetesPodOperator gets stuck in Running state when get_logs is set to True and there is a long gap without logs from pod
URL: https://github.com/apache/airflow/pull/5813
 
 
   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [x] My PR addresses the following [Airflow Jira](https://issues.apache.org/jira/browse/AIRFLOW-4526) issues and references them in the PR title.
     - https://issues.apache.org/jira/browse/AIRFLOW-4526
   
   ### Description
   
   - [x] I dug through the airflow/k8s Python code, and it looks as if we can alleviate the referenced JIRA issue by letting users supply a timeout for the connection that streams logs from the pod running under the KubernetesPodOperator. From what I can glean of the codebase, the `pod_launcher` calls [`read_namespaced_pod_log`](https://github.com/apache/airflow/blob/5899cec01e0ea69a54e650a9e1abdbcd5370e120/airflow/kubernetes/pod_launcher.py#L149), which calls the k8s API via a `GET` request to `/api/v1/namespaces/{namespace}/pods/{name}/log`. This `GET` request is fulfilled by the API client through a [`request`](https://github.com/kubernetes-client/python/blob/master/kubernetes/client/api_client.py#L354). Every request accepts several optional parameters, one of which, `_preload_content`, Airflow already uses. In this PR, I am adding the ability to use another one, `_request_timeout`. This `_request_timeout` is handled by the `RESTClientObject`, implemented [here](https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L57); within the REST client, the `_request_timeout` is passed [here](https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L142) to `urllib3` as a [`Timeout`](https://urllib3.readthedocs.io/en/latest/user-guide.html#using-timeouts).
   
   A `urllib3` `Timeout` allows you "to control how long (in seconds) requests are allowed to run before being aborted". This means that when logs are not received within 10 minutes, the call will properly be aborted and the retry logic recently implemented in [PR-5284](https://github.com/apache/airflow/pull/5284) will then kick in. After three retries, as far as I can tell, the pod operator task would then be marked as a failure.
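To make the plumbing concrete, here is a minimal sketch of how `_request_timeout` could be passed through to the kubernetes client's `read_namespaced_pod_log` (the wrapper name `read_pod_logs` is illustrative, not Airflow's actual code; the kubernetes client forwards `_request_timeout` to `urllib3`, where a single number sets both connect and read timeouts and a 2-tuple sets them separately):

```python
# Sketch only: illustrates passing _request_timeout through to the
# kubernetes client, as proposed in this PR. A mock stands in for a
# real CoreV1Api client so no cluster is needed.
from unittest import mock

def read_pod_logs(client, name, namespace, read_timeout=600):
    """Stream pod logs, aborting if no data arrives within read_timeout seconds."""
    return client.read_namespaced_pod_log(
        name=name,
        namespace=namespace,
        follow=True,
        _preload_content=False,               # stream instead of buffering the body
        _request_timeout=(60, read_timeout),  # (connect, read) timeouts in seconds
    )

# Demonstration against a mock client:
client = mock.Mock()
read_pod_logs(client, "my-pod", "default")
client.read_namespaced_pod_log.assert_called_once_with(
    name="my-pod", namespace="default", follow=True,
    _preload_content=False, _request_timeout=(60, 600),
)
```

With the 2-tuple form, a hung read (no log bytes for `read_timeout` seconds) raises a timeout error instead of blocking forever, which is what lets the retry logic take over.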
   
   ### Tests
   
   - [x] My PR adds the following unit tests __OR__ does not need testing for this extremely good reason: this is an extremely hard-to-reproduce failure case. We have been unable to reproduce it reliably during testing, but it does occur on long-running tasks (> 1 hr). Any task that runs for a long period and has connection issues is difficult to test.
   
   ### Commits
   
   - [x] My commits all reference Jira issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)":
     1. Subject is separated from body by a blank line
     1. Subject is limited to 50 characters (not including Jira issue reference)
     1. Subject does not end with a period
     1. Subject uses the imperative mood ("add", not "adding")
     1. Body wraps at 72 characters
     1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [x] In case of new functionality, my PR adds documentation that describes how to use it.
     - All the public functions and the classes in the PR contain docstrings that explain what it does
     - If you implement backwards incompatible changes, please leave a note in the [Updating.md](https://github.com/apache/airflow/blob/master/UPDATING.md) so we can assign it to an appropriate release
   
   ### Code Quality
   
   - [x] Passes `flake8`
   
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> KubernetesPodOperator gets stuck in Running state when get_logs is set to True and there is a long gap without logs from pod
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: AIRFLOW-4526
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-4526
>             Project: Apache Airflow
>          Issue Type: Bug
>          Components: operators
>         Environment: Azure Kubernetes Service cluster with Airflow based on puckel/docker-airflow
>            Reporter: Christian Lellmann
>            Priority: Major
>              Labels: kubernetes
>             Fix For: 2.0.0
>
>
> When setting the `get_logs` parameter of the KubernetesPodOperator to True, the operator task gets stuck in the Running state if the pod run by the task (in_cluster mode) writes some logs, then stops writing logs for a longer time (a few minutes) before continuing. The continued logging is no longer fetched and the pod state is no longer checked. So, the completion of the pod isn't recognized and the task never finishes.
>  
> Assumption:
> In the `monitor_pod` method of the pod launcher ([https://github.com/apache/airflow/blob/master/airflow/kubernetes/pod_launcher.py#L97]) the `read_namespaced_pod_log` method of the kubernetes client probably gets stuck in the `follow=True` stream ([https://github.com/apache/airflow/blob/master/airflow/kubernetes/pod_launcher.py#L108]): after a period without logs from the pod, the method no longer forwards the subsequent logs.
> As a result, the `pod_launcher` never reaches the later pod state check ([https://github.com/apache/airflow/blob/master/airflow/kubernetes/pod_launcher.py#L118]) and doesn't recognize the completed state -> the task sticks in Running.
> When disabling the `get_logs` parameter everything works because the log stream is skipped.
>  
> Suggestion:
> Poll the logs actively without the `Follow` parameter set to True in parallel with the pod state checking.
> That way it's possible to fetch the logs without the described connection problem and concurrently check the pod state, so the end states of the pods are definitely recognized.
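The polling approach suggested in the quoted issue could be sketched roughly as below. This is a hypothetical illustration, not Airflow's actual `pod_launcher` code: `fetch_logs` stands in for a short, non-following log request and `pod_phase` for a pod status read.

```python
# Hypothetical sketch of the suggestion: poll logs with follow disabled
# while independently checking the pod phase, so a stalled log stream
# cannot mask pod completion.
import time

def monitor_pod(fetch_logs, pod_phase, poll_interval=1.0):
    """Poll logs and pod state together until the pod reaches an end state."""
    since = None
    while True:
        for line in fetch_logs(since=since):  # short, non-following request
            print(line)
        phase = pod_phase()                   # checked every iteration,
        if phase in ("Succeeded", "Failed"):  # regardless of log activity
            return phase
        since = time.time()
        time.sleep(poll_interval)
```

Because the state check runs on every loop iteration rather than after the log stream ends, a gap in logging can no longer prevent the launcher from seeing that the pod has completed.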



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)