Posted to commits@airflow.apache.org by GitBox <gi...@apache.org> on 2021/02/05 16:25:32 UTC

[GitHub] [airflow] potiuk commented on a change in pull request #13832: removing try-catch block to fix timeout exception getting ignored in aws batch operator

potiuk commented on a change in pull request #13832:
URL: https://github.com/apache/airflow/pull/13832#discussion_r571089084



##########
File path: airflow/providers/amazon/aws/operators/batch.py
##########
@@ -177,29 +177,26 @@ def submit_job(self, context: Dict):  # pylint: disable=unused-argument
             self.job_id = response["jobId"]
 
             self.log.info("AWS Batch job (%s) started: %s", self.job_id, response)
-
         except Exception as e:
             self.log.error("AWS Batch job (%s) failed submission", self.job_id)
             raise AirflowException(e)
 
     def monitor_job(self, context: Dict):  # pylint: disable=unused-argument
         """
         Monitor an AWS Batch job
+        monitor_job can raise an exception or an AirflowTaskTimeout can be raised if execution_timeout
+        is given while creating the task. These exceptions should be handled in taskinstance.py
+        instead of here like it was previously done

Review comment:
      It's about the only place in the entire codebase where the AirflowTaskTimeout exception is handled, @dstandish. I think it's not at all 'common' knowledge what's going on here. But I do agree that referring to the "previous" state is not needed. So if that can be removed, maybe we have a nice compromise ;).
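
To illustrate the issue the PR and review are discussing: a broad `except Exception` in `monitor_job` re-wraps *every* exception, including a timeout signal raised mid-task, so the caller can no longer distinguish a timeout from an ordinary operator failure. The sketch below is not Airflow's actual code; `TaskTimeout` and `OperatorError` are hypothetical stand-ins for `AirflowTaskTimeout` and `AirflowException`.

```python
class TaskTimeout(Exception):
    """Stand-in for an execution-timeout exception raised while the task runs."""


class OperatorError(Exception):
    """Stand-in for the generic exception the operator raises on failure."""


def monitor_with_broad_catch():
    """Old behaviour: the broad catch also swallows the timeout signal."""
    try:
        raise TaskTimeout("execution_timeout exceeded")
    except Exception as e:  # TaskTimeout is an Exception too, so it is caught here
        raise OperatorError(e)  # original exception type is lost


def monitor_without_catch():
    """New behaviour: no try/except, so the timeout propagates to the caller."""
    raise TaskTimeout("execution_timeout exceeded")
```

With the broad catch, the caller only ever sees `OperatorError` and cannot apply timeout-specific handling; removing the catch lets the timeout reach the layer (here, `taskinstance.py` in Airflow) that knows how to treat it.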




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org