Posted to commits@airflow.apache.org by GitBox <gi...@apache.org> on 2020/04/21 01:40:50 UTC

[GitHub] [airflow] ephraimbuddy opened a new pull request #8481: Add hook and operator for google cloud life sciences

ephraimbuddy opened a new pull request #8481:
URL: https://github.com/apache/airflow/pull/8481


   ---
   This PR addresses [#8272](https://github.com/apache/airflow/issues/8272)
   
   Make sure to mark the boxes below before creating PR: [x]
   
   - [ ] Description above provides context of the change
   - [ ] Unit tests coverage for changes (not needed for documentation changes)
   - [ ] Commits follow "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)"
   - [ ] Relevant documentation is updated including usage instructions.
   - [ ] I will engage committers as explained in [Contribution Workflow Example](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#contribution-workflow-example).
   
   ---
   In case of fundamental code change, Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvements+Proposals)) is needed.
   In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
   In case of backwards incompatible changes please leave a note in [UPDATING.md](https://github.com/apache/airflow/blob/master/UPDATING.md).
   Read the [Pull Request Guidelines](https://github.com/apache/airflow/blob/master/CONTRIBUTING.rst#pull-request-guidelines) for more information.
   



[GitHub] [airflow] mik-laj commented on issue #8481: Add hook and operator for google cloud life sciences

mik-laj commented on issue #8481:
URL: https://github.com/apache/airflow/pull/8481#issuecomment-617516126


   I ran the system test and got the error message below. I suspect the file you are referring to does not exist in my bucket. Can you create it in the `setUp` method? (A sketch follows the traceback.)
   ```
   [2020-04-22 02:43:49,073] {taskinstance.py:1186} ERROR - {'code': 9, 'message': 'Execution failed: generic::failed_precondition: while running "gsutil cp gs://test-airflow-life-sciences/input.in /tmp": unexpected exit status 1 was not ignored'}
   Traceback (most recent call last):
     File "/opt/airflow/airflow/models/taskinstance.py", line 1024, in _run_raw_task
       result = task_copy.execute(context=context)
     File "/opt/airflow/airflow/providers/google/cloud/operators/life_sciences.py", line 74, in execute
       project_id=self.project_id)
     File "/opt/airflow/airflow/providers/google/common/hooks/base_google.py", line 345, in inner_wrapper
       return func(self, *args, **kwargs)
     File "/opt/airflow/airflow/providers/google/cloud/hooks/life_sciences.py", line 101, in run_pipeline
       self._wait_for_operation_to_complete(operation_name)
     File "/opt/airflow/airflow/providers/google/cloud/hooks/life_sciences.py", line 148, in _wait_for_operation_to_complete
       raise AirflowException(str(error))
   airflow.exceptions.AirflowException: {'code': 9, 'message': 'Execution failed: generic::failed_precondition: while running "gsutil cp gs://test-airflow-life-sciences/input.in /tmp": unexpected exit status 1 was not ignored'}
   ```
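   
   A minimal sketch of what such a `setUp` could look like (an illustration only: it stages the file with a plain `gsutil` call via `subprocess` rather than Airflow's GCP system-test helpers, and the bucket and object names are taken from the error above):
   ```python
   import os
   import subprocess
   import tempfile
   import unittest
   
   
   class CloudLifeSciencesExampleDagsSystemTest(unittest.TestCase):
       def setUp(self):
           super().setUp()
           # Stage the input file that the first pipeline action copies, so that
           # "gsutil cp gs://<bucket>/input.in /tmp" can succeed.
           bucket = os.environ.get("GCP_GCS_BUCKET", "test-airflow-life-sciences")
           with tempfile.NamedTemporaryFile(suffix=".in", delete=False) as tmp:
               tmp.write(b"test data\n")
           subprocess.run(
               ["gsutil", "cp", tmp.name, "gs://{}/input.in".format(bucket)],
               check=True,
           )
   ```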



[GitHub] [airflow] mik-laj commented on a change in pull request #8481: Add hook and operator for google cloud life sciences

mik-laj commented on a change in pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#discussion_r412612368



##########
File path: docs/howto/operator/gcp/life_sciences.rst
##########
@@ -0,0 +1,79 @@
+ .. Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+ ..   http://www.apache.org/licenses/LICENSE-2.0
+
+ .. Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+
+
+Google Cloud Life Sciences Operators
+====================================
+The `Google Cloud Life Sciences <https://cloud.google.com/life-sciences/>`__ is a service that executes
+series of compute engine containers on the google cloud. It is used to process, analyze and annotate genomics

Review comment:
       ```suggestion
   series of compute engine containers on the Google Cloud Platform. It is used to process, analyze and annotate genomics
   ```





[GitHub] [airflow] mik-laj commented on a change in pull request #8481: Add hook and operator for google cloud life sciences

mik-laj commented on a change in pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#discussion_r414964222



##########
File path: airflow/providers/google/cloud/example_dags/example_life_sciences.py
##########
@@ -0,0 +1,108 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Example Airflow DAG that displays interactions with Google Cloud Life Sciences.
+
+This DAG relies on the following OS environment variables:
+
+* GCP_PROJECT_ID - Google Cloud Project to use for the Cloud Function.
+* GCP_GCS_BUCKET - Google Cloud Storage Bucket to use
+* GCP_LIFE_SCIENCES_LOCATION - The Location of the Google Cloud Project

Review comment:
       ```suggestion
   ```
   We do not need variable lists because they are very difficult to keep accurate. In this case, one variable is already missing.





[GitHub] [airflow] ephraimbuddy commented on a change in pull request #8481: Add hook and operator for google cloud life sciences

ephraimbuddy commented on a change in pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#discussion_r412782575



##########
File path: airflow/providers/google/cloud/example_dags/example_life_sciences.py
##########
@@ -0,0 +1,106 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Example Airflow DAG that displays interactions with Google Cloud Life Sciences.
+
+This DAG relies on the following OS environment variables:
+
+* GCP_PROJECT_ID - Google Cloud Project to use for the Cloud Function.
+* GCP_GCS_BUCKET - Google Cloud Storage Bucket to use
+* GCP_LOCATION - The Location of the Google Cloud Project
+"""
+import os
+
+from airflow import models
+from airflow.providers.google.cloud.operators.life_sciences import LifeSciencesRunPipelineOperator
+from airflow.utils import dates
+
+PROJECT_ID = os.environ.get("GCP_PROJECT_ID", "example-project-id")
+BUCKET = os.environ.get("GCP_GCS_BUCKET", "example-bucket")
+LOCATION = os.environ.get("GCP_LOCATION", 'example-location')

Review comment:
       Ok. Thanks





[GitHub] [airflow] ephraimbuddy commented on pull request #8481: Add hook and operator for google cloud life sciences

ephraimbuddy commented on pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#issuecomment-619340048


   Thanks so much @mik-laj, you make things so easy! I'd like to be like you when I grow up in this!!



[GitHub] [airflow] mik-laj commented on a change in pull request #8481: Add hook and operator for google cloud life sciences

mik-laj commented on a change in pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#discussion_r412611758



##########
File path: airflow/providers/google/cloud/example_dags/example_life_sciences.py
##########
@@ -0,0 +1,106 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Example Airflow DAG that displays interactions with Google Cloud Life Sciences.
+
+This DAG relies on the following OS environment variables:
+
+* GCP_PROJECT_ID - Google Cloud Project to use for the Cloud Function.
+* GCP_GCS_BUCKET - Google Cloud Storage Bucket to use
+* GCP_LOCATION - The Location of the Google Cloud Project
+"""
+import os
+
+from airflow import models
+from airflow.providers.google.cloud.operators.life_sciences import LifeSciencesRunPipelineOperator
+from airflow.utils import dates
+
+PROJECT_ID = os.environ.get("GCP_PROJECT_ID", "example-project-id")
+BUCKET = os.environ.get("GCP_GCS_BUCKET", "example-bucket")
+LOCATION = os.environ.get("GCP_LOCATION", 'example-location')

Review comment:
       ```suggestion
   LOCATION = os.environ.get("GCP_LIFE_SCIENCE_LOCATION", 'europe-west4')
   ```
   Can you use a unique name for this variable? This is useful because we have many system tests. A default value that works out of the box is also useful because it lets you run the tests with fewer environment variables.





[GitHub] [airflow] kaxil commented on a change in pull request #8481: Add hook and operator for google cloud life sciences

kaxil commented on a change in pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#discussion_r412583737



##########
File path: airflow/providers/google/cloud/hooks/life_sciences.py
##########
@@ -0,0 +1,150 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Hook for Google Cloud Life Sciences service"""
+
+import time
+from typing import Any, Dict, Optional
+
+import google.api_core.path_template
+from googleapiclient.discovery import build
+
+from airflow.exceptions import AirflowException
+from airflow.providers.google.common.hooks.base_google import GoogleBaseHook
+
+# Time to sleep between active checks of the operation results
+TIME_TO_SLEEP_IN_SECONDS = 5
+
+
+# noinspection PyAbstractClass
+class LifeSciencesHook(GoogleBaseHook):
+    """
+    Hook for the Google Cloud Life Sciences APIs.
+
+    All the methods in the hook where project_id is used must be called with
+    keyword arguments rather than positional.
+
+    :param api_version: API version used (for example v1 or v1beta1).
+    :type api_version: str
+    :param gcp_conn_id: The connection ID to use when fetching connection info.
+    :type gcp_conn_id: str
+    :param delegate_to: The account to impersonate, if any.
+        For this to work, the service account making the request must have
+        domain-wide delegation enabled.
+    :type delegate_to: str
+    """
+
+    _conn = None  # type: Optional[Any]
+
+    def __init__(
+        self,
+        api_version: str = "v2beta",
+        gcp_conn_id: str = "google_cloud_default",
+        delegate_to: Optional[str] = None
+    ) -> None:
+        super().__init__(gcp_conn_id, delegate_to)
+        self.api_version = api_version
+
+    def get_conn(self):
+        """
+        Retrieves the connection to Cloud Life Sciences.
+
+        :return: Google Cloud Life Sciences service object.
+        """
+        if not self._conn:
+            http_authorized = self._authorize()
+            self._conn = build("lifesciences", self.api_version,
+                               http=http_authorized, cache_discovery=False)
+        return self._conn
+
+    @GoogleBaseHook.fallback_to_default_project_id
+    def run_pipeline(self, body: Dict, location: str, project_id: str):
+        """
+        Runs a pipeline
+
+        :param body: The request body.
+        :type body: dict
+        :param location: The location of the project. For example: "us-east1".
+        :type location: str
+        :param project_id: Optional, Google Cloud Project project_id where the function belongs.
+            If set to None or missing, the default project_id from the GCP connection is used.
+        :type project_id: str
+        :return: Dict

Review comment:
       ```suggestion
           :rtype: dict
   ```
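   
   As context, a minimal usage sketch of the hook exactly as declared in this diff (the body and project values below are illustrative placeholders):
   ```python
   from airflow.providers.google.cloud.hooks.life_sciences import LifeSciencesHook
   
   # An illustrative RunPipelineRequest body; see the howto docs for a realistic one.
   pipeline_request = {"pipeline": {"actions": [], "resources": {}}}
   
   hook = LifeSciencesHook(api_version="v2beta", gcp_conn_id="google_cloud_default")
   # run_pipeline polls until the operation completes and returns the response dict.
   response = hook.run_pipeline(
       body=pipeline_request,
       location="us-central1",
       project_id="example-project-id",
   )
   ```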





[GitHub] [airflow] mik-laj commented on a change in pull request #8481: Add hook and operator for google cloud life sciences

mik-laj commented on a change in pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#discussion_r412612902



##########
File path: docs/howto/operator/gcp/life_sciences.rst
##########
@@ -0,0 +1,79 @@
+ .. Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+ ..   http://www.apache.org/licenses/LICENSE-2.0
+
+ .. Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+
+
+Google Cloud Life Sciences Operators
+====================================
+The `Google Cloud Life Sciences <https://cloud.google.com/life-sciences/>`__ is a service that executes
+series of compute engine containers on the google cloud. It is used to process, analyze and annotate genomics
+and biomedical data at scale.
+
+.. contents::
+  :depth: 1
+  :local:
+
+
+Prerequisite Tasks
+^^^^^^^^^^^^^^^^^^
+
+.. include:: _partials/prerequisite_tasks.rst
+
+
+Pipeline Configuration
+^^^^^^^^^^^^^^^^^^^^^^
+In order to run the pipeline, it is necessary to configure the request body.
+Here is an example of the pipeline configuration with a single action.
+
+.. exampleinclude:: ../../../../airflow/providers/google/cloud/example_dags/example_life_sciences.py
+    :language: python
+    :dedent: 0
+    :start-after: [START howto_configure_simple_action_pipeline]
+    :end-before: [END howto_configure_simple_action_pipeline]
+
+The pipeline can also be configured with multiple action.
+
+.. exampleinclude:: ../../../../airflow/providers/google/cloud/example_dags/example_life_sciences.py
+    :language: python
+    :dedent: 0
+    :start-after: [START howto_configure_multiple_action_pipeline]
+    :end-before: [END howto_configure_multiple_action_pipeline]
+
+Read about the `request body parameters <https://cloud.google.com/life-sciences/docs/reference/rest/v2beta/projects.locations.pipelines/run?authuser=1#request-body/>`__
+to understand all the fields you can include in the configuration
+
+.. _howto/operator:LifeSciencesRunPipelineOperator:
+
+LifeSciencesRunPipelineOperator

Review comment:
       ```suggestion
   Running a pipeline
   ```
   The title should be readable by, and intended for, new users. It should describe the operation, not the operator name.
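   
   For readers without the example DAG open, a single-action request body could look roughly like this (an illustrative sketch following the v2beta `RunPipelineRequest` reference linked in the hunk above, not the file's actual contents):
   ```python
   # Illustrative pipeline body; field names follow the v2beta API reference,
   # the image, command, and machine values are placeholders.
   SIMPLE_ACTION_PIPELINE = {
       "pipeline": {
           "actions": [
               {"imageUri": "bash", "commands": ["-c", "echo Hello, world"]},
           ],
           "resources": {
               "regions": ["us-central1"],
               "virtualMachine": {"machineType": "n1-standard-1"},
           },
       },
   }
   ```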





[GitHub] [airflow] ephraimbuddy commented on pull request #8481: Add hook and operator for google cloud life sciences

ephraimbuddy commented on pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#issuecomment-618939275


   Hi @mik-laj, I didn't run the system tests; I have not been able to set them up. However, I did use the example DAG to test it in a real scenario and it worked. Can you point me to how I can create and upload a file to storage so that the system test can run? I am currently trying hard to get the system test working myself. Thanks



[GitHub] [airflow] mik-laj commented on pull request #8481: Add hook and operator for google cloud life sciences

mik-laj commented on pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#issuecomment-619311338


   The system tests only work if you have run breeze with a MySQL or Postgres database.
   ```
   ./breeze --backend postgres
   ./breeze --backend mysql
   ```
   
   To run system tests for GCP you need two things:
   * Environment variables
   * Key for the service account
   
   The key can be any key that has sufficient permissions. I often use a key with the "Project owner" role. The key should be saved in the `/files/airflow-breeze-config/keys/gcp_life_sciences.json` file.
   You can check the environment variables in the example DAG. In this case, you should set the following variables:
   ```
   export GCP_PROJECT_ID=example-project-id
   export GCP_GCS_BUCKET=example-bucket
   ```
   These variables are important because they provide test isolation or are required for authorization.
   
   When everything is configured, you can run the following command to run the test.
   ```
   pytest tests/providers/google/cloud/operators/test_life_sciences_system.py --system google -s
   ```
   During development, to make sure everything works, I often run tests using the following command.
   ```
   GCP_GCS_BUCKET=airflow-life-science-$RANDOM pytest tests/providers/google/cloud/operators/test_life_sciences_system.py --system google -s
   ```
   That way I can be sure that everything works.
   
   pytest displays helpful error messages. However, you can get lost in them if you run tests for the entire project.
   ```
   root@6750e034be9c:/opt/airflow# GCP_GCS_BUCKET=airflow-life-science-$RANDOM pytest tests/providers/google/cloud/operators/test_life_sciences_system.py --system google -s
   =========================================================================================== test session starts ============================================================================================
   platform linux -- Python 3.6.10, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /usr/local/bin/python
   cachedir: .pytest_cache
   rootdir: /opt/airflow, inifile: pytest.ini
   plugins: rerunfailures-9.0, forked-1.1.3, xdist-1.31.0, timeout-1.3.4, flaky-3.6.1, instafail-0.4.1.post0, requests-mock-1.7.0, celery-4.4.2, cov-2.8.1
   collected 1 item
   
   tests/providers/google/cloud/operators/test_life_sciences_system.py::CloudLifeSciencesExampleDagsSystemTest::test_run_example_dag_function <- ../../Users/kamilbregula/devel/google-airflow/airflow/tests/providers/google/cloud/operators/test_life_sciences_system.py SKIPPED
   
   ========================================================================================= short test summary info ==========================================================================================
   SKIPPED [1] /Users/kamilbregula/devel/google-airflow/airflow/tests/conftest.py:308: The test requires credential file /files/airflow-breeze-config/keys/gcp_life_sciences.json: <TestCaseFunction test_run_example_dag_function>
   ============================================================================================ 1 skipped in 1.06s ============================================================================================
   root@6750e034be9c:/opt/airflow#
   ```
   The crucial part is the "short test summary info" section:
   ```
   SKIPPED [1] /Users/kamilbregula/devel/google-airflow/airflow/tests/conftest.py:308: The test requires credential file /files/airflow-breeze-config/keys/gcp_life_sciences.json: <TestCaseFunction test_run_example_dag_function>
   ```
   This message indicates that the key is missing and you must create it.
   
   More information about creating a service account key is available at: https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-accounts-upload-gcloud
   
   ```
   root@6750e034be9c:/opt/airflow# GCP_GCS_BUCKET=airflow-life-science-$RANDOM pytest tests/providers/google/cloud/operators/test_life_sciences_system.py --system google -s
   =========================================================================================== test session starts ============================================================================================
   platform linux -- Python 3.6.10, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /usr/local/bin/python
   cachedir: .pytest_cache
   rootdir: /opt/airflow, inifile: pytest.ini
   plugins: rerunfailures-9.0, forked-1.1.3, xdist-1.31.0, timeout-1.3.4, flaky-3.6.1, instafail-0.4.1.post0, requests-mock-1.7.0, celery-4.4.2, cov-2.8.1
   collected 1 item
   
   tests/providers/google/cloud/operators/test_life_sciences_system.py::CloudLifeSciencesExampleDagsSystemTest::test_run_example_dag_function ========================= AIRFLOW ==========================
   Home of the user: /root
   Airflow home /root/airflow
   Skipping initializing of the DB as it was initialized already.
   You can re-initialize the database by adding --with-db-init flag when running tests.
   [2020-04-25 02:46:19,780] {logging_command_executor.py:33} INFO - Executing: 'gcloud auth activate-service-account --key-file=/files/airflow-breeze-config/keys/gcp_life_sciences.json'
   [2020-04-25 02:46:20,671] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:46:20,672] {logging_command_executor.py:41} INFO - Stderr: Activated service account credentials for: [gcp-storage-transfer-account@polidea-airflow.iam.gserviceaccount.com]
   
   
   Removing all log files except previous_runs
   
   [2020-04-25 02:46:20,738] {logging_command_executor.py:33} INFO - Executing: 'gcloud config set core/project polidea-airflow'
   [2020-04-25 02:46:21,385] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:46:21,386] {logging_command_executor.py:41} INFO - Stderr: Updated property [core/project].
   
   [2020-04-25 02:46:21,386] {logging_command_executor.py:33} INFO - Executing: 'gcloud auth activate-service-account --key-file=/files/airflow-breeze-config/keys/gcp_gcs.json'
   [2020-04-25 02:46:22,099] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:46:22,100] {logging_command_executor.py:41} INFO - Stderr: Activated service account credentials for: [gcp-storage-account@polidea-airflow.iam.gserviceaccount.com]
   
   [2020-04-25 02:46:22,100] {logging_command_executor.py:33} INFO - Executing: 'gsutil mb -c regional -l us-central1 gs://airflow-life-science-9878'
   [2020-04-25 02:46:24,594] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:46:24,595] {logging_command_executor.py:41} INFO - Stderr: Creating gs://airflow-life-science-9878/...
   
   [2020-04-25 02:46:24,965] {logging_command_executor.py:33} INFO - Executing: 'gcloud config set core/project polidea-airflow'
   [2020-04-25 02:46:25,597] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:46:25,598] {logging_command_executor.py:41} INFO - Stderr: Updated property [core/project].
   
   [2020-04-25 02:46:25,599] {logging_command_executor.py:33} INFO - Executing: 'gcloud auth activate-service-account --key-file=/files/airflow-breeze-config/keys/gcp_gcs.json'
   [2020-04-25 02:46:26,352] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:46:26,353] {logging_command_executor.py:41} INFO - Stderr: Activated service account credentials for: [gcp-storage-account@polidea-airflow.iam.gserviceaccount.com]
   
   [2020-04-25 02:46:26,354] {logging_command_executor.py:33} INFO - Executing: 'gsutil cp /tmp/airflow-gcprsd8gqdb/input.in gs://airflow-life-science-9878'
   [2020-04-25 02:46:41,301] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:46:41,302] {logging_command_executor.py:41} INFO - Stderr: Copying file:///tmp/airflow-gcprsd8gqdb/input.in [Content-Type=application/octet-stream]...
   / [0 files][    0.0 B/  2.9 MiB]
   / [0 files][  2.1 MiB/  2.9 MiB]
   -
   \
   \ [0 files][  2.8 MiB/  2.9 MiB]
   |
   | [1 files][  2.9 MiB/  2.9 MiB]
   /
   Operation completed over 1 objects/2.9 MiB.
   
   [2020-04-25 02:46:41,320] {logging_command_executor.py:33} INFO - Executing: 'gcloud auth activate-service-account --key-file=/files/airflow-breeze-config/keys/gcp_life_sciences.json'
   [2020-04-25 02:46:42,266] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:46:42,266] {logging_command_executor.py:41} INFO - Stderr: Activated service account credentials for: [gcp-storage-transfer-account@polidea-airflow.iam.gserviceaccount.com]
   
   [2020-04-25 02:46:42,267] {system_tests_class.py:137} INFO - Looking for DAG: example_gcp_life_sciences in /opt/airflow/airflow/providers/google/cloud/example_dags
   [2020-04-25 02:46:42,267] {dagbag.py:368} INFO - Filling up the DagBag from /opt/airflow/airflow/providers/google/cloud/example_dags
   [2020-04-25 02:46:44,641] {system_tests_class.py:151} INFO - Attempting to run DAG: example_gcp_life_sciences
   [2020-04-25 02:46:45,291] {taskinstance.py:718} INFO - Dependencies all met for <TaskInstance: example_gcp_life_sciences.simple-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>
   [2020-04-25 02:46:45,303] {base_executor.py:75} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'example_gcp_life_sciences', 'simple-action-pipeline', '2020-04-24T00:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/airflow/providers/google/cloud/example_dags/example_life_sciences.py', '--cfg-path', '/tmp/tmpywpbvs43']
   [2020-04-25 02:46:45,317] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:45,785] {local_executor.py:66} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'example_gcp_life_sciences', 'simple-action-pipeline', '2020-04-24T00:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/airflow/providers/google/cloud/example_dags/example_life_sciences.py', '--cfg-path', '/tmp/tmpywpbvs43']
   [2020-04-25 02:46:45,801] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:45,821] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:46,789] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:46,806] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:47,795] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:47,813] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:47,957] {dagbag.py:368} INFO - Filling up the DagBag from /opt/airflow/airflow/providers/google/cloud/example_dags/example_life_sciences.py
   Running <TaskInstance: example_gcp_life_sciences.simple-action-pipeline 2020-04-24T00:00:00+00:00 [None]> on host 6750e034be9c
   [2020-04-25 02:46:48,804] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:48,825] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:49,808] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:49,821] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:50,817] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:50,831] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:51,825] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:51,840] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:52,833] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:52,848] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:53,842] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:53,857] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:54,846] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:54,865] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:55,853] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:55,867] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:56,863] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:56,877] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:57,864] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:57,878] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:58,825] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:58,848] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:46:59,830] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:46:59,851] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:00,844] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:00,857] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:01,852] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:01,867] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:02,861] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:02,874] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:03,859] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:03,875] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:04,871] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:04,885] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:05,883] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:05,906] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:06,890] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:06,903] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:07,895] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:07,909] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:08,898] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:08,917] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:09,907] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:09,929] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:10,915] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:10,932] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:11,929] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:11,947] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:12,934] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:12,949] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:13,934] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:13,960] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:14,946] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:14,963] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
   [2020-04-25 02:47:15,944] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 0 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:15,963] {taskinstance.py:712} INFO - Dependencies not met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_tasks_state={'total': 1, 'successes': 0, 'skipped': 0, 'failed': 0, 'upstream_failed': 0, 'done': 0}, upstream_task_ids={'simple-action-pipeline'}
    [... the two log lines above repeat once per second, with identical counts, until 02:47:27 ...]
   [2020-04-25 02:47:27,975] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 1 | succeeded: 1 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 1
   [2020-04-25 02:47:27,999] {taskinstance.py:718} INFO - Dependencies all met for <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [scheduled]>
   [2020-04-25 02:47:28,006] {base_executor.py:75} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'example_gcp_life_sciences', 'multi-action-pipeline', '2020-04-24T00:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/airflow/providers/google/cloud/example_dags/example_life_sciences.py', '--cfg-path', '/tmp/tmpdr3aiox9']
   [2020-04-25 02:47:28,969] {backfill_job.py:262} WARNING - ('example_gcp_life_sciences', 'simple-action-pipeline', datetime.datetime(2020, 4, 24, 0, 0, tzinfo=<TimezoneInfo [UTC, GMT, +00:00:00, STD]>), 2) state success not in running=dict_values([<TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24 00:00:00+00:00 [queued]>])
   [2020-04-25 02:47:28,969] {local_executor.py:66} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'example_gcp_life_sciences', 'multi-action-pipeline', '2020-04-24T00:00:00+00:00', '--local', '--pool', 'default_pool', '--subdir', '/opt/airflow/airflow/providers/google/cloud/example_dags/example_life_sciences.py', '--cfg-path', '/tmp/tmpdr3aiox9']
   [2020-04-25 02:47:28,991] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 0 | succeeded: 1 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
   [2020-04-25 02:47:29,980] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 0 | succeeded: 1 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
   [2020-04-25 02:47:30,986] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 0 | succeeded: 1 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
   [2020-04-25 02:47:31,113] {dagbag.py:368} INFO - Filling up the DagBag from /opt/airflow/airflow/providers/google/cloud/example_dags/example_life_sciences.py
   Running <TaskInstance: example_gcp_life_sciences.multi-action-pipeline 2020-04-24T00:00:00+00:00 [None]> on host 6750e034be9c
   [2020-04-25 02:47:31,994] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 0 | succeeded: 1 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
    [... this backfill progress line repeats once per second, unchanged except for the timestamp, from 02:47:33 through 02:49:36 ...]
   [2020-04-25 02:49:37,728] {backfill_job.py:379} INFO - [backfill progress] | finished run 0 of 1 | tasks waiting: 0 | succeeded: 1 | running: 1 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
   [2020-04-25 02:49:38,735] {dagrun.py:336} INFO - Marking run <DagRun example_gcp_life_sciences @ 2020-04-24 00:00:00+00:00: backfill__2020-04-24T00:00:00+00:00, externally triggered: False> successful
   [2020-04-25 02:49:38,749] {backfill_job.py:379} INFO - [backfill progress] | finished run 1 of 1 | tasks waiting: 0 | succeeded: 2 | running: 0 | failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 0
   [2020-04-25 02:49:38,993] {backfill_job.py:830} INFO - Backfill done. Exiting.
   [2020-04-25 02:49:39,014] {logging_command_executor.py:33} INFO - Executing: 'gcloud auth activate-service-account --key-file=/files/airflow-breeze-config/keys/gcp_life_sciences.json'
   [2020-04-25 02:49:40,148] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:49:40,148] {logging_command_executor.py:41} INFO - Stderr: Activated service account credentials for: [gcp-storage-transfer-account@polidea-airflow.iam.gserviceaccount.com]
   
   [2020-04-25 02:49:40,152] {logging_command_executor.py:33} INFO - Executing: 'gcloud config set core/project polidea-airflow'
   [2020-04-25 02:49:40,880] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:49:40,881] {logging_command_executor.py:41} INFO - Stderr: Updated property [core/project].
   
   [2020-04-25 02:49:40,881] {logging_command_executor.py:33} INFO - Executing: 'gcloud auth activate-service-account --key-file=/files/airflow-breeze-config/keys/gcp_gcs.json'
   [2020-04-25 02:49:41,655] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:49:41,656] {logging_command_executor.py:41} INFO - Stderr: Activated service account credentials for: [gcp-storage-account@polidea-airflow.iam.gserviceaccount.com]
   
   [2020-04-25 02:49:41,656] {logging_command_executor.py:33} INFO - Executing: 'gsutil -m rm -r gs://airflow-life-science-9878'
   [2020-04-25 02:49:44,373] {logging_command_executor.py:40} INFO - Stdout:
   [2020-04-25 02:49:44,373] {logging_command_executor.py:41} INFO - Stderr: Removing gs://airflow-life-science-9878/input.in#1587782801052913...
   Removing gs://airflow-life-science-9878/output.in#1587782971651713...
   / [1/2 objects]  50% Done
   / [2/2 objects] 100% Done
   Operation completed over 2 objects.
   Removing gs://airflow-life-science-9878/...
   
   
   Saving all log files to /root/airflow/logs/previous_runs/2020-04-25_02_49_44
   
   PASSED
   
   ============================================================================================= warnings summary =============================================================================================
   tests/providers/google/cloud/operators/test_life_sciences_system.py::CloudLifeSciencesExampleDagsSystemTest::test_run_example_dag_function
     /opt/airflow/airflow/providers/google/cloud/example_dags/example_mlengine.py:82: DeprecationWarning: This operator is deprecated. Consider using operators for specific operations: MLEngineCreateModelOperator, MLEngineGetModelOperator.
       "name": MODEL_NAME,
   
   tests/providers/google/cloud/operators/test_life_sciences_system.py::CloudLifeSciencesExampleDagsSystemTest::test_run_example_dag_function
     /opt/airflow/airflow/providers/google/cloud/example_dags/example_mlengine.py:91: DeprecationWarning: This operator is deprecated. Consider using operators for specific operations: MLEngineCreateModelOperator, MLEngineGetModelOperator.
       "name": MODEL_NAME,
   
   tests/providers/google/cloud/operators/test_life_sciences_system.py::CloudLifeSciencesExampleDagsSystemTest::test_run_example_dag_function
     /usr/local/lib/python3.6/site-packages/future/standard_library/__init__.py:65: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
       import imp
   
   tests/providers/google/cloud/operators/test_life_sciences_system.py::CloudLifeSciencesExampleDagsSystemTest::test_run_example_dag_function
     /opt/airflow/airflow/providers/google/cloud/example_dags/example_datacatalog.py:26: DeprecationWarning: This module is deprecated. Please use `airflow.operators.bash`.
       from airflow.operators.bash_operator import BashOperator
   
   -- Docs: https://docs.pytest.org/en/latest/warnings.html
   ================================================================================ 1 passed, 4 warnings in 205.58s (0:03:25) =================================================================================
   root@6750e034be9c:/opt/airflow#
   ```
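   Side note on the teardown above: the `gsutil -m rm -r` call removes an `input.in` the DAG consumed, so the system test presumably stages that file before the backfill starts. A hypothetical helper for that (a sketch only; the command strings are modeled on the log and the bucket name is reused from it, none of this is the actual test code) might look like:
   
   ```python
   import subprocess
   
   BUCKET = "gs://airflow-life-science-9878"  # bucket name as it appears in the teardown log
   
   def stage_test_input(bucket: str = BUCKET) -> None:
       """Create the bucket if needed and upload the input.in file the example DAG reads."""
       with open("/tmp/input.in", "w") as handle:
           handle.write("test input\n")
       # `gsutil mb` fails if the bucket already exists, which is fine for a test fixture
       subprocess.run(["gsutil", "mb", bucket], check=False)
       subprocess.run(["gsutil", "cp", "/tmp/input.in", f"{bucket}/input.in"], check=True)
   ```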
   
   </details>
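On the deprecation warnings at the end of the run: the `bash_operator` one is a one-line import change, as the warning text itself suggests:

```python
# New-style import named by the warning, replacing airflow.operators.bash_operator
from airflow.operators.bash import BashOperator
```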


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [airflow] kaxil commented on a change in pull request #8481: Add hook and operator for google cloud life sciences

Posted by GitBox <gi...@apache.org>.
kaxil commented on a change in pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#discussion_r412584323



##########
File path: airflow/providers/google/cloud/operators/life_sciences.py
##########
@@ -0,0 +1,74 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Operators that interact with Google Cloud Life Sciences service."""
+
+from typing import Iterable, Optional
+
+from airflow.exceptions import AirflowException
+from airflow.models import BaseOperator
+from airflow.providers.google.cloud.hooks.life_sciences import LifeSciencesHook
+from airflow.utils.decorators import apply_defaults
+
+
+class LifeSciencesRunPipelineOperator(BaseOperator):
+    """
+    Runs a pipeline

Review comment:
       Can we add a little more description? I know it is obvious, but still: "Runs a Life Sciences Pipeline" or something similar.
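   For illustration, the fuller docstring being asked for might read like the sketch below (wording assumed here, not necessarily the text that finally landed in the PR):

   ```python
   from airflow.models import BaseOperator


   class LifeSciencesRunPipelineOperator(BaseOperator):
       """
       Runs a pipeline on the Google Cloud Life Sciences service.

       :param body: The request body of the pipeline to run.
       :type body: dict
       :param location: The location of the project, for example "us-central1".
       :type location: str
       """
   ```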




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [airflow] kaxil commented on a change in pull request #8481: Add hook and operator for google cloud life sciences

Posted by GitBox <gi...@apache.org>.
kaxil commented on a change in pull request #8481:
URL: https://github.com/apache/airflow/pull/8481#discussion_r412583737



##########
File path: airflow/providers/google/cloud/hooks/life_sciences.py
##########
@@ -0,0 +1,150 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Hook for Google Cloud Life Sciences service"""
+
+import time
+from typing import Any, Dict, Optional
+
+import google.api_core.path_template
+from googleapiclient.discovery import build
+
+from airflow.exceptions import AirflowException
+from airflow.providers.google.common.hooks.base_google import GoogleBaseHook
+
+# Time to sleep between active checks of the operation results
+TIME_TO_SLEEP_IN_SECONDS = 5
+
+
+# noinspection PyAbstractClass
+class LifeSciencesHook(GoogleBaseHook):
+    """
+    Hook for the Google Cloud Life Sciences APIs.
+
+    All the methods in the hook where project_id is used must be called with
+    keyword arguments rather than positional.
+
+    :param api_version: API version used (for example v1 or v1beta1).
+    :type api_version: str
+    :param gcp_conn_id: The connection ID to use when fetching connection info.
+    :type gcp_conn_id: str
+    :param delegate_to: The account to impersonate, if any.
+        For this to work, the service account making the request must have
+        domain-wide delegation enabled.
+    :type delegate_to: str
+    """
+
+    _conn = None  # type: Optional[Any]
+
+    def __init__(
+        self,
+        api_version: str = "v2beta",
+        gcp_conn_id: str = "google_cloud_default",
+        delegate_to: Optional[str] = None
+    ) -> None:
+        super().__init__(gcp_conn_id, delegate_to)
+        self.api_version = api_version
+
+    def get_conn(self):
+        """
+        Retrieves the connection to Cloud Life Sciences.
+
+        :return: Google Cloud Life Sciences service object.
+        """
+        if not self._conn:
+            http_authorized = self._authorize()
+            self._conn = build("lifesciences", self.api_version,
+                               http=http_authorized, cache_discovery=False)
+        return self._conn
+
+    @GoogleBaseHook.fallback_to_default_project_id
+    def run_pipeline(self, body: Dict, location: str, project_id: str):
+        """
+        Runs a Life Sciences pipeline.
+
+        :param body: The request body.
+        :type body: dict
+        :param location: The location of the project. For example: "us-east1".
+        :type location: str
+        :param project_id: Optional, Google Cloud Project ID where the pipeline runs.
+            If set to None or missing, the default project_id from the GCP connection is used.
+        :type project_id: str
+        :return: Dict

Review comment:
       ```suggestion
               :return: dict
       ```
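
A minimal usage sketch to make the hook's keyword-argument requirement concrete (the pipeline body and region below are illustrative values, not taken from the PR):

```python
from airflow.providers.google.cloud.hooks.life_sciences import LifeSciencesHook

hook = LifeSciencesHook(api_version="v2beta", gcp_conn_id="google_cloud_default")

# Minimal v2beta request body: one action that runs a command in a container.
pipeline_body = {
    "pipeline": {
        "actions": [
            {"imageUri": "google/cloud-sdk", "commands": ["echo", "hello"]},
        ],
        "resources": {"regions": ["us-central1"]},
    }
}

# project_id falls back to the GCP connection default via
# @GoogleBaseHook.fallback_to_default_project_id, so all arguments
# are passed by keyword, never positionally.
response = hook.run_pipeline(body=pipeline_body, location="us-central1")
```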




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org