Posted to commits@airflow.apache.org by GitBox <gi...@apache.org> on 2020/06/29 12:22:22 UTC

[GitHub] [airflow] davido912 opened a new issue #9564: Official Airflow Docker Image Problem

davido912 opened a new issue #9564:
URL: https://github.com/apache/airflow/issues/9564


   Hey hey,
   
   We are trying to switch to working with a dockerised version of Airflow (apache/airflow:1.10.10-python3.6) and to load our repository into it.
   However, compared to Airflow installed locally in a virtual environment, the dockerised version is extremely slow: operators stay queued for a long time, as if there were only one worker available to take the jobs. Pipelines also fail to run to completion because the scheduler crashes - I still couldn't figure out why.
   The scheduler emits the following:
   Process QueuedLocalWorker-5:
   Process QueuedLocalWorker-7:
   Process QueuedLocalWorker-9:
   Process QueuedLocalWorker-16:
   Process QueuedLocalWorker-12:
   Process QueuedLocalWorker-17:
   Process QueuedLocalWorker-10:
   Process QueuedLocalWorker-6:
   Traceback (most recent call last):
   Process QueuedLocalWorker-14:
   ...
   ...
   ...
   SystemExit: 0
   
   and eventually:
   BrokenPipeError: [Errno 32] Broken pipe.
   
   Has anyone experienced this, or does anyone know what is going on?
   Very much appreciate any feedback.
   Thanks!!


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [airflow] potiuk commented on issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
potiuk commented on issue #9564:
URL: https://github.com/apache/airflow/issues/9564#issuecomment-651110995


   Did you try to compare the configuration you had in your local virtualenv with the one in the Docker image? The configuration that comes with the Docker image is the default one, and you likely modified it manually in your local virtualenv. If you want to use your own configuration you need to mount it into the container at `/opt/airflow/airflow.cfg`. If you do not, a default configuration will be generated automatically the first time you run Airflow, and it will look like the one pasted at the end of this comment (I just generated it now):
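   
   For illustration, a minimal sketch of such a mount with docker-compose (the service name and host path are placeholders, not taken from this thread):
   ```
   # Hypothetical service definition: an edited airflow.cfg on the host is
   # mounted over the path where the image otherwise generates the default one.
   services:
     airflow-scheduler:
       image: apache/airflow:1.10.10-python3.6
       volumes:
         # the custom config replaces the auto-generated default inside the container
         - ./airflow.cfg:/opt/airflow/airflow.cfg:ro
   ```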
   
   You need to compare it to your local configuration, see what differences you have, and also check whether you have any environment variables that can influence the configuration (those following the AIRFLOW__* pattern).
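   
   As an illustration of that pattern (the values are placeholders, not from this thread), any variable named AIRFLOW__<SECTION>__<KEY> overrides the corresponding airflow.cfg entry:
   ```
   # Hypothetical overrides, e.g. in a docker-compose "environment" section:
   services:
     airflow-scheduler:
       environment:
         # overrides [core] executor
         AIRFLOW__CORE__EXECUTOR: LocalExecutor
         # overrides [core] sql_alchemy_conn
         AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://user:pass@postgres/airflow
   ```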
   
   Also, I am closing this ticket, because this kind of question/troubleshooting does not belong in GitHub issues. If you want to follow up and ask for troubleshooting help, head to Slack, where you will find the #prod-docker-image channel. See the "Ask a question" section at https://airflow.apache.org/community/.
   
   ```
   [core]
   # The folder where your airflow pipelines live, most likely a
   # subfolder in a code repository. This path must be absolute.
   dags_folder = /opt/airflow/dags
   
   # The folder where airflow should store its log files
   # This path must be absolute
   base_log_folder = /opt/airflow/logs
   
   # Airflow can store logs remotely in AWS S3, Google Cloud Storage or Elastic Search.
   # Set this to True if you want to enable remote logging.
   remote_logging = False
   
   # Users must supply an Airflow connection id that provides access to the storage
   # location.
   remote_log_conn_id =
   remote_base_log_folder =
   encrypt_s3_logs = False
   
   # Logging level
   logging_level = INFO
   
   # Logging level for Flask-appbuilder UI
   fab_logging_level = WARN
   
   # Logging class
   # Specify the class that will specify the logging configuration
   # This class has to be on the python classpath
   # Example: logging_config_class = my.path.default_local_settings.LOGGING_CONFIG
   logging_config_class =
   
   # Flag to enable/disable Colored logs in Console
   # Colour the logs when the controlling terminal is a TTY.
   colored_console_log = True
   
   # Log format for when Colored logs is enabled
   colored_log_format = [%%(blue)s%%(asctime)s%%(reset)s] {%%(blue)s%%(filename)s:%%(reset)s%%(lineno)d} %%(log_color)s%%(levelname)s%%(reset)s - %%(log_color)s%%(message)s%%(reset)s
   colored_formatter_class = airflow.utils.log.colored_log.CustomTTYColoredFormatter
   
   # Format of Log line
   log_format = [%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
   simple_log_format = %%(asctime)s %%(levelname)s - %%(message)s
   
   # Log filename format
   log_filename_template = {{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log
   log_processor_filename_template = {{ filename }}.log
   dag_processor_manager_log_location = /opt/airflow/logs/dag_processor_manager/dag_processor_manager.log
   
   # Name of handler to read task instance logs.
   # Default to use task handler.
   task_log_reader = task
   
   # Hostname by providing a path to a callable, which will resolve the hostname.
   # The format is "package:function".
   #
   # For example, default value "socket:getfqdn" means that result from getfqdn() of "socket"
   # package will be used as hostname.
   #
   # No argument should be required in the function specified.
   # If using IP address as hostname is preferred, use value ``airflow.utils.net:get_host_ip_address``
   hostname_callable = socket:getfqdn
   
   # Default timezone in case supplied date times are naive
   # can be utc (default), system, or any IANA timezone string (e.g. Europe/Amsterdam)
   default_timezone = utc
   
   # The executor class that airflow should use. Choices include
   # SequentialExecutor, LocalExecutor, CeleryExecutor, DaskExecutor, KubernetesExecutor
   executor = SequentialExecutor
   
   # The SqlAlchemy connection string to the metadata database.
   # SqlAlchemy supports many different database engine, more information
   # their website
   sql_alchemy_conn = sqlite:////opt/airflow/airflow.db
   
   # The encoding for the databases
   sql_engine_encoding = utf-8
   
   # If SqlAlchemy should pool database connections.
   sql_alchemy_pool_enabled = True
   
   # The SqlAlchemy pool size is the maximum number of database connections
   # in the pool. 0 indicates no limit.
   sql_alchemy_pool_size = 5
   
   # The maximum overflow size of the pool.
   # When the number of checked-out connections reaches the size set in pool_size,
   # additional connections will be returned up to this limit.
   # When those additional connections are returned to the pool, they are disconnected and discarded.
   # It follows then that the total number of simultaneous connections the pool will allow
   # is pool_size + max_overflow,
   # and the total number of "sleeping" connections the pool will allow is pool_size.
   # max_overflow can be set to -1 to indicate no overflow limit;
   # no limit will be placed on the total number of concurrent connections. Defaults to 10.
   sql_alchemy_max_overflow = 10
   
   # The SqlAlchemy pool recycle is the number of seconds a connection
   # can be idle in the pool before it is invalidated. This config does
   # not apply to sqlite. If the number of DB connections is ever exceeded,
   # a lower config value will allow the system to recover faster.
   sql_alchemy_pool_recycle = 1800
   
   # Check connection at the start of each connection pool checkout.
   # Typically, this is a simple statement like "SELECT 1".
   # More information here:
   # https://docs.sqlalchemy.org/en/13/core/pooling.html#disconnect-handling-pessimistic
   sql_alchemy_pool_pre_ping = True
   
   # The schema to use for the metadata database.
   # SqlAlchemy supports databases with the concept of multiple schemas.
   sql_alchemy_schema =
   
   # The amount of parallelism as a setting to the executor. This defines
   # the max number of task instances that should run simultaneously
   # on this airflow installation
   parallelism = 32
   
   # The number of task instances allowed to run concurrently by the scheduler
   dag_concurrency = 16
   
   # Are DAGs paused by default at creation
   dags_are_paused_at_creation = True
   
   # The maximum number of active DAG runs per DAG
   max_active_runs_per_dag = 16
   
   # Whether to load the DAG examples that ship with Airflow. It's good to
   # get started, but you probably want to set this to False in a production
   # environment
   load_examples = True
   
   # Whether to load the default connections that ship with Airflow. It's good to
   # get started, but you probably want to set this to False in a production
   # environment
   load_default_connections = True
   
   # Where your Airflow plugins are stored
   plugins_folder = /opt/airflow/plugins
   
   # Secret key to save connection passwords in the db
   fernet_key = 6G06q06WAFi8XhYgssRzNOu_DxJELWN7QCXiypduu_Y=
   
   # Whether to disable pickling dags
   donot_pickle = False
   
   # How long before timing out a python file import
   dagbag_import_timeout = 30
   
   # How long before timing out a DagFileProcessor, which processes a dag file
   dag_file_processor_timeout = 50
   
   # The class to use for running task instances in a subprocess
   task_runner = StandardTaskRunner
   
   # If set, tasks without a ``run_as_user`` argument will be run with this user
   # Can be used to de-elevate a sudo user running Airflow when executing tasks
   default_impersonation =
   
   # What security module to use (for example kerberos)
   security =
   
   # If set to False enables some unsecure features like Charts and Ad Hoc Queries.
   # In 2.0 will default to True.
   secure_mode = False
   
   # Turn unit test mode on (overwrites many configuration options with test
   # values at runtime)
   unit_test_mode = False
   
   # Whether to enable pickling for xcom (note that this is insecure and allows for
   # RCE exploits). This will be deprecated in Airflow 2.0 (be forced to False).
   enable_xcom_pickling = True
   
   # When a task is killed forcefully, this is the amount of time in seconds that
   # it has to cleanup after it is sent a SIGTERM, before it is SIGKILLED
   killed_task_cleanup_time = 60
   
   # Whether to override params with dag_run.conf. If you pass some key-value pairs
   # through ``airflow dags backfill -c`` or
   # ``airflow dags trigger -c``, the key-value pairs will override the existing ones in params.
   dag_run_conf_overrides_params = False
   
   # Worker initialisation check to validate Metadata Database connection
   worker_precheck = False
   
   # When discovering DAGs, ignore any files that don't contain the strings ``DAG`` and ``airflow``.
   dag_discovery_safe_mode = True
   
   # The number of retries each task is going to have by default. Can be overridden at dag or task level.
   default_task_retries = 0
   
   # Whether to serialise DAGs and persist them in DB.
   # If set to True, Webserver reads from DB instead of parsing DAG files
   # More details: https://airflow.apache.org/docs/stable/dag-serialization.html
   store_serialized_dags = False
   
   # Updating serialized DAG can not be faster than a minimum interval to reduce database write rate.
   min_serialized_dag_update_interval = 30
   
   # Whether to persist DAG files code in DB.
   # If set to True, Webserver reads file contents from DB instead of
   # trying to access files in a DAG folder. Defaults to same as the
   # ``store_serialized_dags`` setting.
   store_dag_code = %(store_serialized_dags)s
   
   # Maximum number of Rendered Task Instance Fields (Template Fields) per task to store
   # in the Database.
   # When Dag Serialization is enabled (``store_serialized_dags=True``), all the template_fields
   # for each of Task Instance are stored in the Database.
   # Keeping this number small may cause an error when you try to view ``Rendered`` tab in
   # TaskInstance view for older tasks.
   max_num_rendered_ti_fields_per_task = 30
   
   # On each dagrun check against defined SLAs
   check_slas = True
   
   [secrets]
   # Full class name of secrets backend to enable (will precede env vars and metastore in search path)
   # Example: backend = airflow.contrib.secrets.aws_systems_manager.SystemsManagerParameterStoreBackend
   backend =
   
   # The backend_kwargs param is loaded into a dictionary and passed to __init__ of secrets backend class.
   # See documentation for the secrets backend you are using. JSON is expected.
   # Example for AWS Systems Manager ParameterStore:
   # ``{"connections_prefix": "/airflow/connections", "profile_name": "default"}``
   backend_kwargs =
   
   [cli]
   # In what way should the cli access the API. The LocalClient will use the
   # database directly, while the json_client will use the api running on the
   # webserver
   api_client = airflow.api.client.local_client
   
   # If you set web_server_url_prefix, do NOT forget to append it here, ex:
   # ``endpoint_url = http://localhost:8080/myroot``
   # So api will look like: ``http://localhost:8080/myroot/api/experimental/...``
   endpoint_url = http://localhost:8080
   
   [debug]
   # Used only with DebugExecutor. If set to True DAG will fail with first
   # failed task. Helpful for debugging purposes.
   fail_fast = False
   
   [api]
   # How to authenticate users of the API
   auth_backend = airflow.api.auth.backend.default
   
   [lineage]
   # what lineage backend to use
   backend =
   
   [atlas]
   sasl_enabled = False
   host =
   port = 21000
   username =
   password =
   
   [operators]
   # The default owner assigned to each new operator, unless
   # provided explicitly or passed via ``default_args``
   default_owner = airflow
   default_cpus = 1
   default_ram = 512
   default_disk = 512
   default_gpus = 0
   
   [hive]
   # Default mapreduce queue for HiveOperator tasks
   default_hive_mapred_queue =
   
   [webserver]
   # The base url of your website as airflow cannot guess what domain or
   # cname you are using. This is used in automated emails that
   # airflow sends to point links to the right web server
   base_url = http://localhost:8080
   
   # Default timezone to display all dates in the RBAC UI, can be UTC, system, or
   # any IANA timezone string (e.g. Europe/Amsterdam). If left empty the
   # default value of core/default_timezone will be used
   # Example: default_ui_timezone = America/New_York
   default_ui_timezone = UTC
   
   # The ip specified when starting the web server
   web_server_host = 0.0.0.0
   
   # The port on which to run the web server
   web_server_port = 8080
   
   # Paths to the SSL certificate and key for the web server. When both are
   # provided SSL will be enabled. This does not change the web server port.
   web_server_ssl_cert =
   
   # Paths to the SSL certificate and key for the web server. When both are
   # provided SSL will be enabled. This does not change the web server port.
   web_server_ssl_key =
   
   # Number of seconds the webserver waits before killing gunicorn master that doesn't respond
   web_server_master_timeout = 120
   
   # Number of seconds the gunicorn webserver waits before timing out on a worker
   web_server_worker_timeout = 120
   
   # Number of workers to refresh at a time. When set to 0, worker refresh is
   # disabled. When nonzero, airflow periodically refreshes webserver workers by
   # bringing up new ones and killing old ones.
   worker_refresh_batch_size = 1
   
   # Number of seconds to wait before refreshing a batch of workers.
   worker_refresh_interval = 30
   
   # Secret key used to run your flask app
   # It should be as random as possible
   secret_key = temporary_key
   
   # Number of workers to run the Gunicorn web server
   workers = 4
   
   # The worker class gunicorn should use. Choices include
   # sync (default), eventlet, gevent
   worker_class = sync
   
   # Log files for the gunicorn webserver. '-' means log to stderr.
   access_logfile = -
   
   # Log files for the gunicorn webserver. '-' means log to stderr.
   error_logfile = -
   
   # Expose the configuration file in the web server
   expose_config = False
   
   # Expose hostname in the web server
   expose_hostname = True
   
   # Expose stacktrace in the web server
   expose_stacktrace = True
   
   # Set to true to turn on authentication:
   # https://airflow.apache.org/security.html#web-authentication
   authenticate = False
   
   # Filter the list of dags by owner name (requires authentication to be enabled)
   filter_by_owner = False
   
   # Filtering mode. Choices include user (default) and ldapgroup.
   # Ldap group filtering requires using the ldap backend
   #
   # Note that the ldap server needs the "memberOf" overlay to be set up
   # in order to user the ldapgroup mode.
   owner_mode = user
   
   # Default DAG view. Valid values are:
   # tree, graph, duration, gantt, landing_times
   dag_default_view = tree
   
   # "Default DAG orientation. Valid values are:"
   # LR (Left->Right), TB (Top->Bottom), RL (Right->Left), BT (Bottom->Top)
   dag_orientation = LR
   
   # Puts the webserver in demonstration mode; blurs the names of Operators for
   # privacy.
   demo_mode = False
   
   # The amount of time (in secs) webserver will wait for initial handshake
   # while fetching logs from other worker machine
   log_fetch_timeout_sec = 5
   
   # Time interval (in secs) to wait before next log fetching.
   log_fetch_delay_sec = 2
   
   # Distance away from page bottom to enable auto tailing.
   log_auto_tailing_offset = 30
   
   # Animation speed for auto tailing log display.
   log_animation_speed = 1000
   
   # By default, the webserver shows paused DAGs. Flip this to hide paused
   # DAGs by default
   hide_paused_dags_by_default = False
   
   # Consistent page size across all listing views in the UI
   page_size = 100
   
   # Use FAB-based webserver with RBAC feature
   rbac = False
   
   # Define the color of navigation bar
   navbar_color = #007A87
   
   # Default dagrun to show in UI
   default_dag_run_display_number = 25
   
   # Enable werkzeug ``ProxyFix`` middleware for reverse proxy
   enable_proxy_fix = False
   
   # Number of values to trust for ``X-Forwarded-For``.
   # More info: https://werkzeug.palletsprojects.com/en/0.16.x/middleware/proxy_fix/
   proxy_fix_x_for = 1
   
   # Number of values to trust for ``X-Forwarded-Proto``
   proxy_fix_x_proto = 1
   
   # Number of values to trust for ``X-Forwarded-Host``
   proxy_fix_x_host = 1
   
   # Number of values to trust for ``X-Forwarded-Port``
   proxy_fix_x_port = 1
   
   # Number of values to trust for ``X-Forwarded-Prefix``
   proxy_fix_x_prefix = 1
   
   # Set secure flag on session cookie
   cookie_secure = False
   
   # Set samesite policy on session cookie
   cookie_samesite =
   
   # Default setting for wrap toggle on DAG code and TI log views.
   default_wrap = False
   
   # Allow the UI to be rendered in a frame
   x_frame_enabled = True
   
   # Send anonymous user activity to your analytics tool
   # choose from google_analytics, segment, or metarouter
   # analytics_tool =
   
   # Unique ID of your account in the analytics tool
   # analytics_id =
   
   # Update FAB permissions and sync security manager roles
   # on webserver startup
   update_fab_perms = True
   
   # Minutes of non-activity before logged out from UI
   # 0 means never get forcibly logged out
   force_log_out_after = 0
   
   # The UI cookie lifetime in days
   session_lifetime_days = 30
   
   [email]
   email_backend = airflow.utils.email.send_email_smtp
   
   [smtp]
   
   # If you want airflow to send emails on retries, failure, and you want to use
   # the airflow.utils.email.send_email_smtp function, you have to configure an
   # smtp server here
   smtp_host = localhost
   smtp_starttls = True
   smtp_ssl = False
   # Example: smtp_user = airflow
   # smtp_user =
   # Example: smtp_password = airflow
   # smtp_password =
   smtp_port = 25
   smtp_mail_from = airflow@example.com
   
   [sentry]
   
   # Sentry (https://docs.sentry.io) integration
   sentry_dsn =
   
   [celery]
   
   # This section only applies if you are using the CeleryExecutor in
   # ``[core]`` section above
   # The app name that will be used by celery
   celery_app_name = airflow.executors.celery_executor
   
   # The concurrency that will be used when starting workers with the
   # ``airflow celery worker`` command. This defines the number of task instances that
   # a worker will take, so size up your workers based on the resources on
   # your worker box and the nature of your tasks
   worker_concurrency = 16
   
   # The maximum and minimum concurrency that will be used when starting workers with the
   # ``airflow celery worker`` command (always keep minimum processes, but grow
   # to maximum if necessary). Note the value should be max_concurrency,min_concurrency
   # Pick these numbers based on resources on worker box and the nature of the task.
   # If autoscale option is available, worker_concurrency will be ignored.
   # http://docs.celeryproject.org/en/latest/reference/celery.bin.worker.html#cmdoption-celery-worker-autoscale
   # Example: worker_autoscale = 16,12
   # worker_autoscale =
   
   # When you start an airflow worker, airflow starts a tiny web server
   # subprocess to serve the workers local log files to the airflow main
   # web server, who then builds pages and sends them to users. This defines
   # the port on which the logs are served. It needs to be unused, and open
   # visible from the main web server to connect into the workers.
   worker_log_server_port = 8793
   
   # The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
   # a sqlalchemy database. Refer to the Celery documentation for more
   # information.
   # http://docs.celeryproject.org/en/latest/userguide/configuration.html#broker-settings
   broker_url = sqla+mysql://airflow:airflow@localhost:3306/airflow
   
   # The Celery result_backend. When a job finishes, it needs to update the
   # metadata of the job. Therefore it will post a message on a message bus,
   # or insert it into a database (depending of the backend)
   # This status is used by the scheduler to update the state of the task
   # The use of a database is highly recommended
   # http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-result-backend-settings
   result_backend = db+mysql://airflow:airflow@localhost:3306/airflow
   
   # Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
   # it ``airflow flower``. This defines the IP that Celery Flower runs on
   flower_host = 0.0.0.0
   
   # The root URL for Flower
   # Example: flower_url_prefix = /flower
   flower_url_prefix =
   
   # This defines the port that Celery Flower runs on
   flower_port = 5555
   
   # Securing Flower with Basic Authentication
   # Accepts user:password pairs separated by a comma
   # Example: flower_basic_auth = user1:password1,user2:password2
   flower_basic_auth =
   
   # Default queue that tasks get assigned to and that worker listen on.
   default_queue = default
   
   # How many processes CeleryExecutor uses to sync task state.
   # 0 means to use max(1, number of cores - 1) processes.
   sync_parallelism = 0
   
   # Import path for celery configuration options
   celery_config_options = airflow.config_templates.default_celery.DEFAULT_CELERY_CONFIG
   
   # In case of using SSL
   ssl_active = False
   ssl_key =
   ssl_cert =
   ssl_cacert =
   
   # Celery Pool implementation.
   # Choices include: prefork (default), eventlet, gevent or solo.
   # See:
   # https://docs.celeryproject.org/en/latest/userguide/workers.html#concurrency
   # https://docs.celeryproject.org/en/latest/userguide/concurrency/eventlet.html
   pool = prefork
   
   # The number of seconds to wait before timing out ``send_task_to_executor`` or
   # ``fetch_celery_task_state`` operations.
   operation_timeout = 2
   
   [celery_broker_transport_options]
   
   # This section is for specifying options which can be passed to the
   # underlying celery broker transport. See:
   # http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-broker_transport_options
   # The visibility timeout defines the number of seconds to wait for the worker
   # to acknowledge the task before the message is redelivered to another worker.
   # Make sure to increase the visibility timeout to match the time of the longest
   # ETA you're planning to use.
   # visibility_timeout is only supported for Redis and SQS celery brokers.
   # See:
   # http://docs.celeryproject.org/en/master/userguide/configuration.html#std:setting-broker_transport_options
   # Example: visibility_timeout = 21600
   # visibility_timeout =
   
   [dask]
   
   # This section only applies if you are using the DaskExecutor in
   # [core] section above
   # The IP address and port of the Dask cluster's scheduler.
   cluster_address = 127.0.0.1:8786
   
   # TLS/ SSL settings to access a secured Dask scheduler.
   tls_ca =
   tls_cert =
   tls_key =
   
   [scheduler]
   # Task instances listen for external kill signal (when you clear tasks
   # from the CLI or the UI), this defines the frequency at which they should
   # listen (in seconds).
   job_heartbeat_sec = 5
   
   # The scheduler constantly tries to trigger new tasks (look at the
   # scheduler section in the docs for more information). This defines
   # how often the scheduler should run (in seconds).
   scheduler_heartbeat_sec = 5
   
   # After how much time should the scheduler terminate in seconds
   # -1 indicates to run continuously (see also num_runs)
   run_duration = -1
   
   # The number of times to try to schedule each DAG file
   # -1 indicates unlimited number
   num_runs = -1
   
   # The number of seconds to wait between consecutive DAG file processing
   processor_poll_interval = 1
   
   # after how much time (seconds) a new DAGs should be picked up from the filesystem
   min_file_process_interval = 0
   
   # How often (in seconds) to scan the DAGs directory for new files. Default to 5 minutes.
   dag_dir_list_interval = 300
   
   # How often should stats be printed to the logs. Setting to 0 will disable printing stats
   print_stats_interval = 30
   
   # If the last scheduler heartbeat happened more than scheduler_health_check_threshold
   # ago (in seconds), scheduler is considered unhealthy.
   # This is used by the health check in the "/health" endpoint
   scheduler_health_check_threshold = 30
   child_process_log_directory = /opt/airflow/logs/scheduler
   
   # Local task jobs periodically heartbeat to the DB. If the job has
   # not heartbeat in this many seconds, the scheduler will mark the
   # associated task instance as failed and will re-schedule the task.
   scheduler_zombie_task_threshold = 300
   
   # Turn off scheduler catchup by setting this to False.
   # Default behavior is unchanged and
   # Command Line Backfills still work, but the scheduler
   # will not do scheduler catchup if this is False,
   # however it can be set on a per DAG basis in the
   # DAG definition (catchup)
   catchup_by_default = True
   
   # This changes the batch size of queries in the scheduling main loop.
   # If this is too high, SQL query performance may be impacted by one
   # or more of the following:
   # - reversion to full table scan
   # - complexity of query predicate
   # - excessive locking
   # Additionally, you may hit the maximum allowable query length for your db.
   # Set this to 0 for no limit (not advised)
   max_tis_per_query = 512
   
   # Statsd (https://github.com/etsy/statsd) integration settings
   statsd_on = False
   statsd_host = localhost
   statsd_port = 8125
   statsd_prefix = airflow
   
   # If you want to avoid send all the available metrics to StatsD,
   # you can configure an allow list of prefixes to send only the metrics that
   # start with the elements of the list (e.g: scheduler,executor,dagrun)
   statsd_allow_list =
   
   # The scheduler can run multiple threads in parallel to schedule dags.
   # This defines how many threads will run.
   max_threads = 2
   authenticate = False
   
   # Turn off scheduler use of cron intervals by setting this to False.
   # DAGs submitted manually in the web UI or with trigger_dag will still run.
   use_job_schedule = True
   
   # Allow externally triggered DagRuns for Execution Dates in the future
   # Only has effect if schedule_interval is set to None in DAG
   allow_trigger_in_future = False
   
   [ldap]
   # set this to ldaps://<your.ldap.server>:<port>
   uri =
   user_filter = objectClass=*
   user_name_attr = uid
   group_member_attr = memberOf
   superuser_filter =
   data_profiler_filter =
   bind_user = cn=Manager,dc=example,dc=com
   bind_password = insecure
   basedn = dc=example,dc=com
   cacert = /etc/ca/ldap_ca.crt
   search_scope = LEVEL
   
   # This setting allows the use of LDAP servers that either return a
   # broken schema, or do not return a schema.
   ignore_malformed_schema = False
   
   [mesos]
   # Mesos master address which MesosExecutor will connect to.
   master = localhost:5050
   
   # The framework name which Airflow scheduler will register itself as on mesos
   framework_name = Airflow
   
   # Number of cpu cores required for running one task instance using
   # 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
   # command on a mesos slave
   task_cpu = 1
   
   # Memory in MB required for running one task instance using
   # 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
   # command on a mesos slave
   task_memory = 256
   
   # Enable framework checkpointing for mesos
   # See http://mesos.apache.org/documentation/latest/slave-recovery/
   checkpoint = False
   
   # Failover timeout in milliseconds.
   # When checkpointing is enabled and this option is set, Mesos waits
   # until the configured timeout for
   # the MesosExecutor framework to re-register after a failover. Mesos
   # shuts down running tasks if the
   # MesosExecutor framework fails to re-register within this timeframe.
   # Example: failover_timeout = 604800
   # failover_timeout =
   
   # Enable framework authentication for mesos
   # See http://mesos.apache.org/documentation/latest/configuration/
   authenticate = False
   
   # Mesos credentials, if authentication is enabled
   # Example: default_principal = admin
   # default_principal =
   # Example: default_secret = admin
   # default_secret =
   
   # Optional Docker Image to run on slave before running the command
   # This image should be accessible from mesos slave i.e mesos slave
   # should be able to pull this docker image before executing the command.
   # Example: docker_image_slave = puckel/docker-airflow
   # docker_image_slave =
   
   [kerberos]
   ccache = /tmp/airflow_krb5_ccache
   
   # gets augmented with fqdn
   principal = airflow
   reinit_frequency = 3600
   kinit_path = kinit
   keytab = airflow.keytab
   
   [github_enterprise]
   api_rev = v3
   
   [admin]
   # UI to hide sensitive variable fields when set to True
   hide_sensitive_variable_fields = True
   
   [elasticsearch]
   # Elasticsearch host
   host =
   
   # Format of the log_id, which is used to query for a given tasks logs
   log_id_template = {dag_id}-{task_id}-{execution_date}-{try_number}
   
   # Used to mark the end of a log stream for a task
   end_of_log_mark = end_of_log
   
   # Qualified URL for an elasticsearch frontend (like Kibana) with a template argument for log_id
   # Code will construct log_id using the log_id template from the argument above.
   # NOTE: The code will prefix the https:// automatically, don't include that here.
   frontend =
   
   # Write the task logs to the stdout of the worker, rather than the default files
   write_stdout = False
   
   # Instead of the default log formatter, write the log lines as JSON
   json_format = False
   
   # Log fields to also attach to the json output, if enabled
   json_fields = asctime, filename, lineno, levelname, message
   
   [elasticsearch_configs]
   use_ssl = False
   verify_certs = True
   
   [kubernetes]
   # The repository, tag and imagePullPolicy of the Kubernetes Image for the Worker to Run
   worker_container_repository =
   worker_container_tag =
   worker_container_image_pull_policy = IfNotPresent
   
   # If True, all worker pods will be deleted upon termination
   delete_worker_pods = True
   
   # If False (and delete_worker_pods is True),
   # failed worker pods will not be deleted so users can investigate them.
   delete_worker_pods_on_failure = False
   
   # Number of Kubernetes Worker Pod creation calls per scheduler loop
   worker_pods_creation_batch_size = 1
   
   # The Kubernetes namespace where airflow workers should be created. Defaults to ``default``
   namespace = default
   
   # The name of the Kubernetes ConfigMap containing the Airflow Configuration (this file)
   # Example: airflow_configmap = airflow-configmap
   airflow_configmap =
   
   # The name of the Kubernetes ConfigMap containing ``airflow_local_settings.py`` file.
   #
   # For example:
   #
   # ``airflow_local_settings_configmap = "airflow-configmap"`` if you have the following ConfigMap.
   #
   # ``airflow-configmap.yaml``:
   #
   # .. code-block:: yaml
   #
   #   ---
   #   apiVersion: v1
   #   kind: ConfigMap
   #   metadata:
   #     name: airflow-configmap
   #   data:
   #     airflow_local_settings.py: |
   #         def pod_mutation_hook(pod):
   #             ...
   #     airflow.cfg: |
   #         ...
   # Example: airflow_local_settings_configmap = airflow-configmap
   airflow_local_settings_configmap =
   
   # For docker image already contains DAGs, this is set to ``True``, and the worker will
   # search for dags in dags_folder,
   # otherwise use git sync or dags volume claim to mount DAGs
   dags_in_image = False
   
   # For either git sync or volume mounted DAGs, the worker will look in this subpath for DAGs
   dags_volume_subpath =
   
   # For DAGs mounted via a volume claim (mutually exclusive with git-sync and host path)
   dags_volume_claim =
   
   # For volume mounted logs, the worker will look in this subpath for logs
   logs_volume_subpath =
   
   # A shared volume claim for the logs
   logs_volume_claim =
   
   # For DAGs mounted via a hostPath volume (mutually exclusive with volume claim and git-sync)
   # Useful in local environment, discouraged in production
   dags_volume_host =
   
   # A hostPath volume for the logs
   # Useful in local environment, discouraged in production
   logs_volume_host =
   
   # A list of configMapsRefs to envFrom. If more than one configMap is
   # specified, provide a comma separated list: configmap_a,configmap_b
   env_from_configmap_ref =
   
   # A list of secretRefs to envFrom. If more than one secret is
   # specified, provide a comma separated list: secret_a,secret_b
   env_from_secret_ref =
   
   # Git credentials and repository for DAGs mounted via Git (mutually exclusive with volume claim)
   git_repo =
   git_branch =
   
   # Use a shallow clone with a history truncated to the specified number of commits.
   # 0 - do not use shallow clone.
   git_sync_depth = 1
   git_subpath =
   
   # The specific rev or hash the git_sync init container will checkout
   # This becomes GIT_SYNC_REV environment variable in the git_sync init container for worker pods
   git_sync_rev =
   
   # Use git_user and git_password for user authentication or git_ssh_key_secret_name
   # and git_ssh_key_secret_key for SSH authentication
   git_user =
   git_password =
   git_sync_root = /git
   git_sync_dest = repo
   
   # Mount point of the volume if git-sync is being used.
   # i.e. /opt/airflow/dags
   git_dags_folder_mount_point =
   
   # To get Git-sync SSH authentication set up follow this format
   #
   # ``airflow-secrets.yaml``:
   #
   # .. code-block:: yaml
   #
   #   ---
   #   apiVersion: v1
   #   kind: Secret
   #   metadata:
   #     name: airflow-secrets
   #   data:
   #     # key needs to be gitSshKey
   #     gitSshKey: <base64_encoded_data>
   # Example: git_ssh_key_secret_name = airflow-secrets
   git_ssh_key_secret_name =
   
   # To get Git-sync SSH authentication set up follow this format
   #
   # ``airflow-configmap.yaml``:
   #
   # .. code-block:: yaml
   #
   #   ---
   #   apiVersion: v1
   #   kind: ConfigMap
   #   metadata:
   #     name: airflow-configmap
   #   data:
   #     known_hosts: |
   #         github.com ssh-rsa <...>
   #     airflow.cfg: |
   #         ...
   # Example: git_ssh_known_hosts_configmap_name = airflow-configmap
   git_ssh_known_hosts_configmap_name =
   
   # To give the git_sync init container credentials via a secret, create a secret
   # with two fields: GIT_SYNC_USERNAME and GIT_SYNC_PASSWORD (example below) and
   # add ``git_sync_credentials_secret = <secret_name>`` to your airflow config under the
   # ``kubernetes`` section
   #
   # Secret Example:
   #
   # .. code-block:: yaml
   #
   #   ---
   #   apiVersion: v1
   #   kind: Secret
   #   metadata:
   #     name: git-credentials
   #   data:
   #     GIT_SYNC_USERNAME: <base64_encoded_git_username>
   #     GIT_SYNC_PASSWORD: <base64_encoded_git_password>
   git_sync_credentials_secret =
   
   # For cloning DAGs from git repositories into volumes: https://github.com/kubernetes/git-sync
   git_sync_container_repository = k8s.gcr.io/git-sync
   git_sync_container_tag = v3.1.1
   git_sync_init_container_name = git-sync-clone
   git_sync_run_as_user = 65533
   
   # The name of the Kubernetes service account to be associated with airflow workers, if any.
   # Service accounts are required for workers that require access to secrets or cluster resources.
   # See the Kubernetes RBAC documentation for more:
   # https://kubernetes.io/docs/admin/authorization/rbac/
   worker_service_account_name =
   
   # Any image pull secrets to be given to worker pods, If more than one secret is
   # required, provide a comma separated list: secret_a,secret_b
   image_pull_secrets =
   
   # GCP Service Account Keys to be provided to tasks run on Kubernetes Executors
   # Should be supplied in the format: key-name-1:key-path-1,key-name-2:key-path-2
   gcp_service_account_keys =
   
   # Use the service account kubernetes gives to pods to connect to kubernetes cluster.
   # It's intended for clients that expect to be running inside a pod running on kubernetes.
   # It will raise an exception if called from a process not running in a kubernetes environment.
   in_cluster = True
   
   # When running with in_cluster=False change the default cluster_context or config_file
   # options to Kubernetes client. Leave blank these to use default behaviour like ``kubectl`` has.
   # cluster_context =
   # config_file =
   
   # Affinity configuration as a single line formatted JSON object.
   # See the affinity model for top-level key names (e.g. ``nodeAffinity``, etc.):
   # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#affinity-v1-core
   affinity =
   
   # A list of toleration objects as a single line formatted JSON array
   # See:
   # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#toleration-v1-core
   tolerations =
   
   # Keyword parameters to pass while calling a kubernetes client core_v1_api methods
   # from Kubernetes Executor provided as a single line formatted JSON dictionary string.
   # List of supported params are similar for all core_v1_apis, hence a single config
   # variable for all apis.
   # See:
   # https://raw.githubusercontent.com/kubernetes-client/python/master/kubernetes/client/apis/core_v1_api.py
   # Note that if no _request_timeout is specified, the kubernetes client will wait indefinitely
   # for kubernetes api responses, which will cause the scheduler to hang.
   # The timeout is specified as [connect timeout, read timeout]
   kube_client_request_args =
   
   # Specifies the uid to run the first process of the worker pods containers as
   run_as_user = 50000
   
   # Specifies a gid to associate with all containers in the worker pods
   # if using a git_ssh_key_secret_name use an fs_group
   # that allows for the key to be read, e.g. 65533
   fs_group =
   
   [kubernetes_node_selectors]
   
   # The Key-value pairs to be given to worker pods.
   # The worker pods will be scheduled to the nodes of the specified key-value pairs.
   # Should be supplied in the format: key = value
   
   [kubernetes_annotations]
   
   # The Key-value annotations pairs to be given to worker pods.
   # Should be supplied in the format: key = value
   
   [kubernetes_environment_variables]
   
   # The scheduler sets the following environment variables into your workers. You may define as
   # many environment variables as needed and the kubernetes launcher will set them in the launched workers.
   # Environment variables in this section are defined as follows
   # ``<environment_variable_key> = <environment_variable_value>``
   #
   # For example if you wanted to set an environment variable with value `prod` and key
   # ``ENVIRONMENT`` you would follow the following format:
   # ENVIRONMENT = prod
   #
   # Additionally you may override worker airflow settings with the ``AIRFLOW__<SECTION>__<KEY>``
   # formatting as supported by airflow normally.
   
   [kubernetes_secrets]
   
   # The scheduler mounts the following secrets into your workers as they are launched by the
   # scheduler. You may define as many secrets as needed and the kubernetes launcher will parse the
   # defined secrets and mount them as secret environment variables in the launched workers.
   # Secrets in this section are defined as follows
   # ``<environment_variable_mount> = <kubernetes_secret_object>=<kubernetes_secret_key>``
   #
   # For example if you wanted to mount a kubernetes secret key named ``postgres_password`` from the
   # kubernetes secret object ``airflow-secret`` as the environment variable ``POSTGRES_PASSWORD`` into
   # your workers you would follow the following format:
   # ``POSTGRES_PASSWORD = airflow-secret=postgres_credentials``
   #
   # Additionally you may override worker airflow settings with the ``AIRFLOW__<SECTION>__<KEY>``
   # formatting as supported by airflow normally.
   
   [kubernetes_labels]
   
   # The Key-value pairs to be given to worker pods.
   # The worker pods will be given these static labels, as well as some additional dynamic labels
   # to identify the task.
   # Should be supplied in the format: ``key = value``
   ```
   
   





[GitHub] [airflow] feluelle edited a comment on issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
feluelle edited a comment on issue #9564:
URL: https://github.com/apache/airflow/issues/9564#issuecomment-716352466


   ~Most of the time a single restart of the scheduler service is enough to get around this.~ But this happens quite often.





[GitHub] [airflow] feluelle commented on issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
feluelle commented on issue #9564:
URL: https://github.com/apache/airflow/issues/9564#issuecomment-717789142


   I am closing this as it seems to only be a problem if the services are not correctly set up (as in my case).





[GitHub] [airflow] feluelle edited a comment on issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
feluelle edited a comment on issue #9564:
URL: https://github.com/apache/airflow/issues/9564#issuecomment-716593574


   Sorry, I forgot to respond. The issue for me was that I am running docker-compose and running the `db upgrade` command directly after the db container was created. The init failed, and that's why this issue occurred for me.
   ```
     fdb-airflow-db:
       image: library/postgres:latest
       container_name: fdb_airflow_db
       env_file: postgres.airflow.env
       ports:
         - 55433:5432
       restart: always
     fdb-airflow-db-init:
       <<: *airflow-environment
       build: .
       container_name: fdb_airflow_db_init
       command: db upgrade
       depends_on:
         - fdb-airflow-db
   ```
   If I manually run `docker-compose start fdb-airflow-db-init`, the `db upgrade` completes successfully and the scheduler does not die.
   ```
     fdb-airflow-scheduler:
       <<: *airflow-environment
       container_name: fdb_airflow_scheduler
       command: scheduler
       <<: *airflow-volumes
       depends_on:
         - fdb-airflow-db-init
       restart: always
   ```
   (The scheduler `depends_on` the init container, but it still starts even if the init failed.)





[GitHub] [airflow] feluelle commented on issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
feluelle commented on issue #9564:
URL: https://github.com/apache/airflow/issues/9564#issuecomment-716351145


   Hey @potiuk, I am running the code from latest master and experiencing this issue too, with the following changes to the default config:
   ```
   AIRFLOW_HOME=/opt/airflow
   AIRFLOW__CORE__EXECUTOR=LocalExecutor
   AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://<sqlalchemy-connection-string>
   AIRFLOW__CORE__FERNET_KEY=<fernet-key>
   AIRFLOW__CORE__MAX_ACTIVE_RUNS_PER_DAG=100
   ```
   That's all. (It probably also occurs with the default config. It might have something to do with the LocalExecutor only?!)
   ```
   Process QueuedLocalWorker-4:
   Process QueuedLocalWorker-9:
   Traceback (most recent call last):
     File "/usr/local/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
       self.run()
     File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/local_executor.py", line 67, in run
       return super().run()
   Process QueuedLocalWorker-13:
     File "/usr/local/lib/python3.8/multiprocessing/process.py", line 108, in run
       self._target(*self._args, **self._kwargs)
     File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/local_executor.py", line 172, in do_work
       key, command = self.task_queue.get()
   Traceback (most recent call last):
     File "<string>", line 2, in get
     File "/usr/local/lib/python3.8/multiprocessing/managers.py", line 835, in _callmethod
       kind, result = conn.recv()
     File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 250, in recv
       buf = self._recv_bytes()
     File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
       buf = self._recv(4)
     File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 383, in _recv
       raise EOFError
   ```





[GitHub] [airflow] feluelle edited a comment on issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
feluelle edited a comment on issue #9564:
URL: https://github.com/apache/airflow/issues/9564#issuecomment-716351145


   Hey @potiuk, I am running the code from latest master and experiencing this issue too, with the following changes to the default config:
   ```
   AIRFLOW_HOME=/opt/airflow
   AIRFLOW__CORE__EXECUTOR=LocalExecutor
   AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://<sqlalchemy-connection-string>
   AIRFLOW__CORE__FERNET_KEY=<fernet-key>
   AIRFLOW__CORE__MAX_ACTIVE_RUNS_PER_DAG=100
   ```
   That's all. (It probably also occurs with the default config. It might have something to do with the LocalExecutor only?!)
   ```
   Process QueuedLocalWorker-4:
   Process QueuedLocalWorker-9:
   Traceback (most recent call last):
     File "/usr/local/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
       self.run()
     File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/local_executor.py", line 67, in run
       return super().run()
   Process QueuedLocalWorker-13:
     File "/usr/local/lib/python3.8/multiprocessing/process.py", line 108, in run
       self._target(*self._args, **self._kwargs)
     File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/local_executor.py", line 172, in do_work
       key, command = self.task_queue.get()
   Traceback (most recent call last):
     File "<string>", line 2, in get
     File "/usr/local/lib/python3.8/multiprocessing/managers.py", line 835, in _callmethod
       kind, result = conn.recv()
     File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 250, in recv
       buf = self._recv_bytes()
     File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
       buf = self._recv(4)
     File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 383, in _recv
       raise EOFError
   ```





[GitHub] [airflow] feluelle closed issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
feluelle closed issue #9564:
URL: https://github.com/apache/airflow/issues/9564


   





[GitHub] [airflow] feluelle commented on issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
feluelle commented on issue #9564:
URL: https://github.com/apache/airflow/issues/9564#issuecomment-716352466


   Most of the time a single restart of the scheduler service is enough to get around this. But this happens quite often.





[GitHub] [airflow] potiuk closed issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
potiuk closed issue #9564:
URL: https://github.com/apache/airflow/issues/9564


   





[GitHub] [airflow] potiuk commented on issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
potiuk commented on issue #9564:
URL: https://github.com/apache/airflow/issues/9564#issuecomment-716465840


   Hmm. Is it also super-slow compared to the virtualenv one?





[GitHub] [airflow] feluelle commented on issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
feluelle commented on issue #9564:
URL: https://github.com/apache/airflow/issues/9564#issuecomment-716593574


   Sorry, I forgot to respond. The issue for me was that I am running docker-compose and running the `db upgrade` command directly after the db container was created. The init failed, and that's why this issue occurred for me.
   ```
     fdb-airflow-db:
       image: library/postgres:latest
       container_name: fdb_airflow_db
       env_file: postgres.airflow.env
       ports:
         - 55433:5432
       restart: always
     fdb-airflow-db-init:
       <<: *airflow-environment
       build: .
       container_name: fdb_airflow_db_init
       command: db upgrade
       depends_on:
         - fdb-airflow-db
   ```
   If I manually run `docker-compose start fdb-airflow-db-init`, the `db upgrade` completes successfully and the scheduler does not die.





[GitHub] [airflow] feluelle commented on issue #9564: Official Airflow Docker Image Problem

Posted by GitBox <gi...@apache.org>.
feluelle commented on issue #9564:
URL: https://github.com/apache/airflow/issues/9564#issuecomment-716703669


   BTW, I fixed the problem entirely by using healthchecks from here: https://github.com/peter-evans/docker-compose-healthcheck, but I had to downgrade the docker-compose file version from `3.8` to `2.4`, as conditional healthchecks (`depends_on` with `condition`) no longer work in version 3+.
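   
   For reference, a minimal sketch of that pattern applied to the services above (compose file format 2.4; the healthcheck test is an assumption based on the linked repository, not taken from this thread):
   ```
     fdb-airflow-db:
       image: library/postgres:latest
       env_file: postgres.airflow.env
       healthcheck:
         # assumes the default postgres superuser; adjust to the user in postgres.airflow.env
         test: ["CMD-SHELL", "pg_isready -U postgres"]
         interval: 10s
         timeout: 5s
         retries: 5
     fdb-airflow-db-init:
       <<: *airflow-environment
       build: .
       command: db upgrade
       depends_on:
         # only run the init once the database actually accepts connections
         fdb-airflow-db:
           condition: service_healthy
   ```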

