Posted to commits@airflow.apache.org by "Parhy (Jira)" <ji...@apache.org> on 2020/10/15 17:57:00 UTC

[jira] [Closed] (AIRFLOW-6798) Add option for service account values for KubernetesPodOperator

     [ https://issues.apache.org/jira/browse/AIRFLOW-6798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Parhy closed AIRFLOW-6798.
--------------------------
    Resolution: Fixed

> Add option for service account values for KubernetesPodOperator
> ---------------------------------------------------------------
>
>                 Key: AIRFLOW-6798
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-6798
>             Project: Apache Airflow
>          Issue Type: Bug
>          Components: contrib
>    Affects Versions: 1.10.3
>         Environment: dev
>            Reporter: Parhy
>            Priority: Major
>              Labels: features
>
> I am trying to run the DAG below in a Kubernetes (k8s) environment.
>  
> from airflow import DAG
> from datetime import datetime, timedelta
> from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
> from airflow import configuration as conf
> from airflow.contrib.kubernetes.pod import Resources
> default_args = {
>  'owner': 'airflow',
>  'depends_on_past': False,
>  'start_date': datetime(2019, 1, 1),
>  'email_on_failure': False,
>  'email_on_retry': False,
>  'retries': 1,
>  'retry_delay': timedelta(minutes=5),
> }
> namespace = conf.get('kubernetes', 'namespace')
> # This will detect the default namespace locally and read the
> # environment namespace when deployed to Astronomer.
> dag = DAG('example_kubernetes_pod',
>  schedule_interval='@once',
>  default_args=default_args)
> compute_resource = Resources()
> compute_resource.request_cpu = '500m'
> compute_resource.request_memory = '512Mi'
> compute_resource.limit_cpu = '800m'
> compute_resource.limit_memory = '1Gi'
> #compute_resource = {'request_cpu': '500m', 'request_memory': '512Mi', 'limit_cpu': '800m', 'limit_memory': '1Gi'}
> with dag:
>  k = KubernetesPodOperator(
>  namespace=namespace,
>  image="hello-world",
>  labels={"foo": "bar"},
>  name="airflow-test-pod",
>  task_id="task-one",
>  in_cluster=False, # if True, use in-cluster config; if False, load a kubeconfig file
>  resources=compute_resource,
>  config_file=None,
>  is_delete_operator_pod=True,
>  get_logs=True)
>  
> I am getting the error below:
>  
> HTTP response headers: HTTPHeaderDict({'Audit-Id': 'xxxxx', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Thu, 13 Feb 2020 17:00:11 GMT', 'Content-Length': '276'})
> HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:xxx:default\" cannot create resource \"pods\" in API group \"\" in the namespace \"xxx\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}
>  
> I understand it is trying to use the default service account in my namespace, and the default account does not have permission to create pods.
> Can we pass the name of a service account that I created which has the required permissions?
> Please let me know.
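
For context, KubernetesPodOperator accepts a `service_account_name` argument (check that it exists in your Airflow version) that ends up in the pod's `spec.serviceAccountName`. The sketch below is plain Python with no Airflow dependency; `build_pod_spec` is a hypothetical helper illustrating the mapping the operator performs, not Airflow code.

```python
# Minimal sketch (hypothetical helper, not Airflow internals): how an
# operator kwarg such as service_account_name would map into the pod
# manifest sent to the Kubernetes API.
def build_pod_spec(image, name, service_account_name=None):
    """Return a pod manifest dict; serviceAccountName is set only if given."""
    spec = {
        "restartPolicy": "Never",
        "containers": [{"name": "base", "image": image}],
    }
    if service_account_name:
        # Without this field, Kubernetes injects the namespace's "default"
        # service account, which is what produced the 403 above.
        spec["serviceAccountName"] = service_account_name
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": spec,
    }

pod = build_pod_spec("hello-world", "airflow-test-pod",
                     service_account_name="airflow-pod-runner")
print(pod["spec"]["serviceAccountName"])  # airflow-pod-runner
```

Passing `service_account_name="airflow-pod-runner"` (an illustrative name) to KubernetesPodOperator should have the same effect, provided the parameter is supported by the installed Airflow release.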
>  
> The KubernetesExecutor works fine because in that case the scheduler pod runs with a service account that has been granted permission, via a role binding, to create pods.
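
As an aside, the missing permission can be verified and granted with plain kubectl. Every name below is an illustrative placeholder, and `<namespace>` stands in for the redacted namespace from the error message; this is a sketch, not the exact setup used by the executor.

```shell
# Confirm that the default service account lacks the permission (placeholder namespace).
kubectl auth can-i create pods \
  --as=system:serviceaccount:<namespace>:default -n <namespace>

# Create a dedicated service account plus a Role/RoleBinding that allow pod
# management; "airflow-pod-runner" and "pod-runner" are placeholder names.
kubectl create serviceaccount airflow-pod-runner -n <namespace>
kubectl create role pod-runner --verb=create,get,list,watch,delete \
  --resource=pods -n <namespace>
kubectl create rolebinding pod-runner-binding --role=pod-runner \
  --serviceaccount=<namespace>:airflow-pod-runner -n <namespace>
```

A pod launched with that service account would then pass the `pods is forbidden` check that the default account fails.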
>  
> Thanks in advance,



--
This message was sent by Atlassian Jira
(v8.3.4#803005)