Posted to dev@openwhisk.apache.org by Michele Sciabarra <mi...@sciabarra.com> on 2019/01/05 14:05:28 UTC

What is the problem with the DockerContainerFactory in Kubernetes?

I am deploying OpenWhisk in Kubernetes using the helm charts, it works, but I noticed the issues with the external event providers: "the issue is that user action containers created by the DockerContainerFactory are not configured to themselves be able to invoke Kubernetes services"

The documentation says that I need to either:

- use an external couchdb
- use a "lower performance" KubernetesContainerFactory
- specify the IP of the kube-dns server

I do not understand exactly what the problem is. Why is the KubernetesContainerFactory lower performance? Also, why does using an external couchdb or providing the IP of the DNS server solve this problem?

Furthermore, the documentation says to use Kubernetes 1.10 or 1.11, but I used 1.13 and it apparently works. Could the fact that it now uses CoreDNS help?

-- 
  Michele Sciabarra
  michele@sciabarra.com

Re: What is the problem with the DockerContainerFactory in Kubernetes?

Posted by David P Grove <gr...@us.ibm.com>.


Michele Sciabarra <mi...@sciabarra.com> wrote on 01/05/2019 09:05:28 AM:
>
> I am deploying OpenWhisk in Kubernetes using the helm charts, it
> works, but I noticed the issues with the external event providers:
> "the issue is that user action containers created by the
> DockerContainerFactory are not configured to themselves be able to
> invoke Kubernetes services"
>
> In the documentation, it says that  need either:
>
> - use an external couchdb
> - use a "lower performance" KubernetesContainerFactory
> - specify the IP of the kube-dns server
>
> I do not understand exactly what is the problem, ...
> Also, why using an
> external couchdb or providing the IP of the DNS server solve this
> problem?
>

See [1] for the DNS discussion.  In brief, the DockerContainerFactory by
default does not configure /etc/resolv.conf in the user action containers
to enable Kubernetes service names like couchdb.openwhisk to be resolved.
So when the action that OpenWhisk internally uses to create an alarm trigger
runs and tries to talk to the internal couchdb instance using this name,
the DNS lookup fails and therefore the trigger creation fails.   If we can
agree to merge something like [2] in the core project, then I do believe we
can smooth over this issue by doing some additional DNS configuration in
the Helm chart / invoker.yaml.
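To make the failure mode concrete, here is a minimal sketch (not OpenWhisk
code; `couchdb.openwhisk` is just the cluster-internal service name from the
Helm chart) of what the provider's action experiences when its container's
resolver cannot see kube-dns:

```python
import socket

def can_resolve(name: str) -> bool:
    """Return True if the current resolver can turn `name` into an IP."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

# Inside a pod, /etc/resolv.conf points at the cluster DNS, so a
# cluster-internal service name like couchdb.openwhisk resolves. Inside a
# container created by DockerContainerFactory with the default resolver, the
# same lookup fails, so the request to couchdb.openwhisk that would create
# the trigger never gets off the ground.
print(can_resolve("couchdb.openwhisk"))
```

The fixes in the docs all work around exactly this lookup: an external
couchdb has a name resolvable by ordinary DNS, and passing the kube-dns IP
to the invoker lets it configure the action containers' resolvers.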

> why the KubernetesContainerFactory is lower performance?


(a) Pod creation time is higher than docker container creation time,
therefore cold starts are more expensive (about 4x last time I measured).
(b) Log extraction via the Kubernetes APIs is extremely slow, which delays
containers from being ready for reuse, which reduces overall system
throughput.


> Furthermore, the documentation says to use Kubernetes 1.10 and 1.11,
> I used 1.13 and apparently works, the fact it now uses CoreDNS may help?
>

I do not believe the change from kube-dns to coredns will fix the provider
DNS problem.

I've also been deploying just fine on Kubernetes 1.12, but haven't updated
the docs because we aren't doing TravisCI testing with Kube 1.12 yet.

--dave

[1] https://github.com/apache/incubator-openwhisk-deploy-kube/issues/382
[2] https://github.com/apache/incubator-openwhisk/pull/4176