Posted to dev@mesos.apache.org by Michał Łowicki <ml...@gmail.com> on 2018/12/03 16:40:02 UTC

Re: Propose to create a Kubernetes framework for Mesos

On Thu, Nov 29, 2018 at 1:22 AM Vinod Kone <vi...@apache.org> wrote:

> Cameron and Michal: I would love to understand your motivations and use
> cases for a k8s Mesos framework in a bit more detail. Looks like you are
> willing to rewrite your existing app definitions into the k8s API spec. At this
> point, why are you still interested in Mesos as a CAAS backend? Is it
> because of scalability / reliability? Or is it because you still want to
> run non-k8s workloads/frameworks in this world? What are these workloads?
>

Mesos, with its scalability and its ability to run many frameworks (cron-like
jobs, Spark, proprietary ones), gives more flexibility in the long run. Right
now we're at the stage where the public version of the Marathon UI isn't
maintained, so we're looking for something with better community support.
Having an entity like a k8s-compliant scheduler could help with adopting
other community-driven solutions, but I also think that moving in that
direction should be a well-thought-out and planned process.


>
> In general, I'm in favor of Mesos shipping with a default scheduler.
> I think it might help with adoption, similar to what happened with the
> command/default executor. In hindsight, we should've done this a long time
> ago. But, oh well, we were too optimistic that a single "default" scheduler
> would rule the ecosystem, which didn't quite pan out.
>
> However, I'm not sure if re-implementing the k8s scheduler as a Mesos framework
> is the right approach. I imagine the k8s scheduler is a significant piece of
> code that we would need to re-implement, and on top of that, as new API objects
> are added to the k8s API, we would need to keep pace with the k8s scheduler for
> parity. The approach we (in the community) took with Spark (and Jenkins to some
> extent) was to let the scheduling innovation happen in the Spark community: we
> just let Spark launch Spark executors via Mesos and launch its tasks out of
> band of Mesos. We used to have a version of the Spark framework (fine-grained
> mode?) where Spark tasks were launched via Mesos offers, but that was
> deprecated, partly because of maintainability. Will this k8s framework have a
> similar problem? It sounds like one of the problems with the existing k8s
> framework implementations is the pre-launching of kubelets; can we use the
> k8s autoscaler to solve that problem?
>
> Also, I think (I might be wrong) most k8s users are not directly creating
> pods via the API but rather using higher-level abstractions like replica
> sets, stateful sets, daemon sets, etc. How will that fit into this
> architecture? Will the framework need to re-implement those controllers as
> well?
>
> Is there an integration point in k8s ecosystem where we can reuse the
> existing k8s schedulers and controllers but run the pods with mesos
> container runtime?
>
> All in all, I'm +1 to explore the ideas in a WG.
>
>
> On Wed, Nov 28, 2018 at 2:05 PM Paulo Pires <pi...@mesosphere.io> wrote:
>
> > Hello all,
> >
> > As a Kubernetes fan, I am excited about this proposal.
> > However, I would challenge this community to think more abstractly about
> > the problem you want to address and any solution requirements before
> > discussing implementation details, such as adopting VK.
> >
> > Don't get me wrong, VK is a great concept: a Kubernetes node that
> > delegates container management to someone else.
> > But allow me to clarify a few things about it:
> >
> > - VK simply provides a very limited subset of the kubelet functionality,
> > namely the Kubernetes node registration and the observation of Pods that
> > have been assigned to it. It doesn't do pod (intra or inter) networking,
> > nor does it delegate to CNI; it doesn't do volume mounting, and so on.
> > - Like the kubelet, VK doesn't implement scheduling. It also doesn't
> > understand anything other than a Pod and its dependencies (e.g. ConfigMap
> > or Secret), meaning other primitives, such as DaemonSet, Deployment, and
> > StatefulSet, or extensions, such as CRDs, are unknown to the VK.
> > - While the kubelet manages containers through the CRI API (Container
> > Runtime Interface), the VK does it through its own Provider API.
> > - The kubelet translates from Kubernetes primitives to CRI primitives, so
> > CRI implementations only need to understand CRI. However, the VK does no
> > translation and passes Kubernetes primitives directly to a provider,
> > requiring the VK provider to understand Kubernetes primitives.
> > - The kubelet talks to CRI implementations through a gRPC socket. VK talks
> > to providers in-process and is highly opinionated about the fact that a
> > provider has no lifecycle (there's no _start_ or _stop_, as there would be
> > for a framework). There is talk of exposing the Provider API over gRPC, but
> > it's not a trivial decision [2].
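
To make those last points concrete, here is a rough Go sketch of what a
Mesos-backed provider would have to deal with: it is handed Kubernetes Pod
objects directly, with no CRI-style translation layer in between. The
PodLifecycleProvider interface and MesosProvider type below are illustrative
stand-ins only, not the actual virtual-kubelet Provider API, whose method set
and signatures may differ.

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // PodLifecycleProvider is an illustrative stand-in for the kind of contract
    // the virtual kubelet expects from a provider: it receives Kubernetes Pod
    // objects as-is, so the provider itself must understand them.
    type PodLifecycleProvider interface {
        CreatePod(ctx context.Context, pod *v1.Pod) error
        DeletePod(ctx context.Context, pod *v1.Pod) error
    }

    // MesosProvider is a hypothetical provider backed by a Mesos framework.
    // With no translation layer, it has to map each Pod container to something
    // a Mesos framework can launch.
    type MesosProvider struct {
        launched map[string]*v1.Pod // keyed by namespace/name
    }

    func (p *MesosProvider) CreatePod(ctx context.Context, pod *v1.Pod) error {
        key := pod.Namespace + "/" + pod.Name
        for _, c := range pod.Spec.Containers {
            // A real provider would build Mesos task/executor descriptions here
            // and submit them via a scheduler; this sketch only records intent.
            fmt.Printf("would launch container %s of pod %s (image %s)\n",
                c.Name, key, c.Image)
        }
        p.launched[key] = pod
        return nil
    }

    func (p *MesosProvider) DeletePod(ctx context.Context, pod *v1.Pod) error {
        delete(p.launched, pod.Namespace+"/"+pod.Name)
        return nil
    }

    func main() {
        // Compile-time check that MesosProvider satisfies the sketched contract.
        var _ PodLifecycleProvider = &MesosProvider{launched: map[string]*v1.Pod{}}
    }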
> >
> > Now, if you are still thinking about implementation details: having some
> > experience trying to create a VK provider for Mesos[1], I can tell you that
> > the VK, as it is today, is not a seamless fit.
> > That said, I am willing to help you figure out the design and pick the
> > right pieces to execute, if this is indeed something you want to do.
> >
> > 1 - https://github.com/pires/virtual-kubelet/tree/mesos_integration/providers/mesos
> > 2 - https://github.com/virtual-kubelet/virtual-kubelet/issues/160
> >
> > Cheers,
> > Pires
> >
> > On Wed, Nov 28, 2018 at 5:38 AM Jie Yu <yu...@gmail.com> wrote:
> >
> >> + user list as well to hear more feedback from Mesos users.
> >>
> >> I am +1 on this proposal to create a Mesos framework that exposes the k8s
> >> API and provides a nodeless
> >> <https://docs.google.com/document/d/1Y1GEKOIB1u5P06YeQJYl9WVaUqxrq3fO8GZ7K6MUGms/edit>
> >> experience to users.
> >>
> >> Creating a Mesos framework that provides the k8s API is not a new idea.
> >> For instance, the following are two prior attempts:
> >> 1. https://github.com/kubernetes-retired/kube-mesos-framework
> >> 2. https://mesosphere.com/product/kubernetes-engine/
> >>
> >> Both of the above solutions run unmodified kubelets for workloads
> >> (i.e., pods). Some users might prefer it that way, and we should not
> >> preclude that on Mesos. However, the reason this nodeless (aka virtual
> >> kubelet) idea got me very excited is that it gives us an opportunity to
> >> create a truly integrated solution to bridge k8s and Mesos.
> >>
> >> K8s got popular for good reasons. IMO, the following are the key ones:
> >> (1) API machinery. This includes the API extension mechanism (CRD
> >> <https://docs.google.com/document/d/1Y1GEKOIB1u5P06YeQJYl9WVaUqxrq3fO8GZ7K6MUGms/edit>),
> >> a simple-to-program client, versioning, authn/authz, etc.
> >> (2) It exposes basic scheduling primitives and lets users/vendors focus on
> >> orchestration (i.e., Operators). In contrast, a Mesos framework is
> >> significantly harder to program due to the need to do scheduling as well.
> >> Although we have scheduling libraries like Fenzo
> >> <https://github.com/Netflix/Fenzo>, the whole community suffers from
> >> fragmentation because there's no "default" solution.
> >>
> >> *Why is this proposal more integrated than prior solutions?*
> >>
> >> This is because prior solutions are more like installers for k8s. You
> >> either need to pre-reserve resources
> >> <https://mesosphere.com/product/kubernetes-engine/> for the kubelet, or
> >> fork the k8s scheduler to bring up kubelets on demand
> >> <https://github.com/kubernetes-retired/kube-mesos-framework>. Complexity
> >> is definitely a concern since both systems are involved. In contrast, this
> >> proposal proposes to run k8s workloads (pods) directly on Mesos by
> >> translating pod specs to Mesos tasks/executors. It's just another Mesos
> >> framework, but you can extend the framework's behavior using the k8s API
> >> extension mechanisms (CRDs and Operators)!
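
A rough sketch of the translation described above, assuming only the
well-known k8s Pod types: the mesosTask struct is a simplified stand-in, and a
real framework would instead emit mesos-go protobuf TaskInfo/ExecutorInfo
messages and attach them to accepted offers.

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // mesosTask is a simplified stand-in for a Mesos TaskInfo: just enough to
    // show which pod fields a translating framework would have to read.
    type mesosTask struct {
        Name    string
        Image   string
        Command []string
        CPUs    float64
        MemMB   int64
    }

    // podToTasks maps every container of a pod spec to one task description,
    // using the container's resource requests as the Mesos resource ask.
    func podToTasks(pod *v1.Pod) []mesosTask {
        var tasks []mesosTask
        for _, c := range pod.Spec.Containers {
            cpu := c.Resources.Requests[v1.ResourceCPU]
            mem := c.Resources.Requests[v1.ResourceMemory]
            tasks = append(tasks, mesosTask{
                Name:    pod.Namespace + "." + pod.Name + "." + c.Name,
                Image:   c.Image,
                Command: append(append([]string{}, c.Command...), c.Args...),
                CPUs:    float64(cpu.MilliValue()) / 1000.0,
                MemMB:   mem.Value() / (1024 * 1024),
            })
        }
        return tasks
    }

    func main() {
        // A tiny example pod with one container and explicit resource requests.
        pod := &v1.Pod{}
        pod.Namespace, pod.Name = "default", "web"
        pod.Spec.Containers = []v1.Container{{
            Name:  "nginx",
            Image: "nginx:1.15",
            Resources: v1.ResourceRequirements{Requests: v1.ResourceList{
                v1.ResourceCPU:    resource.MustParse("500m"),
                v1.ResourceMemory: resource.MustParse("256Mi"),
            }},
        }}
        fmt.Printf("%+v\n", podToTasks(pod))
    }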
> >>
> >> *How does this compare to just using k8s?*
> >>
> >> First of all, IMO, k8s is just an API spec. Any implementation that
> >> passes the conformance tests provides a vanilla k8s experience. I understand
> >> that by going nodeless, some of the concepts in k8s no longer apply (e.g.,
> >> NodeAffinity, NodeSelector). I am actually less worried about this, for two
> >> reasons: 1) big stakeholders are behind nodeless, including Microsoft, AWS,
> >> Alicloud, etc.; 2) the k8s API is evolving, and nodeless has real use cases
> >> (e.g., in public clouds).
> >>
> >> In fact, we can also choose to implement those k8s APIs that make the
> >> most sense first, and maybe define our own APIs, leveraging the
> >> extensibility of the k8s API machinery!
> >>
> >> If we do want to compare this to the upstream k8s implementation, I think
> >> the main benefits are:
> >> 1) You can still run your existing custom Mesos frameworks as they are
> >> today, but start to provide your users some k8s API experience.
> >> 2) Scalability. Mesos is inherently more scalable than k8s because it
> >> makes different trade-offs. You can run multiple copies of the same
> >> framework (similar to Marathon on Marathon) to reach larger scale if the
> >> k8s framework itself cannot scale beyond a certain limit.
> >>
> >> *Why put this framework in the Mesos repo?*
> >>
> >> Historically, the problem with the Mesos community has been fragmentation.
> >> People create different solutions for the same set of problems. Having a
> >> "blessed" solution in the Mesos repo has the following benefits:
> >> 1) License and ownership. It's under Apache already.
> >> 2) It attracts contributions. Less fragmentation.
> >> 3) The repository's established quality bar.
> >>
> >> *What's my suggestion for next steps?*
> >>
> >> I suggest we create a working group for this. Any other PMC members who
> >> like this idea, please chime in here.
> >>
> >> - Jie
> >>
> >> On Fri, Nov 23, 2018 at 5:24 AM 张冬冬 <me...@icloud.com.invalid>
> >> wrote:
> >>
> >>>
> >>>
> >>> Sent from my iPhone
> >>>
> >>> > On Nov 23, 2018, at 20:37, Alex Rukletsov <al...@mesosphere.com> wrote:
> >>> >
> >>> > I'm in favour of the proposal, Cameron. Building a bridge between Mesos
> >>> > and Kubernetes will be beneficial for both communities. The virtual
> >>> > kubelet effort looks promising indeed and is definitely a worthwhile
> >>> > approach to building the bridge.
> >>> >
> >>> > While we will need some sort of a scheduler when implementing a provider
> >>> > for Mesos, we don't need to implement and use a "default" one: a simple
> >>> > mesos-go based scheduler will be fine for a start. We can of course
> >>> > consider building a default scheduler, but that would significantly
> >>> > increase the size of the project.
> >>> >
> >>> > An exercise we will have to do here is to determine which parts of a
> >>> > Kubernetes task specification can be "converted" and hence launched on a
> >>> > Mesos cluster. Once we have a working prototype, we can start testing and
> >>> > collecting data.
> >>> >
> >>> > Do you want to come up with a plan and maybe a more detailed proposal?
> >>> >
> >>> > Best,
> >>> > Alex
> >>>
> >>
>


-- 
BR,
Michał Łowicki

Re: Propose to create a Kubernetes framework for Mesos

Posted by Jie Yu <yu...@gmail.com>.
Thanks for the discussion so far! Looks like folks are pretty interested in
this, which is great!

I created a channel in the Apache Mesos Slack (#virtual-kubelet). Please
join the channel for further discussions! (See these instructions
<http://mesos.apache.org/community/#slack> for joining the Apache Mesos Slack.)

Given that the folks interested in this are spread across the world, the
working group will be coordinated asynchronously in the Slack channel.

- Jie

On Mon, Dec 10, 2018 at 11:20 AM Cameron Chen <yi...@gmail.com> wrote:

> We now have both Mesos and Kubernetes (not running on Mesos) running in
> production. As Jie mentioned, with this proposal I mainly want to solve the
> static partition issue. I agree with exploring the ideas in a WG.
>
>
> Cameron
>
> Jie Yu <yu...@gmail.com> wrote on Thu, Dec 6, 2018 at 9:48 AM:
>
> > I'd like to get some feedback on what Mesos users want. I can potentially
> > see two major use cases:
> >
> > (1) I just want k8s to run on Mesos, along with other Mesos frameworks,
> > sharing the same resource pool. I don't really care about nodeless. Ideally,
> > I'd like to run upstream k8s (including the kubelet). The original
> > k8s-on-Mesos framework has been retired, and the new Mesosphere MKE is not
> > open source and only runs on Mesosphere DC/OS. I need an open source
> > solution here.
> > (2) I want nodeless because I believe it has a tighter integration with
> > Mesos, as compared to (1), and can solve the static partition issue. (1) is
> > more like a k8s installer, and you can do that without Mesos.
> >
> > *Can folks chime in here?*
> >
> > However, I'm not sure if re-implementing k8s-scheduler as a Mesos
> framework
> > > is the right approach. I imagine k8s scheduler is significant piece of
> > > code  which we need to re-implement and on top of it as new API objects
> > are
> > > added to k8s API, we need to keep pace with k8s scheduler for parity.
> The
> > > approach we (in the community) took with Spark (and Jenkins to some
> > extent)
> > > was for the scheduling innovation happen in Spark community and we just
> > let
> > > Spark launch spark executors via Mesos and let Spark launch its tasks
> out
> > > of band of Mesos. We used to have a version of Spark framework (fine
> > > grained mode?) where spark tasks were launched via Mesos offers but
> that
> > > was deprecated, partly because of maintainability. Will this k8s
> > framework
> > > have similar problem? Sounds like one of the problems with the existing
> > k8s
> > > framework implementations it the pre-launching of kubelets; can we use
> > the
> > > k8s autoscaler to solve that problem?
> >
> >
> > This is a good concern. It's around 17k lines of code in the k8s scheduler.
> >
> > Jies-MacBook-Pro:scheduler jie$ pwd
> > /Users/jie/workspace/kubernetes/pkg/scheduler
> > Jies-MacBook-Pro:scheduler jie$ loc --exclude .*_test.go
> > --------------------------------------------------------------------------------
> >  Language             Files        Lines        Blank      Comment         Code
> > --------------------------------------------------------------------------------
> >  Go                      83        17429         2165         3798        11466
> > --------------------------------------------------------------------------------
> >  Total                   83        17429         2165         3798        11466
> > --------------------------------------------------------------------------------
> >
> > Also, I think (I might be wrong) most k8s users are not directly creating
> > > pods via the API but rather using higher level abstractions like
> replica
> > > sets, stateful sets, daemon sets etc. How will that fit into this
> > > architecture? Will the framework need to re-implement those controllers
> > as
> > > well?
> >
> >
> > This is not true. You can re-use most of the controllers. Those controllers
> > will create pods, as you said, and the Mesos framework will be responsible
> > for scheduling the pods they create.
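
A minimal sketch of that split, assuming a recent client-go (the kubeconfig
path and polling loop below are purely illustrative, not a prescribed design):
the existing controllers keep creating pods, and the framework only has to
notice pods that have not been bound to a node yet and place them using Mesos
offers.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from a kubeconfig (the path here is illustrative).
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        for {
            // Pods created by Deployments, ReplicaSets, etc. start with an empty
            // spec.nodeName; those are exactly the ones the framework must place.
            pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(
                context.TODO(),
                metav1.ListOptions{FieldSelector: "spec.nodeName="},
            )
            if err != nil {
                panic(err)
            }
            for _, pod := range pods.Items {
                // A real framework would match the pod against outstanding Mesos
                // offers and launch tasks; here we just report what it would do.
                fmt.Printf("unscheduled pod %s/%s\n", pod.Namespace, pod.Name)
            }
            time.Sleep(10 * time.Second)
        }
    }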
> >
> > - Jie
> >
> > On Mon, Dec 3, 2018 at 9:56 AM Cecile, Adam <Ad...@hitec.lu>
> wrote:
> >
> > > On 12/3/18 5:40 PM, Michał Łowicki wrote:
> > >
> > >
> > >
> > > On Thu, Nov 29, 2018 at 1:22 AM Vinod Kone <vi...@apache.org>
> wrote:
> > >
> > >> Cameron and Michal: I would love to understand your motivations and
> use
> > >> cases for a k8s Mesos framework in a bit more detail. Looks like you
> are
> > >> willing to rewrite your existing app definitions into k8s API spec. At
> > >> this
> > >> point, why are you still interested in Mesos as a CAAS backend? Is it
> > >> because of scalability / reliability? Or is it because you still want
> to
> > >> run non-k8s workloads/frameworks in this world? What are these
> > workloads?
> > >>
> > >
> > > Mesos with its scalability and ability to run many frameworks (like
> > > cron-like jobs, spark, proprietary) gives more flexibility in the long
> > run.
> > > Right now we're at the stage where Marathon UI in public version isn't
> > > maintained so looking to have something with better community support.
> > > Having entity like k8s-compliant scheduler maybe could help with
> adopting
> > > other community-driven solutions but I also think that going into that
> > > direction should be well thought and planned process.
> > >
> > > We share the exact same feeling. My next project will probably go
> > > full k8s because I don't feel confident in Mesos's future as an
> > > open-source project.
> > >
> > > The Marathon UI still not supporting GPUs (even in JSON mode, thanks to
> > > marshaling) is the tip of the iceberg. I reported the issue ages ago, and I
> > > can understand that nobody cares because DC/OS comes with a different
> > > (closed-source, I bet) UI.
> > >
> > >
> > >
> > >>
> > >> In general, I'm in favor of Mesos coming shipped with a default
> > scheduler.
> > >> I think it might help with the adoption similar to what happened with
> > the
> > >> command/default executor. In hindsight, we should've done this a long
> > time
> > >> ago. But, oh well, we were too optimistic that a single "default"
> > >> scheduler
> > >> will rule in the ecosystem which didn't quite pan out.
> > >>
> > >> However, I'm not sure if re-implementing k8s-scheduler as a Mesos
> > >> framework
> > >> is the right approach. I imagine k8s scheduler is significant piece of
> > >> code  which we need to re-implement and on top of it as new API
> objects
> > >> are
> > >> added to k8s API, we need to keep pace with k8s scheduler for parity.
> > The
> > >> approach we (in the community) took with Spark (and Jenkins to some
> > >> extent)
> > >> was for the scheduling innovation happen in Spark community and we
> just
> > >> let
> > >> Spark launch spark executors via Mesos and let Spark launch its tasks
> > out
> > >> of band of Mesos. We used to have a version of Spark framework (fine
> > >> grained mode?) where spark tasks were launched via Mesos offers but
> that
> > >> was deprecated, partly because of maintainability. Will this k8s
> > framework
> > >> have similar problem? Sounds like one of the problems with the
> existing
> > >> k8s
> > >> framework implementations it the pre-launching of kubelets; can we use
> > the
> > >> k8s autoscaler to solve that problem?
> > >>
> > >> Also, I think (I might be wrong) most k8s users are not directly
> > creating
> > >> pods via the API but rather using higher level abstractions like
> replica
> > >> sets, stateful sets, daemon sets etc. How will that fit into this
> > >> architecture? Will the framework need to re-implement those
> controllers
> > as
> > >> well?
> > >>
> > >> Is there an integration point in k8s ecosystem where we can reuse the
> > >> existing k8s schedulers and controllers but run the pods with mesos
> > >> container runtime?
> > >>
> > >> All, in all, I'm +1 to explore the ideas in a WG.
> > >>
> > >>
> > >> On Wed, Nov 28, 2018 at 2:05 PM Paulo Pires <pi...@mesosphere.io>
> > wrote:
> > >>
> > >> > Hello all,
> > >> >
> > >> > As a Kubernetes fan, I am excited about this proposal.
> > >> > However, I would challenge this community to think more abstractly
> > about
> > >> > the problem you want to address and any solution requirements before
> > >> > discussing implementation details, such as adopting VK.
> > >> >
> > >> > Don't take me wrong, VK is a great concept: a Kubernetes node that
> > >> > delegates container management to someone else.
> > >> > But allow me to clarify a few things about it:
> > >> >
> > >> > - VK simply provides a very limited subset of the kubelet
> > functionality,
> > >> > namely the Kubernetes node registration and the observation of Pods
> > that
> > >> > have been assigned to it. It doesn't do pod (intra or inter)
> > networking
> > >> nor
> > >> > delegates to CNI, doesn't do volume mounting, and so on.
> > >> > - Like the kubelet, VK doesn't implement scheduling. It also doesn't
> > >> > understand anything else than a Pod and its dependencies (e.g.
> > >> ConfigMap or
> > >> > Secret), meaning other primitives, such as DaemonSet, Deployment,
> > >> > StatefulSet, or extensions, such as CRDs are unknown to the VK.
> > >> > - While the kubelet manages containers through CRI API (Container
> > >> Runtime
> > >> > Interface), the VK does it through its own Provider API.
> > >> > - kubelet translates from Kubernetes primitives to CRI primitives,
> so
> > >> CRI
> > >> > implementations only need to understand CRI. However, the VK does no
> > >> > translation and passes Kubernetes primitives directly to a provider,
> > >> > requiring the VK provider to understand Kubernetes primitives.
> > >> > - kubelet talks to CRI implementations through a gRPC socket. VK
> talks
> > >> to
> > >> > providers in-process and is highly-opinionated about the fact a
> > provider
> > >> > has no lifecycle (there's no _start_ or _stop_, as there would be
> for
> > a
> > >> > framework). There are talks about having Provide API over gRPC but
> > it's
> > >> not
> > >> > trivial to decide[2].
> > >> >
> > >> > Now, if you are still thinking about implementation details, and
> > having
> > >> > some experience trying to create a VK provider for Mesos[1], I can
> > tell
> > >> you
> > >> > the VK, as is today, is not a seamless fit.
> > >> > That said, I am willing to help you figure out the design and pick
> the
> > >> > right pieces to execute, if this is indeed something you want to do.
> > >> >
> > >> > 1 -
> > >> >
> > >>
> >
> https://github.com/pires/virtual-kubelet/tree/mesos_integration/providers/mesos
> > >> > 2 - https://github.com/virtual-kubelet/virtual-kubelet/issues/160
> > >> >
> > >> > Cheers,
> > >> > Pires
> > >> >
> > >> > On Wed, Nov 28, 2018 at 5:38 AM Jie Yu <yu...@gmail.com> wrote:
> > >> >
> > >> >> + user list as well to hear more feedback from Mesos users.
> > >> >>
> > >> >> I am +1 on this proposal to create a Mesos framework that exposes
> k8s
> > >> >> API, and provide nodeless
> > >> >> <
> > >>
> >
> https://docs.google.com/document/d/1Y1GEKOIB1u5P06YeQJYl9WVaUqxrq3fO8GZ7K6MUGms/edit
> > >> >
> > >> >> experience to users.
> > >> >>
> > >> >> Creating Mesos framework that provides k8s API is not a new idea.
> For
> > >> >> instance, the following are the two prior attempts:
> > >> >> 1. https://github.com/kubernetes-retired/kube-mesos-framework
> > >> >> 2. https://mesosphere.com/product/kubernetes-engine/
> > >> >>
> > >> >> Both of the above solutions will run unmodified kubelets for
> > workloads
> > >> >> (i.e., pods). Some users might prefer that way, and we should not
> > >> preclude
> > >> >> that on Mesos. However, the reason this nodeless (aka, virtual
> > kubelet)
> > >> >> idea got me very excited is because it provides us an opportunity
> to
> > >> create
> > >> >> a truly integrated solution to bridge k8s and Mesos.
> > >> >>
> > >> >> K8s gets popular for reasons. IMO, the followings are the key:
> > >> >> (1) API machinery. This includes API extension mechanism (CRD
> > >> >> <
> > >>
> >
> https://docs.google.com/document/d/1Y1GEKOIB1u5P06YeQJYl9WVaUqxrq3fO8GZ7K6MUGms/edit
> > >> >),
> > >> >> simple-to-program client, versioning, authn/authz, etc.
> > >> >> (2) It expose basic scheduling primitives, and let users/vendors
> > focus
> > >> on
> > >> >> orchestration (i.e., Operators). In contrast, Mesos framework is
> > >> >> significantly harder to program due to the need for doing
> scheduling
> > >> also.
> > >> >> Although we have scheduling libraries like Fenzo
> > >> >> <https://github.com/Netflix/Fenzo>, the whole community suffers
> from
> > >> >> fragmentation because there's no "default" solution.
> > >> >>
> > >> >> ** Why this proposal is more integrated than prior solutions?*
> > >> >>
> > >> >> This is because prior solutions are more like installer for k8s.
> You
> > >> >> either need to pre-reserve resources
> > >> >> <https://mesosphere.com/product/kubernetes-engine/> for kubelet,
> or
> > >> fork
> > >> >> k8s scheduler to bring up kubelet on demand
> > >> >> <https://github.com/kubernetes-retired/kube-mesos-framework>.
> > >> Complexity
> > >> >> is definitely a concern since both systems are involved. In
> contrast,
> > >> the
> > >> >> proposal propose to run k8s workloads (pods) directly on Mesos by
> > >> >> translating pod spec to tasks/executors in Mesos. It's just another
> > >> Mesos
> > >> >> framework, but you can extend that framework behavior using k8s API
> > >> >> extension mechanism (CRD and Operator)!
> > >> >>
> > >> >> ** Compare to just using k8s?*
> > >> >>
> > >> >> First of all, IMO, k8s is just an API spec. Any implementation that
> > >> >> passes conformance tests is vanilla k8s experience. I understand
> that
> > >> by
> > >> >> going nodeless, some of the concepts in k8s no longer applies
> (e.g.,
> > >> >> NodeAffinity, NodeSelector). I am actually less worried about this
> > for
> > >> two
> > >> >> reasons: 1) Big stakeholders are behind nodeless, including
> > Microsoft,
> > >> AWS,
> > >> >> Alicloud, etc; 2) K8s API is evolving, and nodeless has real use
> > cases
> > >> >> (e.g., in public clouds).
> > >> >>
> > >> >> In fact, we can also choose to implement those k8s APIs that make
> the
> > >> >> most sense first, and maybe define our own APIs, leveraging the
> > >> >> extensibility of the k8s API machinery!
> > >> >>
> > >> >> If we do want to compare to upstream k8s implementation, i think
> the
> > >> main
> > >> >> benefit is that:
> > >> >> 1) You can still run your existing custom Mesos frameworks as it is
> > >> >> today, but start to provide your users some k8s API experiences
> > >> >> 2) Scalability. Mesos is inherently more scalable than k8s because
> it
> > >> >> takes different trade-offs. You can run multiple copies of the same
> > >> >> frameworks (similar to marathon on marathon) to reach large scale
> if
> > >> the
> > >> >> k8s framework itself cannot scale beyond certain limit.
> > >> >>
> > >> >> ** Why putting this framework in Mesos repo?*
> > >> >>
> > >> >> Historically, the problem with Mesos community is fragmentation.
> > People
> > >> >> create different solutions for the same set of problems. Having a
> > >> "blessed"
> > >> >> solution in the Mesos repo has the following benefits:
> > >> >> 1) License and ownership. It's under Apache already.
> > >> >> 2) Attract contributions. Less fragmentation.
> > >> >> 3) Established high quality in the repository.
> > >> >>
> > >> >> **** What's my suggestion for next steps? ****
> > >> >>
> > >> >> I suggest we create a working group for this. Any other PMC that
> > likes
> > >> >> this idea, please chime in here.
> > >> >>
> > >> >> - Jie
> > >> >>
> > >> >> On Fri, Nov 23, 2018 at 5:24 AM 张冬冬 <meagleglass@icloud.com
> .invalid>
> > >> >> wrote:
> > >> >>
> > >> >>>
> > >> >>>
> > >> >>> Sent from my iPhone
> > >> >>>
> > >> >>> > On Nov 23, 2018, at 20:37, Alex Rukletsov <al...@mesosphere.com> wrote:
> > >> >>> >
> > >> >>> > I'm in favour of the proposal, Cameron. Building a bridge
> between
> > >> >>> Mesos and
> > >> >>> > Kubernetes will be beneficial for both communities. Virtual
> > kubelet
> > >> >>> effort
> > >> >>> > looks promising indeed and is definitely a worthwhile approach
> to
> > >> >>> build the
> > >> >>> > bridge.
> > >> >>> >
> > >> >>> > While we will need some sort of a scheduler when implementing a
> > >> >>> provider
> > >> >>> > for mesos, we don't need to implement and use a "default" one: a
> > >> simple
> > >> >>> > mesos-go based scheduler will be fine for the start. We can of
> > >> course
> > >> >>> > consider building a default scheduler, but this will
> significantly
> > >> >>> increase
> > >> >>> > the size of the project.
> > >> >>> >
> > >> >>> > An exercise we will have to do here is determine which parts of
> a
> > >> >>> > kubernetes task specification can be "converted" and hence
> > launched
> > >> on
> > >> >>> a
> > >> >>> > Mesos cluster. Once we have a working prototype we can start
> > testing
> > >> >>> and
> > >> >>> > collecting data.
> > >> >>> >
> > >> >>> > Do you want to come up with a plan and maybe a more detailed
> > >> proposal?
> > >> >>> >
> > >> >>> > Best,
> > >> >>> > Alex
> > >> >>>
> > >> >>
> > >>
> > >
> > >
> > > --
> > > BR,
> > > Michał Łowicki
> > >
> > >
> > >
> >
>

Re: Propose to create a Kubernetes framework for Mesos

Posted by Cameron Chen <yi...@gmail.com>.
We now have both Mesos and Kubernetes (not running on Mesos) running in
production. As Jie mentioned, with this proposal I mainly want to solve the
static partition issue. I agree with exploring the ideas in a WG.


Cameron

Jie Yu <yu...@gmail.com> wrote on Thu, Dec 6, 2018 at 9:48 AM:

> I'd like to get some feedback on what Mesos users want. I can potentially
> see two major use cases:
>
> (1) I just want k8s to run on Mesos, along with other Mesos frameworks,
> sharing the same resources pool. I don't really care about nodeless.
> Ideally, i'd like to run upstream k8s (include kubelet). The original k8s
> on mesos framework has been retired, and the new Mesosphere MKE is not open
> source, and only runs on Mesosphere DC/OS. I need one open source solution
> here.
> (2) I want nodeless because I believe it has a tighter integration with
> Mesos, as compared to (2), and can solve the static partition issue. (1) is
> more like a k8s installer, and you can do that without Mesos.
>
> *Can folks chime in here?*
>
> However, I'm not sure if re-implementing k8s-scheduler as a Mesos framework
> > is the right approach. I imagine k8s scheduler is significant piece of
> > code  which we need to re-implement and on top of it as new API objects
> are
> > added to k8s API, we need to keep pace with k8s scheduler for parity. The
> > approach we (in the community) took with Spark (and Jenkins to some
> extent)
> > was for the scheduling innovation happen in Spark community and we just
> let
> > Spark launch spark executors via Mesos and let Spark launch its tasks out
> > of band of Mesos. We used to have a version of Spark framework (fine
> > grained mode?) where spark tasks were launched via Mesos offers but that
> > was deprecated, partly because of maintainability. Will this k8s
> framework
> > have similar problem? Sounds like one of the problems with the existing
> k8s
> > framework implementations it the pre-launching of kubelets; can we use
> the
> > k8s autoscaler to solve that problem?
>
>
> This is a good concern. It's around 17k lines of code in k8s scheduler.
>
> Jies-MacBook-Pro:scheduler jie$ pwd
> /Users/jie/workspace/kubernetes/pkg/scheduler
> Jies-MacBook-Pro:scheduler jie$ loc --exclude .*_test.go
>
> --------------------------------------------------------------------------------
>  Language             Files        Lines        Blank      Comment
>  Code
>
> --------------------------------------------------------------------------------
>  Go                      83        17429         2165         3798
> 11466
>
> --------------------------------------------------------------------------------
>  Total                   83        17429         2165         3798
> 11466
>
> --------------------------------------------------------------------------------
>
> Also, I think (I might be wrong) most k8s users are not directly creating
> > pods via the API but rather using higher level abstractions like replica
> > sets, stateful sets, daemon sets etc. How will that fit into this
> > architecture? Will the framework need to re-implement those controllers
> as
> > well?
>
>
> This is not true. You can re-use most of the controllers. Those controllers
> will create pods as you said, and the mesos framework will be responsible
> for scheduling those pods created.
>
> - Jie
>
> On Mon, Dec 3, 2018 at 9:56 AM Cecile, Adam <Ad...@hitec.lu> wrote:
>
> > On 12/3/18 5:40 PM, Michał Łowicki wrote:
> >
> >
> >
> > On Thu, Nov 29, 2018 at 1:22 AM Vinod Kone <vi...@apache.org> wrote:
> >
> >> Cameron and Michal: I would love to understand your motivations and use
> >> cases for a k8s Mesos framework in a bit more detail. Looks like you are
> >> willing to rewrite your existing app definitions into k8s API spec. At
> >> this
> >> point, why are you still interested in Mesos as a CAAS backend? Is it
> >> because of scalability / reliability? Or is it because you still want to
> >> run non-k8s workloads/frameworks in this world? What are these
> workloads?
> >>
> >
> > Mesos with its scalability and ability to run many frameworks (like
> > cron-like jobs, spark, proprietary) gives more flexibility in the long
> run.
> > Right now we're at the stage where Marathon UI in public version isn't
> > maintained so looking to have something with better community support.
> > Having entity like k8s-compliant scheduler maybe could help with adopting
> > other community-driven solutions but I also think that going into that
> > direction should be well thought and planned process.
> >
> > We're sharing the exact same feeling. My next project will probably go
> > full k8s because I don't feel confident in mesos future as an opensource
> > project.
> >
> > Marathon UI still not supporting GPUs (even in JSON mode, thanks to
> > marshaling) is the tip of the iceberg. I reported the issue ages ago and
> I
> > can understand nobody cares because DC/OS comes with a different
> > (closed-source I bet) UI.
> >
> >
> >
> >>
> >> In general, I'm in favor of Mesos coming shipped with a default
> scheduler.
> >> I think it might help with the adoption similar to what happened with
> the
> >> command/default executor. In hindsight, we should've done this a long
> time
> >> ago. But, oh well, we were too optimistic that a single "default"
> >> scheduler
> >> will rule in the ecosystem which didn't quite pan out.
> >>
> >> However, I'm not sure if re-implementing k8s-scheduler as a Mesos
> >> framework
> >> is the right approach. I imagine k8s scheduler is significant piece of
> >> code  which we need to re-implement and on top of it as new API objects
> >> are
> >> added to k8s API, we need to keep pace with k8s scheduler for parity.
> The
> >> approach we (in the community) took with Spark (and Jenkins to some
> >> extent)
> >> was for the scheduling innovation happen in Spark community and we just
> >> let
> >> Spark launch spark executors via Mesos and let Spark launch its tasks
> out
> >> of band of Mesos. We used to have a version of Spark framework (fine
> >> grained mode?) where spark tasks were launched via Mesos offers but that
> >> was deprecated, partly because of maintainability. Will this k8s
> framework
> >> have similar problem? Sounds like one of the problems with the existing
> >> k8s
> >> framework implementations it the pre-launching of kubelets; can we use
> the
> >> k8s autoscaler to solve that problem?
> >>
> >> Also, I think (I might be wrong) most k8s users are not directly
> creating
> >> pods via the API but rather using higher level abstractions like replica
> >> sets, stateful sets, daemon sets etc. How will that fit into this
> >> architecture? Will the framework need to re-implement those controllers
> as
> >> well?
> >>
> >> Is there an integration point in k8s ecosystem where we can reuse the
> >> existing k8s schedulers and controllers but run the pods with mesos
> >> container runtime?
> >>
> >> All, in all, I'm +1 to explore the ideas in a WG.
> >>
> >>
> >> On Wed, Nov 28, 2018 at 2:05 PM Paulo Pires <pi...@mesosphere.io>
> wrote:
> >>
> >> > Hello all,
> >> >
> >> > As a Kubernetes fan, I am excited about this proposal.
> >> > However, I would challenge this community to think more abstractly
> about
> >> > the problem you want to address and any solution requirements before
> >> > discussing implementation details, such as adopting VK.
> >> >
> >> > Don't take me wrong, VK is a great concept: a Kubernetes node that
> >> > delegates container management to someone else.
> >> > But allow me to clarify a few things about it:
> >> >
> >> > - VK simply provides a very limited subset of the kubelet
> functionality,
> >> > namely the Kubernetes node registration and the observation of Pods
> that
> >> > have been assigned to it. It doesn't do pod (intra or inter)
> networking
> >> nor
> >> > delegates to CNI, doesn't do volume mounting, and so on.
> >> > - Like the kubelet, VK doesn't implement scheduling. It also doesn't
> >> > understand anything else than a Pod and its dependencies (e.g.
> >> ConfigMap or
> >> > Secret), meaning other primitives, such as DaemonSet, Deployment,
> >> > StatefulSet, or extensions, such as CRDs are unknown to the VK.
> >> > - While the kubelet manages containers through CRI API (Container
> >> Runtime
> >> > Interface), the VK does it through its own Provider API.
> >> > - kubelet translates from Kubernetes primitives to CRI primitives, so
> >> CRI
> >> > implementations only need to understand CRI. However, the VK does no
> >> > translation and passes Kubernetes primitives directly to a provider,
> >> > requiring the VK provider to understand Kubernetes primitives.
> >> > - kubelet talks to CRI implementations through a gRPC socket. VK talks
> >> to
> >> > providers in-process and is highly-opinionated about the fact a
> provider
> >> > has no lifecycle (there's no _start_ or _stop_, as there would be for
> a
> >> > framework). There are talks about having Provide API over gRPC but
> it's
> >> not
> >> > trivial to decide[2].
> >> >
> >> > Now, if you are still thinking about implementation details, and
> having
> >> > some experience trying to create a VK provider for Mesos[1], I can
> tell
> >> you
> >> > the VK, as is today, is not a seamless fit.
> >> > That said, I am willing to help you figure out the design and pick the
> >> > right pieces to execute, if this is indeed something you want to do.
> >> >
> >> > 1 -
> >> >
> >>
> https://github.com/pires/virtual-kubelet/tree/mesos_integration/providers/mesos
> >> > 2 - https://github.com/virtual-kubelet/virtual-kubelet/issues/160
> >> >
> >> > Cheers,
> >> > Pires
> >> >
> >> > On Wed, Nov 28, 2018 at 5:38 AM Jie Yu <yu...@gmail.com> wrote:
> >> >
> >> >> + user list as well to hear more feedback from Mesos users.
> >> >>
> >> >> I am +1 on this proposal to create a Mesos framework that exposes k8s
> >> >> API, and provide nodeless
> >> >> <
> >>
> https://docs.google.com/document/d/1Y1GEKOIB1u5P06YeQJYl9WVaUqxrq3fO8GZ7K6MUGms/edit
> >> >
> >> >> experience to users.
> >> >>
> >> >> Creating Mesos framework that provides k8s API is not a new idea. For
> >> >> instance, the following are the two prior attempts:
> >> >> 1. https://github.com/kubernetes-retired/kube-mesos-framework
> >> >> 2. https://mesosphere.com/product/kubernetes-engine/
> >> >>
> >> >> Both of the above solutions will run unmodified kubelets for
> workloads
> >> >> (i.e., pods). Some users might prefer that way, and we should not
> >> preclude
> >> >> that on Mesos. However, the reason this nodeless (aka, virtual
> kubelet)
> >> >> idea got me very excited is because it provides us an opportunity to
> >> create
> >> >> a truly integrated solution to bridge k8s and Mesos.
> >> >>
> >> >> K8s gets popular for reasons. IMO, the followings are the key:
> >> >> (1) API machinery. This includes API extension mechanism (CRD
> >> >> <
> >>
> https://docs.google.com/document/d/1Y1GEKOIB1u5P06YeQJYl9WVaUqxrq3fO8GZ7K6MUGms/edit
> >> >),
> >> >> simple-to-program client, versioning, authn/authz, etc.
> >> >> (2) It expose basic scheduling primitives, and let users/vendors
> focus
> >> on
> >> >> orchestration (i.e., Operators). In contrast, Mesos framework is
> >> >> significantly harder to program due to the need for doing scheduling
> >> also.
> >> >> Although we have scheduling libraries like Fenzo
> >> >> <https://github.com/Netflix/Fenzo>, the whole community suffers from
> >> >> fragmentation because there's no "default" solution.
> >> >>
> >> >> ** Why this proposal is more integrated than prior solutions?*
> >> >>
> >> >> This is because prior solutions are more like installer for k8s. You
> >> >> either need to pre-reserve resources
> >> >> <https://mesosphere.com/product/kubernetes-engine/> for kubelet, or
> >> fork
> >> >> k8s scheduler to bring up kubelet on demand
> >> >> <https://github.com/kubernetes-retired/kube-mesos-framework>.
> >> Complexity
> >> >> is definitely a concern since both systems are involved. In contrast,
> >> the
> >> >> proposal propose to run k8s workloads (pods) directly on Mesos by
> >> >> translating pod spec to tasks/executors in Mesos. It's just another
> >> Mesos
> >> >> framework, but you can extend that framework behavior using k8s API
> >> >> extension mechanism (CRD and Operator)!
> >> >>
> >> >> ** Compare to just using k8s?*
> >> >>
> >> >> First of all, IMO, k8s is just an API spec. Any implementation that
> >> >> passes conformance tests is vanilla k8s experience. I understand that
> >> by
> >> >> going nodeless, some of the concepts in k8s no longer applies (e.g.,
> >> >> NodeAffinity, NodeSelector). I am actually less worried about this
> for
> >> two
> >> >> reasons: 1) Big stakeholders are behind nodeless, including
> Microsoft,
> >> AWS,
> >> >> Alicloud, etc; 2) K8s API is evolving, and nodeless has real use
> cases
> >> >> (e.g., in public clouds).
> >> >>
> >> >> In fact, we can also choose to implement those k8s APIs that make the
> >> >> most sense first, and maybe define our own APIs, leveraging the
> >> >> extensibility of the k8s API machinery!
> >> >>
> >> >> If we do want to compare to upstream k8s implementation, i think the
> >> main
> >> >> benefit is that:
> >> >> 1) You can still run your existing custom Mesos frameworks as it is
> >> >> today, but start to provide your users some k8s API experiences
> >> >> 2) Scalability. Mesos is inherently more scalable than k8s because it
> >> >> takes different trade-offs. You can run multiple copies of the same
> >> >> frameworks (similar to marathon on marathon) to reach large scale if
> >> the
> >> >> k8s framework itself cannot scale beyond certain limit.
> >> >>
> >> >> ** Why putting this framework in Mesos repo?*
> >> >>
> >> >> Historically, the problem with Mesos community is fragmentation.
> People
> >> >> create different solutions for the same set of problems. Having a
> >> "blessed"
> >> >> solution in the Mesos repo has the following benefits:
> >> >> 1) License and ownership. It's under Apache already.
> >> >> 2) Attract contributions. Less fragmentation.
> >> >> 3) Established high quality in the repository.
> >> >>
> >> >> **** What's my suggestion for next steps? ****
> >> >>
> >> >> I suggest we create a working group for this. Any other PMC that
> likes
> >> >> this idea, please chime in here.
> >> >>
> >> >> - Jie
> >> >>
> >> >> On Fri, Nov 23, 2018 at 5:24 AM 张冬冬 <me...@icloud.com.invalid>
> >> >> wrote:
> >> >>
> >> >>>
> >> >>>
> >> >>> Sent from my iPhone
> >> >>>
> >> >>> > On Nov 23, 2018, at 20:37, Alex Rukletsov <al...@mesosphere.com> wrote:
> >> >>> >
> >> >>> > I'm in favour of the proposal, Cameron. Building a bridge between
> >> >>> Mesos and
> >> >>> > Kubernetes will be beneficial for both communities. Virtual
> kubelet
> >> >>> effort
> >> >>> > looks promising indeed and is definitely a worthwhile approach to
> >> >>> build the
> >> >>> > bridge.
> >> >>> >
> >> >>> > While we will need some sort of a scheduler when implementing a
> >> >>> provider
> >> >>> > for mesos, we don't need to implement and use a "default" one: a
> >> simple
> >> >>> > mesos-go based scheduler will be fine for the start. We can of
> >> course
> >> >>> > consider building a default scheduler, but this will significantly
> >> >>> increase
> >> >>> > the size of the project.
> >> >>> >
> >> >>> > An exercise we will have to do here is determine which parts of a
> >> >>> > kubernetes task specification can be "converted" and hence
> launched
> >> on
> >> >>> a
> >> >>> > Mesos cluster. Once we have a working prototype we can start
> testing
> >> >>> and
> >> >>> > collecting data.
> >> >>> >
> >> >>> > Do you want to come up with a plan and maybe a more detailed
> >> proposal?
> >> >>> >
> >> >>> > Best,
> >> >>> > Alex
> >> >>>
> >> >>
> >>
> >
> >
> > --
> > BR,
> > Michał Łowicki
> >
> >
> >
>

Re: Propose to create a Kubernetes framework for Mesos

Posted by Jie Yu <yu...@gmail.com>.
I'd like to get some feedback on what Mesos users want. I can potentially
see two major use cases:

(1) I just want k8s to run on Mesos, alongside other Mesos frameworks,
sharing the same resource pool. I don't really care about nodeless.
Ideally, I'd like to run upstream k8s (including the kubelet). The original
k8s-on-Mesos framework has been retired, and the new Mesosphere MKE is not
open source and only runs on Mesosphere DC/OS. I need an open-source
solution here.
(2) I want nodeless because I believe it offers tighter integration with
Mesos, as compared to (1), and can solve the static partitioning issue. (1)
is more like a k8s installer, and you can do that without Mesos.

*Can folks chime in here?*

However, I'm not sure if re-implementing k8s-scheduler as a Mesos framework
> is the right approach. I imagine k8s scheduler is significant piece of
> code  which we need to re-implement and on top of it as new API objects are
> added to k8s API, we need to keep pace with k8s scheduler for parity. The
> approach we (in the community) took with Spark (and Jenkins to some extent)
> was for the scheduling innovation happen in Spark community and we just let
> Spark launch spark executors via Mesos and let Spark launch its tasks out
> of band of Mesos. We used to have a version of Spark framework (fine
> grained mode?) where spark tasks were launched via Mesos offers but that
> was deprecated, partly because of maintainability. Will this k8s framework
> have similar problem? Sounds like one of the problems with the existing k8s
> framework implementations it the pre-launching of kubelets; can we use the
> k8s autoscaler to solve that problem?


This is a fair concern. The k8s scheduler is around 17k lines of code:

Jies-MacBook-Pro:scheduler jie$ pwd
/Users/jie/workspace/kubernetes/pkg/scheduler
Jies-MacBook-Pro:scheduler jie$ loc --exclude .*_test.go
--------------------------------------------------------------------------------
 Language             Files        Lines        Blank      Comment         Code
--------------------------------------------------------------------------------
 Go                      83        17429         2165         3798        11466
--------------------------------------------------------------------------------
 Total                   83        17429         2165         3798        11466
--------------------------------------------------------------------------------

Also, I think (I might be wrong) most k8s users are not directly creating
> pods via the API but rather using higher level abstractions like replica
> sets, stateful sets, daemon sets etc. How will that fit into this
> architecture? Will the framework need to re-implement those controllers as
> well?


The framework will not need to re-implement those controllers: you can
re-use most of them as they are. The controllers will create pods, as you
said, and the Mesos framework will be responsible for scheduling the pods
they create.
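
To make that division of labor concrete, here is a minimal, hedged sketch of
the idea in Go. It assumes k8s.io/client-go is available, and launchOnMesos
is a hypothetical helper standing in for the framework's real scheduling path
(translating the PodSpec into Mesos tasks/executors and accepting a matching
offer). It is an illustration of the split, not an implementation.

// Hedged sketch, not a working framework. Assumes k8s.io/client-go is
// vendored; launchOnMesos is hypothetical and stands in for the pod-spec
// to TaskInfo translation plus offer acceptance.
package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Pods created by Deployments, StatefulSets, DaemonSets, etc. show up
    // here with an empty spec.nodeName until something schedules them.
    pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
        metav1.ListOptions{FieldSelector: "spec.nodeName="})
    if err != nil {
        panic(err)
    }
    for _, pod := range pods.Items {
        launchOnMesos(pod)
    }
}

// launchOnMesos is a placeholder for the framework's scheduling path.
func launchOnMesos(pod corev1.Pod) {
    fmt.Printf("would launch pod %s/%s as a Mesos task\n", pod.Namespace, pod.Name)
}

In practice this would be a watch/informer loop plus offer matching rather
than a one-shot list, but the split stays the same: the upstream controllers
own pod creation, and the framework owns placement.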

- Jie

On Mon, Dec 3, 2018 at 9:56 AM Cecile, Adam <Ad...@hitec.lu> wrote:

> On 12/3/18 5:40 PM, Michał Łowicki wrote:
>
>
>
> On Thu, Nov 29, 2018 at 1:22 AM Vinod Kone <vi...@apache.org> wrote:
>
>> Cameron and Michal: I would love to understand your motivations and use
>> cases for a k8s Mesos framework in a bit more detail. Looks like you are
>> willing to rewrite your existing app definitions into k8s API spec. At
>> this
>> point, why are you still interested in Mesos as a CAAS backend? Is it
>> because of scalability / reliability? Or is it because you still want to
>> run non-k8s workloads/frameworks in this world? What are these workloads?
>>
>
> Mesos with its scalability and ability to run many frameworks (like
> cron-like jobs, spark, proprietary) gives more flexibility in the long run.
> Right now we're at the stage where Marathon UI in public version isn't
> maintained so looking to have something with better community support.
> Having entity like k8s-compliant scheduler maybe could help with adopting
> other community-driven solutions but I also think that going into that
> direction should be well thought and planned process.
>
> We're sharing the exact same feeling. My next project will probably go
> full k8s because I don't feel confident in Mesos' future as an open-source
> project.
>
> Marathon UI still not supporting GPUs (even in JSON mode, thanks to
> marshaling) is the tip of the iceberg. I reported the issue ages ago and I
> can understand nobody cares because DC/OS comes with a different
> (closed-source I bet) UI.
>
>
>
>>
>> In general, I'm in favor of Mesos coming shipped with a default scheduler.
>> I think it might help with the adoption similar to what happened with the
>> command/default executor. In hindsight, we should've done this a long time
>> ago. But, oh well, we were too optimistic that a single "default"
>> scheduler
>> will rule in the ecosystem which didn't quite pan out.
>>
>> However, I'm not sure if re-implementing k8s-scheduler as a Mesos
>> framework
>> is the right approach. I imagine k8s scheduler is significant piece of
>> code  which we need to re-implement and on top of it as new API objects
>> are
>> added to k8s API, we need to keep pace with k8s scheduler for parity. The
>> approach we (in the community) took with Spark (and Jenkins to some
>> extent)
>> was for the scheduling innovation happen in Spark community and we just
>> let
>> Spark launch spark executors via Mesos and let Spark launch its tasks out
>> of band of Mesos. We used to have a version of Spark framework (fine
>> grained mode?) where spark tasks were launched via Mesos offers but that
>> was deprecated, partly because of maintainability. Will this k8s framework
>> have similar problem? Sounds like one of the problems with the existing
>> k8s
>> framework implementations it the pre-launching of kubelets; can we use the
>> k8s autoscaler to solve that problem?
>>
>> Also, I think (I might be wrong) most k8s users are not directly creating
>> pods via the API but rather using higher level abstractions like replica
>> sets, stateful sets, daemon sets etc. How will that fit into this
>> architecture? Will the framework need to re-implement those controllers as
>> well?
>>
>> Is there an integration point in k8s ecosystem where we can reuse the
>> existing k8s schedulers and controllers but run the pods with mesos
>> container runtime?
>>
>> All, in all, I'm +1 to explore the ideas in a WG.
>>
>>
>> On Wed, Nov 28, 2018 at 2:05 PM Paulo Pires <pi...@mesosphere.io> wrote:
>>
>> > Hello all,
>> >
>> > As a Kubernetes fan, I am excited about this proposal.
>> > However, I would challenge this community to think more abstractly about
>> > the problem you want to address and any solution requirements before
>> > discussing implementation details, such as adopting VK.
>> >
>> > Don't take me wrong, VK is a great concept: a Kubernetes node that
>> > delegates container management to someone else.
>> > But allow me to clarify a few things about it:
>> >
>> > - VK simply provides a very limited subset of the kubelet functionality,
>> > namely the Kubernetes node registration and the observation of Pods that
>> > have been assigned to it. It doesn't do pod (intra or inter) networking
>> nor
>> > delegates to CNI, doesn't do volume mounting, and so on.
>> > - Like the kubelet, VK doesn't implement scheduling. It also doesn't
>> > understand anything else than a Pod and its dependencies (e.g.
>> ConfigMap or
>> > Secret), meaning other primitives, such as DaemonSet, Deployment,
>> > StatefulSet, or extensions, such as CRDs are unknown to the VK.
>> > - While the kubelet manages containers through CRI API (Container
>> Runtime
>> > Interface), the VK does it through its own Provider API.
>> > - kubelet translates from Kubernetes primitives to CRI primitives, so
>> CRI
>> > implementations only need to understand CRI. However, the VK does no
>> > translation and passes Kubernetes primitives directly to a provider,
>> > requiring the VK provider to understand Kubernetes primitives.
>> > - kubelet talks to CRI implementations through a gRPC socket. VK talks
>> to
>> > providers in-process and is highly-opinionated about the fact a provider
>> > has no lifecycle (there's no _start_ or _stop_, as there would be for a
>> > framework). There are talks about having Provide API over gRPC but it's
>> not
>> > trivial to decide[2].
>> >
>> > Now, if you are still thinking about implementation details, and having
>> > some experience trying to create a VK provider for Mesos[1], I can tell
>> you
>> > the VK, as is today, is not a seamless fit.
>> > That said, I am willing to help you figure out the design and pick the
>> > right pieces to execute, if this is indeed something you want to do.
>> >
>> > 1 -
>> >
>> https://github.com/pires/virtual-kubelet/tree/mesos_integration/providers/mesos
>> > 2 - https://github.com/virtual-kubelet/virtual-kubelet/issues/160
>> >
>> > Cheers,
>> > Pires
>> >
>> > On Wed, Nov 28, 2018 at 5:38 AM Jie Yu <yu...@gmail.com> wrote:
>> >
>> >> + user list as well to hear more feedback from Mesos users.
>> >>
>> >> I am +1 on this proposal to create a Mesos framework that exposes k8s
>> >> API, and provide nodeless
>> >> <
>> https://docs.google.com/document/d/1Y1GEKOIB1u5P06YeQJYl9WVaUqxrq3fO8GZ7K6MUGms/edit
>> >
>> >> experience to users.
>> >>
>> >> Creating Mesos framework that provides k8s API is not a new idea. For
>> >> instance, the following are the two prior attempts:
>> >> 1. https://github.com/kubernetes-retired/kube-mesos-framework
>> >> 2. https://mesosphere.com/product/kubernetes-engine/
>> >>
>> >> Both of the above solutions will run unmodified kubelets for workloads
>> >> (i.e., pods). Some users might prefer that way, and we should not
>> preclude
>> >> that on Mesos. However, the reason this nodeless (aka, virtual kubelet)
>> >> idea got me very excited is because it provides us an opportunity to
>> create
>> >> a truly integrated solution to bridge k8s and Mesos.
>> >>
>> >> K8s gets popular for reasons. IMO, the followings are the key:
>> >> (1) API machinery. This includes API extension mechanism (CRD
>> >> <
>> https://docs.google.com/document/d/1Y1GEKOIB1u5P06YeQJYl9WVaUqxrq3fO8GZ7K6MUGms/edit
>> >),
>> >> simple-to-program client, versioning, authn/authz, etc.
>> >> (2) It expose basic scheduling primitives, and let users/vendors focus
>> on
>> >> orchestration (i.e., Operators). In contrast, Mesos framework is
>> >> significantly harder to program due to the need for doing scheduling
>> also.
>> >> Although we have scheduling libraries like Fenzo
>> >> <https://github.com/Netflix/Fenzo>, the whole community suffers from
>> >> fragmentation because there's no "default" solution.
>> >>
>> >> ** Why this proposal is more integrated than prior solutions?*
>> >>
>> >> This is because prior solutions are more like installer for k8s. You
>> >> either need to pre-reserve resources
>> >> <https://mesosphere.com/product/kubernetes-engine/> for kubelet, or
>> fork
>> >> k8s scheduler to bring up kubelet on demand
>> >> <https://github.com/kubernetes-retired/kube-mesos-framework>.
>> Complexity
>> >> is definitely a concern since both systems are involved. In contrast,
>> the
>> >> proposal propose to run k8s workloads (pods) directly on Mesos by
>> >> translating pod spec to tasks/executors in Mesos. It's just another
>> Mesos
>> >> framework, but you can extend that framework behavior using k8s API
>> >> extension mechanism (CRD and Operator)!
>> >>
>> >> ** Compare to just using k8s?*
>> >>
>> >> First of all, IMO, k8s is just an API spec. Any implementation that
>> >> passes conformance tests is vanilla k8s experience. I understand that
>> by
>> >> going nodeless, some of the concepts in k8s no longer applies (e.g.,
>> >> NodeAffinity, NodeSelector). I am actually less worried about this for
>> two
>> >> reasons: 1) Big stakeholders are behind nodeless, including Microsoft,
>> AWS,
>> >> Alicloud, etc; 2) K8s API is evolving, and nodeless has real use cases
>> >> (e.g., in public clouds).
>> >>
>> >> In fact, we can also choose to implement those k8s APIs that make the
>> >> most sense first, and maybe define our own APIs, leveraging the
>> >> extensibility of the k8s API machinery!
>> >>
>> >> If we do want to compare to upstream k8s implementation, i think the
>> main
>> >> benefit is that:
>> >> 1) You can still run your existing custom Mesos frameworks as it is
>> >> today, but start to provide your users some k8s API experiences
>> >> 2) Scalability. Mesos is inherently more scalable than k8s because it
>> >> takes different trade-offs. You can run multiple copies of the same
>> >> frameworks (similar to marathon on marathon) to reach large scale if
>> the
>> >> k8s framework itself cannot scale beyond certain limit.
>> >>
>> >> ** Why putting this framework in Mesos repo?*
>> >>
>> >> Historically, the problem with Mesos community is fragmentation. People
>> >> create different solutions for the same set of problems. Having a
>> "blessed"
>> >> solution in the Mesos repo has the following benefits:
>> >> 1) License and ownership. It's under Apache already.
>> >> 2) Attract contributions. Less fragmentation.
>> >> 3) Established high quality in the repository.
>> >>
>> >> **** What's my suggestion for next steps? ****
>> >>
>> >> I suggest we create a working group for this. Any other PMC that likes
>> >> this idea, please chime in here.
>> >>
>> >> - Jie
>> >>
>> >> On Fri, Nov 23, 2018 at 5:24 AM 张冬冬 <me...@icloud.com.invalid>
>> >> wrote:
>> >>
>> >>>
>> >>>
>> >>> Sent from my iPhone
>> >>>
>> >>> > On Nov 23, 2018, at 20:37, Alex Rukletsov <al...@mesosphere.com> wrote:
>> >>> >
>> >>> > I'm in favour of the proposal, Cameron. Building a bridge between
>> >>> Mesos and
>> >>> > Kubernetes will be beneficial for both communities. Virtual kubelet
>> >>> effort
>> >>> > looks promising indeed and is definitely a worthwhile approach to
>> >>> build the
>> >>> > bridge.
>> >>> >
>> >>> > While we will need some sort of a scheduler when implementing a
>> >>> provider
>> >>> > for mesos, we don't need to implement and use a "default" one: a
>> simple
>> >>> > mesos-go based scheduler will be fine for the start. We can of
>> course
>> >>> > consider building a default scheduler, but this will significantly
>> >>> increase
>> >>> > the size of the project.
>> >>> >
>> >>> > An exercise we will have to do here is determine which parts of a
>> >>> > kubernetes task specification can be "converted" and hence launched
>> on
>> >>> a
>> >>> > Mesos cluster. Once we have a working prototype we can start testing
>> >>> and
>> >>> > collecting data.
>> >>> >
>> >>> > Do you want to come up with a plan and maybe a more detailed
>> proposal?
>> >>> >
>> >>> > Best,
>> >>> > Alex
>> >>>
>> >>
>>
>
>
> --
> BR,
> Michał Łowicki
>
>
>

Re: Propose to create a Kubernetes framework for Mesos

Posted by "Cecile, Adam" <Ad...@hitec.lu>.
On 12/3/18 5:40 PM, Michał Łowicki wrote:


On Thu, Nov 29, 2018 at 1:22 AM Vinod Kone <vi...@apache.org> wrote:
Cameron and Michal: I would love to understand your motivations and use
cases for a k8s Mesos framework in a bit more detail. Looks like you are
willing to rewrite your existing app definitions into k8s API spec. At this
point, why are you still interested in Mesos as a CAAS backend? Is it
because of scalability / reliability? Or is it because you still want to
run non-k8s workloads/frameworks in this world? What are these workloads?

Mesos with its scalability and ability to run many frameworks (like cron-like jobs, spark, proprietary) gives more flexibility in the long run.
Right now we're at the stage where Marathon UI in public version isn't maintained so looking to have something with better community support.
Having entity like k8s-compliant scheduler maybe could help with adopting other community-driven solutions but I also think that going into that direction should be well thought and planned process.

We're sharing the exact same feeling. My next project will probably go full k8s because I don't feel confident in Mesos' future as an open-source project.

Marathon UI still not supporting GPUs (even in JSON mode, thanks to marshaling) is the tip of the iceberg. I reported the issue ages ago and I can understand nobody cares because DC/OS comes with a different (closed-source I bet) UI.



In general, I'm in favor of Mesos coming shipped with a default scheduler.
I think it might help with the adoption similar to what happened with the
command/default executor. In hindsight, we should've done this a long time
ago. But, oh well, we were too optimistic that a single "default" scheduler
will rule in the ecosystem which didn't quite pan out.

However, I'm not sure if re-implementing k8s-scheduler as a Mesos framework
is the right approach. I imagine k8s scheduler is significant piece of
code  which we need to re-implement and on top of it as new API objects are
added to k8s API, we need to keep pace with k8s scheduler for parity. The
approach we (in the community) took with Spark (and Jenkins to some extent)
was for the scheduling innovation happen in Spark community and we just let
Spark launch spark executors via Mesos and let Spark launch its tasks out
of band of Mesos. We used to have a version of Spark framework (fine
grained mode?) where spark tasks were launched via Mesos offers but that
was deprecated, partly because of maintainability. Will this k8s framework
have similar problem? Sounds like one of the problems with the existing k8s
framework implementations it the pre-launching of kubelets; can we use the
k8s autoscaler to solve that problem?

Also, I think (I might be wrong) most k8s users are not directly creating
pods via the API but rather using higher level abstractions like replica
sets, stateful sets, daemon sets etc. How will that fit into this
architecture? Will the framework need to re-implement those controllers as
well?

Is there an integration point in k8s ecosystem where we can reuse the
existing k8s schedulers and controllers but run the pods with mesos
container runtime?

All, in all, I'm +1 to explore the ideas in a WG.


On Wed, Nov 28, 2018 at 2:05 PM Paulo Pires <pi...@mesosphere.io> wrote:

> Hello all,
>
> As a Kubernetes fan, I am excited about this proposal.
> However, I would challenge this community to think more abstractly about
> the problem you want to address and any solution requirements before
> discussing implementation details, such as adopting VK.
>
> Don't take me wrong, VK is a great concept: a Kubernetes node that
> delegates container management to someone else.
> But allow me to clarify a few things about it:
>
> - VK simply provides a very limited subset of the kubelet functionality,
> namely the Kubernetes node registration and the observation of Pods that
> have been assigned to it. It doesn't do pod (intra or inter) networking nor
> delegates to CNI, doesn't do volume mounting, and so on.
> - Like the kubelet, VK doesn't implement scheduling. It also doesn't
> understand anything else than a Pod and its dependencies (e.g. ConfigMap or
> Secret), meaning other primitives, such as DaemonSet, Deployment,
> StatefulSet, or extensions, such as CRDs are unknown to the VK.
> - While the kubelet manages containers through CRI API (Container Runtime
> Interface), the VK does it through its own Provider API.
> - kubelet translates from Kubernetes primitives to CRI primitives, so CRI
> implementations only need to understand CRI. However, the VK does no
> translation and passes Kubernetes primitives directly to a provider,
> requiring the VK provider to understand Kubernetes primitives.
> - kubelet talks to CRI implementations through a gRPC socket. VK talks to
> providers in-process and is highly-opinionated about the fact a provider
> has no lifecycle (there's no _start_ or _stop_, as there would be for a
> framework). There are talks about having Provide API over gRPC but it's not
> trivial to decide[2].
>
> Now, if you are still thinking about implementation details, and having
> some experience trying to create a VK provider for Mesos[1], I can tell you
> the VK, as is today, is not a seamless fit.
> That said, I am willing to help you figure out the design and pick the
> right pieces to execute, if this is indeed something you want to do.
>
> 1 -
> https://github.com/pires/virtual-kubelet/tree/mesos_integration/providers/mesos
> 2 - https://github.com/virtual-kubelet/virtual-kubelet/issues/160
>
> Cheers,
> Pires
>
>> On Wed, Nov 28, 2018 at 5:38 AM Jie Yu <yu...@gmail.com> wrote:
>
>> + user list as well to hear more feedback from Mesos users.
>>
>> I am +1 on this proposal to create a Mesos framework that exposes k8s
>> API, and provide nodeless
>> <https://docs.google.com/document/d/1Y1GEKOIB1u5P06YeQJYl9WVaUqxrq3fO8GZ7K6MUGms/edit>
>> experience to users.
>>
>> Creating Mesos framework that provides k8s API is not a new idea. For
>> instance, the following are the two prior attempts:
>> 1. https://github.com/kubernetes-retired/kube-mesos-framework
>> 2. https://mesosphere.com/product/kubernetes-engine/
>>
>> Both of the above solutions will run unmodified kubelets for workloads
>> (i.e., pods). Some users might prefer that way, and we should not preclude
>> that on Mesos. However, the reason this nodeless (aka, virtual kubelet)
>> idea got me very excited is because it provides us an opportunity to create
>> a truly integrated solution to bridge k8s and Mesos.
>>
>> K8s gets popular for reasons. IMO, the followings are the key:
>> (1) API machinery. This includes API extension mechanism (CRD
>> <https://docs.google.com/document/d/1Y1GEKOIB1u5P06YeQJYl9WVaUqxrq3fO8GZ7K6MUGms/edit>),
>> simple-to-program client, versioning, authn/authz, etc.
>> (2) It expose basic scheduling primitives, and let users/vendors focus on
>> orchestration (i.e., Operators). In contrast, Mesos framework is
>> significantly harder to program due to the need for doing scheduling also.
>> Although we have scheduling libraries like Fenzo
>> <https://github.com/Netflix/Fenzo>, the whole community suffers from
>> fragmentation because there's no "default" solution.
>>
>> ** Why this proposal is more integrated than prior solutions?*
>>
>> This is because prior solutions are more like installer for k8s. You
>> either need to pre-reserve resources
>> <https://mesosphere.com/product/kubernetes-engine/> for kubelet, or fork
>> k8s scheduler to bring up kubelet on demand
>> <https://github.com/kubernetes-retired/kube-mesos-framework>. Complexity
>> is definitely a concern since both systems are involved. In contrast, the
>> proposal propose to run k8s workloads (pods) directly on Mesos by
>> translating pod spec to tasks/executors in Mesos. It's just another Mesos
>> framework, but you can extend that framework behavior using k8s API
>> extension mechanism (CRD and Operator)!
>>
>> ** Compare to just using k8s?*
>>
>> First of all, IMO, k8s is just an API spec. Any implementation that
>> passes conformance tests is vanilla k8s experience. I understand that by
>> going nodeless, some of the concepts in k8s no longer applies (e.g.,
>> NodeAffinity, NodeSelector). I am actually less worried about this for two
>> reasons: 1) Big stakeholders are behind nodeless, including Microsoft, AWS,
>> Alicloud, etc; 2) K8s API is evolving, and nodeless has real use cases
>> (e.g., in public clouds).
>>
>> In fact, we can also choose to implement those k8s APIs that make the
>> most sense first, and maybe define our own APIs, leveraging the
>> extensibility of the k8s API machinery!
>>
>> If we do want to compare to upstream k8s implementation, i think the main
>> benefit is that:
>> 1) You can still run your existing custom Mesos frameworks as it is
>> today, but start to provide your users some k8s API experiences
>> 2) Scalability. Mesos is inherently more scalable than k8s because it
>> takes different trade-offs. You can run multiple copies of the same
>> frameworks (similar to marathon on marathon) to reach large scale if the
>> k8s framework itself cannot scale beyond certain limit.
>>
>> ** Why putting this framework in Mesos repo?*
>>
>> Historically, the problem with Mesos community is fragmentation. People
>> create different solutions for the same set of problems. Having a "blessed"
>> solution in the Mesos repo has the following benefits:
>> 1) License and ownership. It's under Apache already.
>> 2) Attract contributions. Less fragmentation.
>> 3) Established high quality in the repository.
>>
>> **** What's my suggestion for next steps? ****
>>
>> I suggest we create a working group for this. Any other PMC that likes
>> this idea, please chime in here.
>>
>> - Jie
>>
>> On Fri, Nov 23, 2018 at 5:24 AM 张冬冬 <me...@icloud.com.invalid>
>> wrote:
>>
>>>
>>>
>>> Sent from my iPhone
>>>
>>> > On Nov 23, 2018, at 20:37, Alex Rukletsov <al...@mesosphere.com> wrote:
>>> >
>>> > I'm in favour of the proposal, Cameron. Building a bridge between
>>> Mesos and
>>> > Kubernetes will be beneficial for both communities. Virtual kubelet
>>> effort
>>> > looks promising indeed and is definitely a worthwhile approach to
>>> build the
>>> > bridge.
>>> >
>>> > While we will need some sort of a scheduler when implementing a
>>> provider
>>> > for mesos, we don't need to implement and use a "default" one: a simple
>>> > mesos-go based scheduler will be fine for the start. We can of course
>>> > consider building a default scheduler, but this will significantly
>>> increase
>>> > the size of the project.
>>> >
>>> > An exercise we will have to do here is determine which parts of a
>>> > kubernetes task specification can be "converted" and hence launched on
>>> a
>>> > Mesos cluster. Once we have a working prototype we can start testing
>>> and
>>> > collecting data.
>>> >
>>> > Do you want to come up with a plan and maybe a more detailed proposal?
>>> >
>>> > Best,
>>> > Alex
>>>
>>


--
BR,
Michał Łowicki