Posted to dev@zookeeper.apache.org by Camille Fournier <ca...@apache.org> on 2016/08/17 17:56:48 UTC

Re: ZooKeeper release validation cluster access

An update:

I have the OK to get access to this cluster for the ZK project. I've put in
the application and will start figuring out next steps once it's approved.

On Tue, Jul 12, 2016 at 12:43 AM, Raúl Gutiérrez Segalés <rgs@itevenworks.net> wrote:

> A bit late, but fwiw this was the 2015 flavor of my (personal) validation
> process:
>
> http://itevenworks.net/zk-releases
>
> -rgs
> On Jul 8, 2016 11:18 AM, "Michael Han" <ha...@cloudera.com> wrote:
>
> > Sure, I will post an early version next week.
> >
> > > On Fri, Jul 8, 2016 at 10:55 AM, Camille Fournier <ca...@apache.org> wrote:
> >
> > > That's great, Michael - do you think you could share any of your work
> > > on GitHub to get us started?
> > > On Jul 8, 2016 1:41 PM, "Michael Han" <ha...@cloudera.com> wrote:
> > >
> > > > Regarding ZK deployment automation, I am building automation that is
> > > > very similar to the requirement here - the motivation is to save me
> > > > some time on validation / regression testing of ZOOKEEPER-1045, and I
> > > > could see the same automation being used to validate a release as
> > > > well. It might not be ready for this release given the limited time I
> > > > have, but I'll share it once it's done. It's based on Ansible [1], and
> > > > here is what it can do:
> > > > - Provision a cluster on AWS with a specified topology, given a
> > > > subscription credential. Supporting provisioning on GCP or Azure
> > > > should be possible as well. This step is optional, as the cluster can
> > > > be provisioned separately or using a different approach (e.g. a Docker
> > > > cluster instead of VMs); if the cluster is already provisioned, the
> > > > list of IPs is passed to the tool.
> > > > - Install a pre-configured ZK cluster on all nodes, including all
> > > > dependencies.
> > > > - Support easy updates / state changes of the cluster, both at the
> > > > cluster level (e.g. kill nodes) and at the ZK level (e.g. update
> > > > zoo.cfg and restart ZK).
> > > >
> > > > We do have some tools internally to provision and deploy ZK clusters,
> > > > but those tools have dependencies that can't be made public; they are
> > > > also tied to a specific version of ZK and are not exactly what I want
> > > > (for 1045 validation at least).
> > > >
> > > > [1] http://docs.ansible.com/ansible/index.html
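The "install a pre-configured ZK cluster" step largely comes down to rendering a zoo.cfg with one server.N line per node (plus a matching myid file on each host). A minimal sketch of that rendering step in Python - the helper name and the defaults are made up for illustration, while the config keys and the 2888/3888 quorum/election ports are ZooKeeper's standard ones:

```python
def make_zoo_cfg(ips, data_dir="/var/lib/zookeeper", client_port=2181):
    """Render a zoo.cfg for an ensemble given the node IPs.

    Uses standard ZooKeeper settings; the server.N lines use the
    default quorum (2888) and leader-election (3888) ports.
    """
    lines = [
        "tickTime=2000",
        "initLimit=10",
        "syncLimit=5",
        f"dataDir={data_dir}",
        f"clientPort={client_port}",
    ]
    # Server IDs start at 1 and must match each node's myid file.
    for i, ip in enumerate(ips, start=1):
        lines.append(f"server.{i}={ip}:2888:3888")
    return "\n".join(lines) + "\n"

print(make_zoo_cfg(["10.0.0.1", "10.0.0.2", "10.0.0.3"]))
```

A deployment tool would write this file to each node and drop the node's index into its myid file, which is essentially what the Ansible role described above automates.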
> > > >
> > > >
> > > > On Fri, Jul 8, 2016 at 6:58 AM, Flavio Junqueira <fp...@apache.org> wrote:
> > > >
> > > > > We could consider using vagrant and something like ducktape (for
> > > > > Kafka):
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/tutorial+-+set+up+and+run+Kafka+system+tests+with+ducktape
> > > > >
> > > > > As for resources, ASF committers can get an MSDN subscription, and
> > > > > we could use some Azure resources for tests. I actually recently
> > > > > shared an Azure VM to show Michael a test failure in that setting.
> > > > > The subscription doesn't give access to a lot of resources, as the
> > > > > monthly credit is small, but it should be good enough for some
> > > > > distributed tests:
> > > > >
> > > > > http://mail-archives.apache.org/mod_mbox/www-community/201402.mbox/%3CCAB56zCWvv5wMh99XaRULdi2fVxMUJ6R1AXE66uL-aFpo3CV+Gg@mail.gmail.com%3E
> > > > >
> > > > > -Flavio
> > > > >
> > > > > > On 08 Jul 2016, at 14:41, Camille Fournier <ca...@apache.org> wrote:
> > > > > >
> > > > > > These are all good questions.
> > > > > >
> > > > > > My ultimate hope here is that we can actually get something set
> > > > > > up so that for each release we want to do, we can easily run a
> > > > > > standard smoke test on this cluster, and potentially leave the
> > > > > > cluster available for a few days of additional testing by the
> > > > > > community, to help those of us who do not have an easy way to spin
> > > > > > up a ZK cluster at our companies feel good about vetting a
> > > > > > release.
> > > > > >
> > > > > > I think a good first step would be to create something that can
> > > > > > actually deploy a configured ZK on top of this cluster, and then
> > > > > > deploy some sort of basic test script on the machines that
> > > > > > executes, e.g., pat's smoke test as Flavio pointed out. I will
> > > > > > admit to being a bit perplexed as to what the state of the nodes
> > > > > > of this cluster looks like when you get access to them, and what
> > > > > > needs to be installed on top of them to actually do this sort of
> > > > > > automation. It looks like you can provision the nodes with an OS
> > > > > > on them, but beyond that it looks like we may need to create the
> > > > > > automation to install Java first, then ZK. There are some very
> > > > > > bare-bones details here:
> > > > > >
> > > > > > https://github.com/cncf/cluster
> > > > > >
> > > > > > *For those of you who have internal processes for vetting ZK, if
> > > > > > any of you have automation for spinning up new ZK servers in a
> > > > > > cloud env, hopefully we can use that to start. Does anyone have
> > > > > > that?* I think that's where we need to begin. We can then move on
> > > > > > to what a good smoke test might look like.
> > > > > >
> > > > > > Thanks,
> > > > > > C
> > > > > >
> > > > > >
> > > > > > On Thu, Jul 7, 2016 at 4:48 PM, Michael Han <ha...@cloudera.com> wrote:
> > > > > >
> > > > > >> I'll help validate the release on our internal cluster using
> > > > > >> zk-smoke and manual testing. Do we have a standard protocol on
> > > > > >> what should be validated and how? Also, do we perform integration
> > > > > >> tests (e.g. with HBase / Kafka) as part of release validation?
> > > > > >>
> > > > > >>
> > > > > >> On Thu, Jul 7, 2016 at 12:25 PM, Flavio P JUNQUEIRA <fpj@apache.org> wrote:
> > > > > >>
> > > > > >>> And thanks for doing this, Camille, this is great.
> > > > > >>>
> > > > > >>> -Flavio
> > > > > >>> On 7 Jul 2016 8:25 p.m., "Flavio P JUNQUEIRA" <fp...@apache.org> wrote:
> > > > > >>>
> > > > > >>>> Perhaps start from phunt's smoke test?
> > > > > >>>>
> > > > > >>>> https://github.com/phunt/zk-smoketest
> > > > > >>>>
> > > > > >>>> -Flavio
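As a complement to a full smoke-test harness, each node can be sanity-checked with ZooKeeper's built-in four-letter-word commands over a plain socket: a healthy server answers "ruok" with "imok". A minimal sketch in Python - the function names are made up for illustration, while the ruok/imok protocol and the default client port 2181 are ZooKeeper's own:

```python
import socket

def four_letter_word(host, port, cmd=b"ruok", timeout=5.0):
    """Send a ZooKeeper four-letter-word command and return the raw reply."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(cmd)
        # Signal end-of-request; the server replies and closes.
        sock.shutdown(socket.SHUT_WR)
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def is_ok(reply):
    """A healthy server answers 'ruok' with exactly 'imok'."""
    return reply == b"imok"
```

For a whole ensemble, a validation script would loop over the node IPs and check is_ok(four_letter_word(ip, 2181)) for each before running the heavier smoke tests.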
> > > > > >>>> On 7 Jul 2016 7:04 p.m., "Camille Fournier" <camille@apache.org> wrote:
> > > > > >>>>
> > > > > >>>>> So I'm working with the CNCF to see about getting cluster
> > > > > >>>>> access to spin up ZK clusters for release validation. I was
> > > > > >>>>> wondering if anyone has scripts for deploying and configuring
> > > > > >>>>> ZK clusters quickly for stress testing that we could use if we
> > > > > >>>>> get this access? I'm sure some of you in the community must do
> > > > > >>>>> some of this internally.
> > > > > >>>>>
> > > > > >>>>> I would also very much appreciate any volunteers to contribute
> > > > > >>>>> to making public release validation work better, if possible,
> > > > > >>>>> since it's unclear how much time I personally will be able to
> > > > > >>>>> dedicate to this effort beyond getting access to the cluster.
> > > > > >>>>>
> > > > > >>>>> Thanks,
> > > > > >>>>> C
> > > > > >>>>>
> > > > > >>>>
> > > > > >>>
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> --
> > > > > >> Cheers
> > > > > >> Michael.
> > > > > >>
> > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Cheers
> > > > Michael.
> > > >
> > >
> >
> >
> >
> > --
> > Cheers
> > Michael.
> >
>