Posted to dev@bloodhound.apache.org by John Chambers <ch...@apache.org> on 2017/12/13 22:36:20 UTC

Re: Public Bloodhound is down again

As prompted (Thanks Gary 😉 ) a quick update on my progress with getting
the public instance back online.

I have just committed my code changes to the Vagrant and salt files to
enable us to provision an instance of 0.8 and I have created a backup of
the live tickets and wiki for 0.8.
I am not sure whether I should post that backup anywhere. If you have any
ideas, let me know. I can also provide the steps to restore from the backup
if anyone is interested in trying these changes out locally. Just let me
know.
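For anyone trying the restore locally, a rough pre-restore sanity check might look like the sketch below. The expected layout (conf/trac.ini, db/trac.db, attachments/) is an assumption based on a standard Trac-style environment, not a description of our actual backup, so adjust the paths to whatever the Vagrant setup really produces:

```python
# Sketch: sanity-check a backup directory before attempting a restore.
# The layout below is an assumption based on a standard Trac environment;
# adjust EXPECTED to match the real backup contents.
from pathlib import Path

EXPECTED = ["conf/trac.ini", "db/trac.db", "attachments"]

def missing_pieces(backup_dir):
    """Return the expected files/dirs that are absent from the backup."""
    backup = Path(backup_dir)
    return [p for p in EXPECTED if not (backup / p).exists()]
```

An empty result means the backup at least has the expected shape before you point trac-admin at it.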

The next step I think is to contact INFRA to see what is actually required
to provision the new VM. We have salt code currently but I am thinking that
INFRA may require puppet.

The other outstanding question is what to do about the users and
permissions. I have the original htdigest file and the permissions set in
the database, but I am wondering if we should just reset the users and
permissions and start again. Maybe we can discuss this once the live
instance has been sorted.
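For what it's worth, if we do decide to reset the users rather than restore the old htdigest file, regenerating entries is simple: the htdigest format is just user:realm:MD5(user:realm:password). A minimal sketch (the realm name below is only a placeholder; the live realm is whatever the Apache config uses):

```python
import hashlib

def htdigest_line(user, realm, password):
    """Build one line of an Apache htdigest file:
    user:realm:md5(user:realm:password)."""
    digest = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
    return f"{user}:{realm}:{digest}"

# Placeholder credentials for illustration only.
print(htdigest_line("alice", "Bloodhound", "secret"))
```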

Cheers,
  John.

On 14 November 2017 at 19:05, Gary <ga...@physics.org> wrote:

> That is an interesting question.
>
> One thing we should really be considering is whether we can share a
> common set of users with the apache instance of jira, through ldap or
> however it works. It may introduce a small barrier to access to ask
> people to register through jira, but it might be nice to relieve
> ourselves of the responsibility of working out who the real users are.
>
> I expect we should be able to work with our own permissions against an
> external ldap, for instance. I have not yet tried such a setup of
> course. This may be something that is delayed for further down the line
> if it would delay recovery too much. It may be that we can use multiple
> sources for users too. Possibly worth checking.
>
> Anyway, the immediate need is to get anyone who needs access signed up,
> so the questions for others here to consider are:
>
>  * would those who already have accounts mind if they needed to sign up
>  again?
>  * can we have access to register accounts opened for now or do we want
>  spam control in place from the start?
>  * would a longer term plan to have accounts 'shared' with another
>  apache issue tracker instance bother you?
>
> Cheers,
>   Gary
>
> On Mon, 13 Nov 2017, at 12:56 PM, John Chambers wrote:
> > Thanks Gary. I will take a look at the backup you provided later today.
> > Hopefully as you say it will make the restore process much easier.
> >
> > I think once I have the restore process to a copy of the latest
> > Bloodhound
> > release sorted, we can start a discussion with INFRA on the best way
> > forward.
> >
> > One question I did have though is this. Should I be looking to restore
> > the
> > current user base?
> >
> > We may also need to discuss solving the issues of spam users and posts
> > etc
> > which caused some issues previously.
> >
> > Will keep you all updated with progress.
> >
> > Cheers.
> >
> > John.
> >
> > On 13 Nov 2017 11:21, "Gary" <ga...@physics.org> wrote:
> >
> > > Hi John,
> > >
> > > Just a quick note about the upgrade. I suspect that the backup you are
> > > testing on has not got the proper environment. It seems a little
> > > surprising that the attachments are in the wrong place for a 0.4
> > > installation. I've found the backup that I made for the upgrade work
> > > and passed it on to you. Assuming that is the right stuff, that might
> > > make life easier!
> > >
> > > Perhaps we can look at puppet again shortly. It may be that the
> > > problems that I had were smaller than they looked. I would expect it to
> > > be fine to install without but have a commitment to get the setup
> > > properly puppetised later. Though obviously that is something to clear
> > > with INFRA again.
> > >
> > > Cheers,
> > >   Gary
> > >
> > >
> > > On Sun, 12 Nov 2017, at 10:20 AM, John Chambers wrote:
> > > > Hi all,
> > > >
> > > > I just wanted to give a quick status update on my progress with
> getting
> > > > the
> > > > live issue tracker and wiki back online.
> > > >
> > > > I have managed to use the existing vagrant/salt files to provision a
> vm
> > > > with the 0.4 version installed.
> > > > I have also got a fix for the issue where apache was unable to serve
> > > > bloodhound. I have managed to use the existing backup of the live
> site to
> > > > restore the database.
> > > > However because the live site wasn't a standard 0.4 installation, I
> think
> > > > it was still using trac 0.13dev for some reason, I was unable to just
> > > > restore the ticket attachments.
> > > > So rather than waste time investigating this I just went through and
> > > > reattached the files manually. So I now have a working version of the
> > > > live
> > > > site at version 0.4 with ticket attachments.
> > > > What I don't have in my backup is the wiki attachments. So unless
> anyone
> > > > else has a backup of them I would have to get INFRA to restart the
> old
> > > > VM.
> > > > I am reluctant to do this unless I really have to.
> > > >
> > > > What I have planned next are:
> > > >
> > > >    - Create new backup from 0.4 using trac-admin hotcopy which will
> > > >    clean and restore the database.
> > > >    - Test that I can consistently rebuild with just vagrant/salt and
> > > >    backup file.
> > > >    - Build new 0.8 vm using vagrant/salt.
> > > >    - Restore from my 0.4 backup.
> > > >    - Upgrade if necessary.
> > > >    - Test everything is working.
> > > >    - Create new backup for version 0.8
> > > >    - Test that I can consistently rebuild this version with just
> > > >    vagrant/salt and backup file.
> > > >    - Commit my changes to the vagrant/salt files to trunk and publish
> > > >    my restore instructions here, also commit my live 0.8 backup file
> > > >    to the private svn repository.
> > > >
> > > > The next stage after that will be to work with INFRA to create
> > > > puppet scripts to match the vagrant/salt ones we have to provision
> > > > and setup the new live VM.
> > > > Maybe this is work that others could look at whilst I complete the
> > > > above.
> > > > If someone wants to start this work let me know and I will commit my
> > > > fix for the apache issue and changes to provision the live vm to
> > > > trunk for you to make use of.
> > > >
> > > > Cheers
> > > >
> > > > John.
> > > >
> > > > On 7 November 2017 at 18:50, Dammina Sahabandu <dmsahabandu@gmail.com>
> > > > wrote:
> > > >
> > > > > +1 for deploying 0.8 release which is the latest.
> > > > >
> > > > > On Tue, Nov 7, 2017 at 6:11 PM Gary <ga...@physics.org> wrote:
> > > > >
> > > > > > On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
> > > > > > > Hello John
> > > > > > >
> > > > > > > On 11/6/17, John Chambers <ch...@apache.org> wrote:
> > > > > > > > Hi Olemis,
> > > > > > > >
> > > > > > > > The plan that has been discussed before was to migrate to an
> > > > > > > > 0.8 instance.
> > > > > > > > So we can make use of full multi-product support. I still
> > > > > > > > think this is possible.
> > > > > > > > Having multiple instances could cause some confusion in my
> > > > > > > > opinion so I would look to avoid that if possible.
> > > > > > > >
> > > > > > > [...]
> > > > > > >
> > > > > > > There is a reason why b.a.o was not upgraded to 0.8 before.
> > > > > > > That's why I suggested keeping both VMs during the time 0.8
> > > > > > > evolves to become stable (which I think should be the next
> > > > > > > step, cmiiw). IMHO having a stable multi-product version should
> > > > > > > be a precondition to shut down the 0.4 instance and release a
> > > > > > > new version (be it 0.8.x, 0.9 or ...).
> > > > > > >
> > > > > > > I'll also plan for bringing back to life the blood-hound.net
> > > > > > > multi-product instance. In the process I might get some
> > > > > > > inspiration to write a Dockerfile. These days I only tend to
> > > > > > > deploy services in containers. Nonetheless, like I just said,
> > > > > > > looking forward I'd recommend doing a (major?) architectural
> > > > > > > upgrade in the BH code base.
> > > > > > >
> > > > > >
> > > > > > If we can prove that the 0.8 will work well enough, let's just
> > > > > > use that.
> > > > > > I don't think an intermediate situation where we are running two
> > > > > > or more is feasible. We should be using the latest version we can
> > > > > > and preferably the latest release.
> > > > > >
> > > > > > Cheers,
> > > > > >     Gary
> > > > > >
> > > > > --
> > > > > Dammina Sahabandu
> > > > > Associate Tech Lead, AdroitLogic
> > > > > Committer, Apache Software Foundation
> > > > > AMIE (SL)
> > > > > Bsc Eng Hons (Moratuwa)
> > > > > +94716422775
> > > > >
> > >
>

Re: Public Bloodhound is down again

Posted by Greg Stein <gs...@gmail.com>.
Infra can provide a bare Ubuntu 16.04 VM via puppet, and then you can
finish the install from there. No skin off Infra's back. We encourage
project VMs to use puppet to easily restore their VM should something go
sideways. But if you can easily rebuild another way... hey, fine.

Cheers,
-g

