Posted to dev@bloodhound.apache.org by Dammina Sahabandu <dm...@gmail.com> on 2017/10/23 06:59:29 UTC

Public Bloodhound is down again

Hi All,

The publicly hosted instance of Bloodhound has been down again for some
time now. We need to come up with a sustainable way to keep it up and
running. At the very least we should activate a different health-check
monitoring service.
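
Even a very small external check would be better than nothing. Something
along these lines, run from cron, would at least tell us quickly when the
site goes down (just a rough sketch; the URL is our public instance and
everything else is illustrative):

    # Rough sketch only: polls the public instance and prints UP/DOWN; in
    # practice this would run from cron and send an alert instead.
    import urllib.request

    URL = "https://issues.apache.org/bloodhound/"

    def is_up(url=URL, timeout=30):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.getcode() == 200
        except Exception:
            return False

    if __name__ == "__main__":
        print("UP" if is_up() else "DOWN")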

As I remember, last time we reported this to the infra team and it got
resolved. Is this the method we have been following all along? Or do we
have any control over the VM that this instance is running on?

I would like to take responsibility for keeping the instance up and
running in the future, but first I need some guidance from our senior
members.

Thanks,
Dammina

-- 
Dammina Sahabandu
Associate Tech Lead, AdroitLogic
Committer, Apache Software Foundation
AMIE (SL)
Bsc Eng Hons (Moratuwa)
+94716422775

Re: Public Bloodhound is down again

Posted by Dammina Sahabandu <dm...@gmail.com>.
Hi John,

Thank you very much for the effort, and let me know if you need any help.

Thanks,
Dammina

On Sun, Nov 5, 2017 at 3:35 PM, John Chambers <ch...@apache.org> wrote:

> Hi Greg/Dammina,
>
> I have been looking at how to get a working version of the Bloodhound
> instance locally from a backup file I made on August 1st 2017. I have asked
> Gary for some help with instructions on the correct way to do this. However
> if anyone else has the details then please let me know. I am unsure at the
> moment if the backup I have has everything in it needed to restore the
> issues and wiki to the state it was previously. Also I am not aware of any
> other backups of the system.
>
> My plan is to create a working 0.4 version of Bloodhound from this backup
> and then upgrade it to 0.8. Then take another backup which can then be used
> to create our new instance of the official site on a new VM. Once I can get
> to this point I will publish the instructions here along with the backup
> file for others to try. Then once the process has been verified I will put
> my name forward as the second person to manage the new official VM and work
> with Dammina to get it operational and maintained going forward.
>
> I will try and report my progress here as often as I can.
>
> Cheers
>
> John.
>
>
>
> On 23 October 2017 at 10:55, Dammina Sahabandu <dm...@gmail.com>
> wrote:
>
> > Hi Greg,
> >
> > I’m very glad to hear that we have an infra member among us. As I
> > mentioned earlier, I will take the responsibility of taking care of the
> > VMs. However, at the start I might need some guidance from you. I would
> > like to learn how this works rather than just raising a ticket to infra.
> >
> > So if it is possible I would like to have a Skype call with you, where
> > other interested contributors are also welcome to join and learn.
> >
> > Please let me know how to proceed.
> >
> > Thanks,
> > Dammina
> >
> > On Mon, Oct 23, 2017 at 1:28 PM Greg Stein <gs...@gmail.com> wrote:
> >
> > > Hey Dammina,
> > >
> > > Short answer: Infra turned off the service because it wasn't being
> > > maintained. More below:
> > >
> > > On Mon, Oct 23, 2017 at 1:59 AM, Dammina Sahabandu <
> > dmsahabandu@gmail.com>
> > > wrote:
> > > >...
> > >
> > > > Hi All,
> > > >
> > > > The public hosted instance of bloodhound is down again for some time
> > now.
> > > > We need to come up with a sustainable methodology to keep this up and
> > > > running. At least we should activate a different health check
> > monitoring
> > > > service.
> > > >
> > >
> > > Infra has ample monitoring. No worries there. But a couple things got
> > > broken on it, and never fixed. There were also a couple VMs running for
> > > Bloodhound demos, but those weren't worked on either. ... So Infra just
> > > shut it all down.
> > >
> > >
> > > > As I remember last time we have reported this to infra team and get
> > > > resolved. Is this the method that we have following all along? Or
> else
> > do
> > > > we have any control to the VM that this instance is running.
> > > >
> > > > I would like to take the responsibility of keeping the instance up
> and
> > > > running in the future, but first I need some guidance from our senior
> > > > members.
> > > >
> > >
> > > Speaking for Infra now: we would like an additional person to make a
> > > similar commitment, before we spin up a VM for Apache Bloodhound. We've
> > > been going back/forth on these VMs for a while now, and have yet to
> > > succeed.
> > >
> > > I'm reachable here (as a PMC Member) or on users@infra or via a Jira
> > > ticket. Happy to help, but Infra needs some assurances of assistance
> for
> > > your VM(s).
> > >
> > > Cheers,
> > > Greg Stein
> > > Infrastructure Administrator, ASF
> > > (and a Bloodhound PMC member)
> > >
> > --
> > Dammina Sahabandu
> > Associate Tech Lead, AdroitLogic
> > Committer, Apache Software Foundation
> > AMIE (SL)
> > Bsc Eng Hons (Moratuwa)
> > +94716422775
> >
>



-- 
Dammina Sahabandu
Associate Tech Lead, AdroitLogic
Committer, Apache Software Foundation
AMIE (SL)
Bsc Eng Hons (Moratuwa)
+94716422775

Re: Public Bloodhound is down again

Posted by John Chambers <ch...@apache.org>.
Hi Greg,

I have been ill for a few days, but I will be trying to get this completed
over the weekend. It appears that Gary had got the migration mostly
finished; I just need to make some changes and do some testing before we
can get INFRA to switch https://issues.apache.org/bloodhound over to the
new instance. I just want to get the site back up in a reasonable state
without too much effort and then work on any improvements should the
project become more active.

I will try to post more regular updates.

Cheers

John


On 9 February 2018 at 05:59, Greg Stein <gs...@gmail.com> wrote:

> Hi Dammina, John,
>
> What is the status of the new VM? (per
> https://issues.apache.org/jira/browse/INFRA-13255)
>
> Dammina: I see that you're not "watching" that ticket. Please do so. These
> Bloodhound VMs have been off/on/ignored for far too long, and I was hoping
> that you and John would get bloodhound-vm3 up and running. I had asked for
> *two* volunteers.
>
> Can Infrastructure delete bloodhound-vm and bloodhound-vm2? I presume
> there is no data to copy from those?
>
> Thanks,
> Greg Stein
> Infrastructure Administrator, ASF
> (and Bloodhound PMC Member)
>
>
> On Sun, Nov 5, 2017 at 7:07 PM, Greg Stein <gs...@gmail.com> wrote:
>
>> On Sun, Nov 5, 2017 at 4:05 AM, John Chambers <ch...@apache.org> wrote:
>>
>>> Hi Greg/Dammina,
>>>
>>> I have been looking at how to get a working version of the Bloodhound
>>> instance locally from a backup file I made on August 1st 2017. I have
>>> asked
>>>
>>
>> That should be the latest, as the site wasn't running after that date, so
>> no further changes could have been made.
>>
>> Infra should have backups (and I believe the VM is still around, but
>> turned off).
>>
>>
>>> Gary for some help with instructions on the correct way to do this.
>>> However
>>> if anyone else has the details then please let me know. I am unsure at
>>> the
>>> moment if the backup I have has everything in it needed to restore the
>>> issues and wiki to the state it was previously. Also I am not aware of
>>> any
>>> other backups of the system.
>>>
>>> My plan is to create a working 0.4 version of Bloodhound from this backup
>>> and then upgrade it to 0.8. Then take another backup which can then be
>>> used
>>> to create our new instance of the official site on a new VM. Once I can
>>> get
>>> to this point I will publish the instructions here along with the backup
>>> file for others to try. Then once the process has been verified I will
>>> put
>>> my name forward as the second person to manage the new official VM and
>>> work
>>> with Dammina to get it operational and maintained going forward.
>>>
>>
>> Great! With you and Dammina, we can reopen INFRA-13255 and y'all can help
>> set up and manage a new VM. That can either be the issues.apache.org
>> live VM, or a demo, as you wish.
>>
>> Cheers,
>> -g
>>
>>
>

Re: Public Bloodhound is down again

Posted by Greg Stein <gs...@gmail.com>.
Hi Dammina, John,

What is the status of the new VM? (per
https://issues.apache.org/jira/browse/INFRA-13255)

Dammina: I see that you're not "watching" that ticket. Please do so. These
Bloodhound VMs have been off/on/ignored for far too long, and I was hoping
that you and John would get bloodhound-vm3 up and running. I had asked for
*two* volunteers.

Can Infrastructure delete bloodhound-vm and bloodhound-vm2? I presume there
is no data to copy from those?

Thanks,
Greg Stein
Infrastructure Administrator, ASF
(and Bloodhound PMC Member)


On Sun, Nov 5, 2017 at 7:07 PM, Greg Stein <gs...@gmail.com> wrote:

> On Sun, Nov 5, 2017 at 4:05 AM, John Chambers <ch...@apache.org> wrote:
>
>> Hi Greg/Dammina,
>>
>> I have been looking at how to get a working version of the Bloodhound
>> instance locally from a backup file I made on August 1st 2017. I have
>> asked
>>
>
> That should be the latest, as the site wasn't running after that date, so
> no further changes could have been made.
>
> Infra should have backups (and I believe the VM is still around, but
> turned off).
>
>
>> Gary for some help with instructions on the correct way to do this.
>> However
>> if anyone else has the details then please let me know. I am unsure at the
>> moment if the backup I have has everything in it needed to restore the
>> issues and wiki to the state it was previously. Also I am not aware of any
>> other backups of the system.
>>
>> My plan is to create a working 0.4 version of Bloodhound from this backup
>> and then upgrade it to 0.8. Then take another backup which can then be
>> used
>> to create our new instance of the official site on a new VM. Once I can
>> get
>> to this point I will publish the instructions here along with the backup
>> file for others to try. Then once the process has been verified I will put
>> my name forward as the second person to manage the new official VM and
>> work
>> with Dammina to get it operational and maintained going forward.
>>
>
> Great! With you and Dammina, we can reopen INFRA-13255 and y'all can help
> set up and manage a new VM. That can either be the issues.apache.org live
> VM, or a demo, as you wish.
>
> Cheers,
> -g
>
>

Re: Public Bloodhound is down again

Posted by Gary <ga...@physics.org>.
Hi Daniel,

Like all offers to help, this is really awesome and very much
appreciated. I realise that we have to capture all this enthusiasm while
it lasts, so sorry that this is still turning out to be a bit of a long
process.

I'll try to ensure that we get you and others who are interested the
ability to edit the wiki and raise tickets as quickly as possible.

Any expertise you have with LDAP may become useful at some point,
although I think we are unlikely to be bringing that up too early. We'll
need to discuss with INFRA whether that is something that can be done.
As I mentioned before, it would be nice if we didn't have to worry about
introducing spam users!

Looking further ahead, with a move to Django we may gain the ability to
make use of authentication middleware for LDAP. This will need a bit of
investigation to check alternatives. We will not want to implement
something like this ourselves!
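
To be clear about what I mean, a Django setup would typically lean on an
existing package rather than our own code. A rough sketch, assuming the
third-party django-auth-ldap package and a purely hypothetical LDAP
server, would look something like this in settings.py:

    # Rough sketch of a Django settings fragment, assuming the third-party
    # django-auth-ldap package; server URI, base DN and bind details below
    # are placeholders, not real ASF infrastructure values.
    import ldap
    from django_auth_ldap.config import LDAPSearch

    AUTH_LDAP_SERVER_URI = "ldaps://ldap.example.org"
    AUTH_LDAP_BIND_DN = ""        # anonymous bind, for the sketch only
    AUTH_LDAP_BIND_PASSWORD = ""

    # Look users up by uid under a placeholder base DN.
    AUTH_LDAP_USER_SEARCH = LDAPSearch(
        "ou=people,dc=example,dc=org",
        ldap.SCOPE_SUBTREE,
        "(uid=%(user)s)",
    )

    # Try LDAP first, then fall back to Django's local accounts.
    AUTHENTICATION_BACKENDS = [
        "django_auth_ldap.backend.LDAPBackend",
        "django.contrib.auth.backends.ModelBackend",
    ]

The nice part is that our own permission model could stay on top of
whichever backend actually authenticates the user.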

At the moment I am sure there are plenty of areas to get involved with,
along with an overall discussion to influence.

Cheers,
    Gary


On Mon, 18 Dec 2017, at 10:17 AM, Daniel Brownridge wrote:
> Hi Gary,
> 
> Just catching up with all the mails after a busy work period.
> 
> I've never had any kind of account but would like one to be able to
> contribute.
> 
> If there is some kind of list, sign me up please!
> 
> Should note I also have some (very rusty) experience with LDAP so can
> help with that if necessary.
> 
> Thanks,
> 
> Daniel
> 
> On 14/11/17 19:05, Gary wrote:
> > That is an interesting question.
> >
> > One thing we should really be considering is whether we can be using a
> > common set of users to the apache instance of jira through ldap or
> > however it works. It may introduce a small barrier to access to ask
> > people to register through jira but it might be nice to be able to avail
> > ourselves of responsibility of working out who the real users are.
> >
> > I expect we should be able to work with our own permissions against an
> > external ldap, for instance. I have not yet tried such a setup of
> > course. This may be something that is delayed for further down the line
> > if it would delay recovery too much. It may be that we can use multiple
> > sources for users too. Possibly worth checking.
> >
> > Anyway, the immediate need should be considered to be getting anyone who
> > needs access signed up so the questions around this need to be
> > considered by others here are:
> >
> >  * would those who already have accounts mind if they needed to sign up
> >  again?
> >  * can we have access to register accounts opened for now or do we want
> >  spam control in place from the start?
> >  * would a longer term plan to have accounts 'shared' with another
> >  apache issue tracker instance bother you?
> >
> > Cheers,
> >   Gary
> >
> > On Mon, 13 Nov 2017, at 12:56 PM, John Chambers wrote:
> >> Thanks Gary. I will take a look at the backup you provided later today.
> >> Hopefully as you say it will make the restore process much easier.
> >>
> >> I think once I have the restore process to a copy of the latest
> >> Bloodhound
> >> release sorted, we can start a discussion with INFRA on the best way
> >> forward.
> >>
> >> One question I did have though is this. Should I be looking to restore
> >> the
> >> current user base?
> >>
> >> We may also need to discuss solving the issues of spam users and posts
> >> etc
> >> which caused some issues previously.
> >>
> >> Will keep you all updated with progress.
> >>
> >> Cheers.
> >>
> >> John.
> >>
> >> On 13 Nov 2017 11:21, "Gary" <ga...@physics.org> wrote:
> >>
> >>> Hi John,
> >>>
> >>> Just a quick note about the upgrade. I suspect that the backup you are
> >>> testing on has not got the proper environment. It seems a little
> >>> surprising that the attachments are in the wrong place for a 0.4
> >>> installation. I've found the backup that I made for the upgrade work and
> >>> passed it on to you. Assuming that is the right stuff, that might make
> >>> life easier!
> >>>
> >>> Perhaps we can look at puppet again shortly. It may be that the problems
> >>> that I had were smaller than they looked. I would expect it to be fine
> >>> to install without but have a commitment to get the setup properly
> >>> puppetised later. Though obviously that is something to clear with INFRA
> >>> again.
> >>>
> >>> Cheers,
> >>>   Gary
> >>>
> >>>
> >>> On Sun, 12 Nov 2017, at 10:20 AM, John Chambers wrote:
> >>>> Hi all,
> >>>>
> >>>> I just wanted to give a quick status update on my progress with getting
> >>>> the
> >>>> live issue tracker and wiki back online.
> >>>>
> >>>> I have managed to use the existing vagrant/salt files to provision a vm
> >>>> with the 0.4 version installed.
> >>>> I have also got a fix for the issue where apache was unable to serve
> >>>> bloodhound. I have managed to use the existing backup of the live site to
> >>>> restore the database.
> >>>> However because the live site wasn't a standard 0.4 installation, I think
> >>>> it was still using trac 0.13dev for some reason, I was unable to just
> >>>> restore the ticket attachments.
> >>>> So rather than waste time investigating this I just went through and
> >>>> reattached the files manually. So I now have a working version of the
> >>>> live
> >>>> site at version 0.4 with ticket attachments.
> >>>> What I don't have in my backup is the wiki attachments. So unless anyone
> >>>> else has a backup of them I would have to get INFRA to restart the old
> >>>> VM.
> >>>> I am reluctant to do this unless I really have to.
> >>>>
> >>>> What I have planned next are:
> >>>>
> >>>>    - Create new backup from 0.4 using trac-admin hotcopy which will clean
> >>>>    and restore the database.
> >>>>    - Test that I can consistently rebuild with just vagrant/salt and
> >>>>    backup
> >>>>    file.
> >>>>    - Build new 0.8 vm using vagrant/salt.
> >>>>    - Restore from my 0.4 backup.
> >>>>    - Upgrade if necessary.
> >>>>    - Test everything is working.
> >>>>    - Create new backup for version 0.8
> >>>>    - Test that I can consistently rebuild this version with just
> >>>>    vagrant/salt and backup file.
> >>>>    - Commit my changes to the vagrant/salt files to trunk and publish my
> >>>>    restore instructions here, also commit my live 0.8 backup file to the
> >>>>    private svn repository.
> >>>>
> >>>> The next stage after that will be to work with INFRA to create puppet
> >>>> scripts to match the vagrant/salt ones we have to provision and setup the
> >>>> new live VM.
> >>>> Maybe this is work that others could look at whilst I complete the above.
> >>>> If someone wants to start this work let me know and I will commit my fix
> >>>> for the apache issue and changes to provision the live vm to trunk for
> >>>> you
> >>>> to make use of.
> >>>>
> >>>> Cheers
> >>>>
> >>>> John.
> >>>>
> >>>> On 7 November 2017 at 18:50, Dammina Sahabandu <dm...@gmail.com>
> >>>> wrote:
> >>>>
> >>>>> +1 for deploying 0.8 release which is the latest.
> >>>>>
> >>>>> On Tue, Nov 7, 2017 at 6:11 PM Gary <ga...@physics.org> wrote:
> >>>>>
> >>>>>> On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
> >>>>>>> Hello John
> >>>>>>>
> >>>>>>> On 11/6/17, John Chambers <ch...@apache.org> wrote:
> >>>>>>>> Hi Olemis,
> >>>>>>>>
> >>>>>>>> The plan that has been discussed before was to migrate to an 0.8
> >>>>>> instance.
> >>>>>>>> So we can make use of full multi-product support. I still think
> >>> this
> >>>>> is
> >>>>>>>> possible.
> >>>>>>>> Having multiple instances could cause some confusion in my
> >>> opinion
> >>>>> so I
> >>>>>>>> would look to avoid that if possible.
> >>>>>>>>
> >>>>>>> [...]
> >>>>>>>
> >>>>>>> There is a reason why b.a.o was not upgraded to 0.8 before . That's
> >>>>>>> why I suggested keeping both VMs during the time 0.8 evolves to
> >>> become
> >>>>>>> stable (which I think should be the next step, cmiiw). IMHO having
> >>> a
> >>>>>>> stable multi-product version should be a precondition to shut down
> >>> 0.4
> >>>>>>> instance and release a new version (be it 0.8.x , 0.9 or ...).
> >>>>>>>
> >>>>>>> I'll also plan for bringing back to life blood-hound.net
> >>> multi-product
> >>>>>>> instance. In the process I might get some inspiration to write a
> >>>>>>> Docker file . These days I only tend to deploy services in
> >>> containers.
> >>>>>>> Nonetheless , like I just said , looking forward I'd recommend
> >>> doing a
> >>>>>>> (major?) architectural upgrade in BH code base.
> >>>>>>>
> >>>>>> If we can prove that the 0.8 will work well enough, let's just use
> >>> that.
> >>>>>> I don't think an intermediate situation where we are running two or
> >>> more
> >>>>>> is feasible. We should be using the latest version we can and
> >>> preferably
> >>>>>> the latest release.
> >>>>>>
> >>>>>> Cheers,
> >>>>>>     Gary
> >>>>>>
> >>>>> --
> >>>>> Dammina Sahabandu
> >>>>> Associate Tech Lead, AdroitLogic
> >>>>> Committer, Apache Software Foundation
> >>>>> AMIE (SL)
> >>>>> Bsc Eng Hons (Moratuwa)
> >>>>> +94716422775
> >>>>>
> 

Re: Public Bloodhound is down again

Posted by Daniel Brownridge <da...@gmail.com>.
Hi Gary,

Just catching up with all the mails after a busy work period.

I've never had any kind of account but would like one to be able to
contribute.

If there is some kind of list, sign me up please!

I should note that I also have some (very rusty) experience with LDAP, so
I can help with that if necessary.

Thanks,

Daniel

On 14/11/17 19:05, Gary wrote:
> That is an interesting question.
>
> One thing we should really be considering is whether we can be using a
> common set of users to the apache instance of jira through ldap or
> however it works. It may introduce a small barrier to access to ask
> people to register through jira but it might be nice to be able to avail
> ourselves of responsibility of working out who the real users are.
>
> I expect we should be able to work with our own permissions against an
> external ldap, for instance. I have not yet tried such a setup of
> course. This may be something that is delayed for further down the line
> if it would delay recovery too much. It may be that we can use multiple
> sources for users too. Possibly worth checking.
>
> Anyway, the immediate need should be considered to be getting anyone who
> needs access signed up so the questions around this need to be
> considered by others here are:
>
>  * would those who already have accounts mind if they needed to sign up
>  again?
>  * can we have access to register accounts opened for now or do we want
>  spam control in place from the start?
>  * would a longer term plan to have accounts 'shared' with another
>  apache issue tracker instance bother you?
>
> Cheers,
>   Gary
>
> On Mon, 13 Nov 2017, at 12:56 PM, John Chambers wrote:
>> Thanks Gary. I will take a look at the backup you provided later today.
>> Hopefully as you say it will make the restore process much easier.
>>
>> I think once I have the restore process to a copy of the latest
>> Bloodhound
>> release sorted, we can start a discussion with INFRA on the best way
>> forward.
>>
>> One question I did have though is this. Should I be looking to restore
>> the
>> current user base?
>>
>> We may also need to discuss solving the issues of spam users and posts
>> etc
>> which caused some issues previously.
>>
>> Will keep you all updated with progress.
>>
>> Cheers.
>>
>> John.
>>
>> On 13 Nov 2017 11:21, "Gary" <ga...@physics.org> wrote:
>>
>>> Hi John,
>>>
>>> Just a quick note about the upgrade. I suspect that the backup you are
>>> testing on has not got the proper environment. It seems a little
>>> surprising that the attachments are in the wrong place for a 0.4
>>> installation. I've found the backup that I made for the upgrade work and
>>> passed it on to you. Assuming that is the right stuff, that might make
>>> life easier!
>>>
>>> Perhaps we can look at puppet again shortly. It may be that the problems
>>> that I had were smaller than they looked. I would expect it to be fine
>>> to install without but have a commitment to get the setup properly
>>> puppetised later. Though obviously that is something to clear with INFRA
>>> again.
>>>
>>> Cheers,
>>>   Gary
>>>
>>>
>>> On Sun, 12 Nov 2017, at 10:20 AM, John Chambers wrote:
>>>> Hi all,
>>>>
>>>> I just wanted to give a quick status update on my progress with getting
>>>> the
>>>> live issue tracker and wiki back online.
>>>>
>>>> I have managed to use the existing vagrant/salt files to provision a vm
>>>> with the 0.4 version installed.
>>>> I have also got a fix for the issue where apache was unable to serve
>>>> bloodhound. I have managed to use the existing backup of the live site to
>>>> restore the database.
>>>> However because the live site wasn't a standard 0.4 installation, I think
>>>> it was still using trac 0.13dev for some reason, I was unable to just
>>>> restore the ticket attachments.
>>>> So rather than waste time investigating this I just went through and
>>>> reattached the files manually. So I now have a working version of the
>>>> live
>>>> site at version 0.4 with ticket attachments.
>>>> What I don't have in my backup is the wiki attachments. So unless anyone
>>>> else has a backup of them I would have to get INFRA to restart the old
>>>> VM.
>>>> I am reluctant to do this unless I really have to.
>>>>
>>>> What I have planned next are:
>>>>
>>>>    - Create new backup from 0.4 using trac-admin hotcopy which will clean
>>>>    and restore the database.
>>>>    - Test that I can consistently rebuild with just vagrant/salt and
>>>>    backup
>>>>    file.
>>>>    - Build new 0.8 vm using vagrant/salt.
>>>>    - Restore from my 0.4 backup.
>>>>    - Upgrade if necessary.
>>>>    - Test everything is working.
>>>>    - Create new backup for version 0.8
>>>>    - Test that I can consistently rebuild this version with just
>>>>    vagrant/salt and backup file.
>>>>    - Commit my changes to the vagrant/salt files to trunk and publish my
>>>>    restore instructions here, also commit my live 0.8 backup file to the
>>>>    private svn repository.
>>>>
>>>> The next stage after that will be to work with INFRA to create puppet
>>>> scripts to match the vagrant/salt ones we have to provision and setup the
>>>> new live VM.
>>>> Maybe this is work that others could look at whilst I complete the above.
>>>> If someone wants to start this work let me know and I will commit my fix
>>>> for the apache issue and changes to provision the live vm to trunk for
>>>> you
>>>> to make use of.
>>>>
>>>> Cheers
>>>>
>>>> John.
>>>>
>>>> On 7 November 2017 at 18:50, Dammina Sahabandu <dm...@gmail.com>
>>>> wrote:
>>>>
>>>>> +1 for deploying 0.8 release which is the latest.
>>>>>
>>>>> On Tue, Nov 7, 2017 at 6:11 PM Gary <ga...@physics.org> wrote:
>>>>>
>>>>>> On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
>>>>>>> Hello John
>>>>>>>
>>>>>>> On 11/6/17, John Chambers <ch...@apache.org> wrote:
>>>>>>>> Hi Olemis,
>>>>>>>>
>>>>>>>> The plan that has been discussed before was to migrate to an 0.8
>>>>>> instance.
>>>>>>>> So we can make use of full multi-product support. I still think
>>> this
>>>>> is
>>>>>>>> possible.
>>>>>>>> Having multiple instances could cause some confusion in my
>>> opinion
>>>>> so I
>>>>>>>> would look to avoid that if possible.
>>>>>>>>
>>>>>>> [...]
>>>>>>>
>>>>>>> There is a reason why b.a.o was not upgraded to 0.8 before . That's
>>>>>>> why I suggested keeping both VMs during the time 0.8 evolves to
>>> become
>>>>>>> stable (which I think should be the next step, cmiiw). IMHO having
>>> a
>>>>>>> stable multi-product version should be a precondition to shut down
>>> 0.4
>>>>>>> instance and release a new version (be it 0.8.x , 0.9 or ...).
>>>>>>>
>>>>>>> I'll also plan for bringing back to life blood-hound.net
>>> multi-product
>>>>>>> instance. In the process I might get some inspiration to write a
>>>>>>> Docker file . These days I only tend to deploy services in
>>> containers.
>>>>>>> Nonetheless , like I just said , looking forward I'd recommend
>>> doing a
>>>>>>> (major?) architectural upgrade in BH code base.
>>>>>>>
>>>>>> If we can prove that the 0.8 will work well enough, let's just use
>>> that.
>>>>>> I don't think an intermediate situation where we are running two or
>>> more
>>>>>> is feasible. We should be using the latest version we can and
>>> preferably
>>>>>> the latest release.
>>>>>>
>>>>>> Cheers,
>>>>>>     Gary
>>>>>>
>>>>> --
>>>>> Dammina Sahabandu
>>>>> Associate Tech Lead, AdroitLogic
>>>>> Committer, Apache Software Foundation
>>>>> AMIE (SL)
>>>>> Bsc Eng Hons (Moratuwa)
>>>>> +94716422775
>>>>>


Re: Public Bloodhound is down again

Posted by Greg Stein <gs...@gmail.com>.
Infra can provide a bare Ubuntu 16.04 VM via puppet, and then you can
finish the install from there. No skin off Infra's back. We encourage
project VMs to use puppet to easily restore their VM should something go
sideways. But if you can easily rebuild another way... hey, fine.

Cheers,
-g


On Wed, Dec 13, 2017 at 4:36 PM, John Chambers <ch...@apache.org> wrote:

> As prompted (Thanks Gary 😉 ) a quick update on my progress with getting
> the public instance back online.
>
> I have just committed my code changes to the Vagrant and salt files to
> enable us to provision an instance of 0.8 and I have created a backup of
> the live tickets and wiki for 0.8.
> I am not sure if I should post that backup anyway. Any ideas let me know. I
> can also provide the steps to restore from the backup if anyone is
> interested in trying these changes out locally for themselves. Just let me
> know.
>
> The next step I think is to contact INFRA to see what is actually required
> to provision the new VM. We have salt code currently but I am thinking that
> INFRA may require puppet.
>
> The other outstanding question is what to do about the users and
> permissions. I have the original htdigest file and the permissions set in
> the database, but I am wondering if we should just reset the users and
> permissions and start again. Maybe we can discuss this once the live
> instance has been sorted.
>
> Cheers,
>   John.
>
> On 14 November 2017 at 19:05, Gary <ga...@physics.org> wrote:
>
> > That is an interesting question.
> >
> > One thing we should really be considering is whether we can be using a
> > common set of users to the apache instance of jira through ldap or
> > however it works. It may introduce a small barrier to access to ask
> > people to register through jira but it might be nice to be able to avail
> > ourselves of responsibility of working out who the real users are.
> >
> > I expect we should be able to work with our own permissions against an
> > external ldap, for instance. I have not yet tried such a setup of
> > course. This may be something that is delayed for further down the line
> > if it would delay recovery too much. It may be that we can use multiple
> > sources for users too. Possibly worth checking.
> >
> > Anyway, the immediate need should be considered to be getting anyone who
> > needs access signed up so the questions around this need to be
> > considered by others here are:
> >
> >  * would those who already have accounts mind if they needed to sign up
> >  again?
> >  * can we have access to register accounts opened for now or do we want
> >  spam control in place from the start?
> >  * would a longer term plan to have accounts 'shared' with another
> >  apache issue tracker instance bother you?
> >
> > Cheers,
> >   Gary
> >
> > On Mon, 13 Nov 2017, at 12:56 PM, John Chambers wrote:
> > > Thanks Gary. I will take a look at the backup you provided later today.
> > > Hopefully as you say it will make the restore process much easier.
> > >
> > > I think once I have the restore process to a copy of the latest
> > > Bloodhound
> > > release sorted, we can start a discussion with INFRA on the best way
> > > forward.
> > >
> > > One question I did have though is this. Should I be looking to restore
> > > the
> > > current user base?
> > >
> > > We may also need to discuss solving the issues of spam users and posts
> > > etc
> > > which caused some issues previously.
> > >
> > > Will keep you all updated with progress.
> > >
> > > Cheers.
> > >
> > > John.
> > >
> > > On 13 Nov 2017 11:21, "Gary" <ga...@physics.org> wrote:
> > >
> > > > Hi John,
> > > >
> > > > Just a quick note about the upgrade. I suspect that the backup you
> are
> > > > testing on has not got the proper environment. It seems a little
> > > > surprising that the attachments are in the wrong place for a 0.4
> > > > installation. I've found the backup that I made for the upgrade work
> > and
> > > > passed it on to you. Assuming that is the right stuff, that might
> make
> > > > life easier!
> > > >
> > > > Perhaps we can look at puppet again shortly. It may be that the
> > problems
> > > > that I had were smaller than they looked. I would expect it to be
> fine
> > > > to install without but have a commitment to get the setup properly
> > > > puppetised later. Though obviously that is something to clear with
> > INFRA
> > > > again.
> > > >
> > > > Cheers,
> > > >   Gary
> > > >
> > > >
> > > > On Sun, 12 Nov 2017, at 10:20 AM, John Chambers wrote:
> > > > > Hi all,
> > > > >
> > > > > I just wanted to give a quick status update on my progress with
> > getting
> > > > > the
> > > > > live issue tracker and wiki back online.
> > > > >
> > > > > I have managed to use the existing vagrant/salt files to provision
> a
> > vm
> > > > > with the 0.4 version installed.
> > > > > I have also got a fix for the issue where apache was unable to
> serve
> > > > > bloodhound. I have managed to use the existing backup of the live
> > site to
> > > > > restore the database.
> > > > > However because the live site wasn't a standard 0.4 installation, I
> > think
> > > > > it was still using trac 0.13dev for some reason, I was unable to
> just
> > > > > restore the ticket attachments.
> > > > > So rather than waste time investigating this I just went through
> and
> > > > > reattached the files manually. So I now have a working version of
> the
> > > > > live
> > > > > site at version 0.4 with ticket attachments.
> > > > > What I don't have in my backup is the wiki attachments. So unless
> > anyone
> > > > > else has a backup of them I would have to get INFRA to restart the
> > old
> > > > > VM.
> > > > > I am reluctant to do this unless I really have to.
> > > > >
> > > > > What I have planned next are:
> > > > >
> > > > >    - Create new backup from 0.4 using trac-admin hotcopy which will
> > clean
> > > > >    and restore the database.
> > > > >    - Test that I can consistently rebuild with just vagrant/salt
> and
> > > > >    backup
> > > > >    file.
> > > > >    - Build new 0.8 vm using vagrant/salt.
> > > > >    - Restore from my 0.4 backup.
> > > > >    - Upgrade if necessary.
> > > > >    - Test everything is working.
> > > > >    - Create new backup for version 0.8
> > > > >    - Test that I can consistently rebuild this version with just
> > > > >    vagrant/salt and backup file.
> > > > >    - Commit my changes to the vagrant/salt files to trunk and
> > publish my
> > > > >    restore instructions here, also commit my live 0.8 backup file
> to
> > the
> > > > >    private svn repository.
> > > > >
> > > > > The next stage after that will be to work with INFRA to create
> puppet
> > > > > scripts to match the vagrant/salt ones we have to provision and
> > setup the
> > > > > new live VM.
> > > > > Maybe this is work that others could look at whilst I complete the
> > above.
> > > > > If someone wants to start this work let me know and I will commit
> my
> > fix
> > > > > for the apache issue and changes to provision the live vm to trunk
> > for
> > > > > you
> > > > > to make use of.
> > > > >
> > > > > Cheers
> > > > >
> > > > > John.
> > > > >
> > > > > On 7 November 2017 at 18:50, Dammina Sahabandu <
> > dmsahabandu@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > +1 for deploying 0.8 release which is the latest.
> > > > > >
> > > > > > On Tue, Nov 7, 2017 at 6:11 PM Gary <ga...@physics.org>
> > wrote:
> > > > > >
> > > > > > > On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
> > > > > > > > Hello John
> > > > > > > >
> > > > > > > > On 11/6/17, John Chambers <ch...@apache.org> wrote:
> > > > > > > > > Hi Olemis,
> > > > > > > > >
> > > > > > > > > The plan that has been discussed before was to migrate to
> an
> > 0.8
> > > > > > > instance.
> > > > > > > > > So we can make use of full multi-product support. I still
> > think
> > > > this
> > > > > > is
> > > > > > > > > possible.
> > > > > > > > > Having multiple instances could cause some confusion in my
> > > > opinion
> > > > > > so I
> > > > > > > > > would look to avoid that if possible.
> > > > > > > > >
> > > > > > > > [...]
> > > > > > > >
> > > > > > > > There is a reason why b.a.o was not upgraded to 0.8 before .
> > That's
> > > > > > > > why I suggested keeping both VMs during the time 0.8 evolves
> to
> > > > become
> > > > > > > > stable (which I think should be the next step, cmiiw). IMHO
> > having
> > > > a
> > > > > > > > stable multi-product version should be a precondition to shut
> > down
> > > > 0.4
> > > > > > > > instance and release a new version (be it 0.8.x , 0.9 or
> ...).
> > > > > > > >
> > > > > > > > I'll also plan for bringing back to life blood-hound.net
> > > > multi-product
> > > > > > > > instance. In the process I might get some inspiration to
> write
> > a
> > > > > > > > Docker file . These days I only tend to deploy services in
> > > > containers.
> > > > > > > > Nonetheless , like I just said , looking forward I'd
> recommend
> > > > doing a
> > > > > > > > (major?) architectural upgrade in BH code base.
> > > > > > > >
> > > > > > >
> > > > > > > If we can prove that the 0.8 will work well enough, let's just
> > use
> > > > that.
> > > > > > > I don't think an intermediate situation where we are running
> two
> > or
> > > > more
> > > > > > > is feasible. We should be using the latest version we can and
> > > > preferably
> > > > > > > the latest release.
> > > > > > >
> > > > > > > Cheers,
> > > > > > >     Gary
> > > > > > >
> > > > > > --
> > > > > > Dammina Sahabandu
> > > > > > Associate Tech Lead, AdroitLogic
> > > > > > Committer, Apache Software Foundation
> > > > > > AMIE (SL)
> > > > > > Bsc Eng Hons (Moratuwa)
> > > > > > +94716422775
> > > > > >
> > > >
> >
>

Re: Public Bloodhound is down again

Posted by John Chambers <ch...@apache.org>.
As prompted (thanks Gary 😉), a quick update on my progress with getting
the public instance back online.

I have just committed my code changes to the Vagrant and salt files to
enable us to provision an instance of 0.8 and I have created a backup of
the live tickets and wiki for 0.8.
I am not sure if I should post that backup at all. If you have any ideas,
let me know. I can also provide the steps to restore from the backup if
anyone is interested in trying these changes out locally for themselves.
Just let me know.
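
For anyone who wants to experiment before I post the full instructions,
the core of the backup/restore is just trac-admin. This is only a rough
sketch with placeholder paths (not the real VM layout), wrapped in Python
for convenience:

    # Rough sketch only: environment and backup paths are placeholders, and
    # the definitive steps will come with the vagrant/salt changes.
    import subprocess

    ENV = "/var/local/bloodhound/main"          # hypothetical Bloodhound environment
    BACKUP = "/var/backups/bloodhound-hotcopy"  # hypothetical backup destination

    # Take a consistent snapshot of the environment (database plus attachments).
    subprocess.check_call(["trac-admin", ENV, "hotcopy", BACKUP])

    # After copying a hotcopy into place on the new VM, bring the schema and
    # default wiki pages up to date for the installed version.
    subprocess.check_call(["trac-admin", ENV, "upgrade"])
    subprocess.check_call(["trac-admin", ENV, "wiki", "upgrade"])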

The next step I think is to contact INFRA to see what is actually required
to provision the new VM. We have salt code currently but I am thinking that
INFRA may require puppet.

The other outstanding question is what to do about the users and
permissions. I have the original htdigest file and the permissions set in
the database, but I am wondering if we should just reset the users and
permissions and start again. Maybe we can discuss this once the live
instance has been sorted.
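
On the htdigest side, if we do end up resetting accounts rather than
restoring them, regenerating entries is simple enough. A sketch of the
file format (the realm below is a placeholder, not necessarily what the
old instance used):

    # Sketch of how an Apache htdigest line is built; the realm is a
    # placeholder, not necessarily what the old instance used.
    import hashlib

    def htdigest_line(user, realm, password):
        # htdigest file format: user:realm:MD5("user:realm:password")
        ha1 = hashlib.md5(("%s:%s:%s" % (user, realm, password)).encode("utf-8")).hexdigest()
        return "%s:%s:%s" % (user, realm, ha1)

    print(htdigest_line("someuser", "bloodhound", "changeme"))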

Cheers,
  John.

On 14 November 2017 at 19:05, Gary <ga...@physics.org> wrote:

> That is an interesting question.
>
> One thing we should really be considering is whether we can be using a
> common set of users to the apache instance of jira through ldap or
> however it works. It may introduce a small barrier to access to ask
> people to register through jira but it might be nice to be able to avail
> ourselves of responsibility of working out who the real users are.
>
> I expect we should be able to work with our own permissions against an
> external ldap, for instance. I have not yet tried such a setup of
> course. This may be something that is delayed for further down the line
> if it would delay recovery too much. It may be that we can use multiple
> sources for users too. Possibly worth checking.
>
> Anyway, the immediate need should be considered to be getting anyone who
> needs access signed up so the questions around this need to be
> considered by others here are:
>
>  * would those who already have accounts mind if they needed to sign up
>  again?
>  * can we have access to register accounts opened for now or do we want
>  spam control in place from the start?
>  * would a longer term plan to have accounts 'shared' with another
>  apache issue tracker instance bother you?
>
> Cheers,
>   Gary
>
> On Mon, 13 Nov 2017, at 12:56 PM, John Chambers wrote:
> > Thanks Gary. I will take a look at the backup you provided later today.
> > Hopefully as you say it will make the restore process much easier.
> >
> > I think once I have the restore process to a copy of the latest
> > Bloodhound
> > release sorted, we can start a discussion with INFRA on the best way
> > forward.
> >
> > One question I did have though is this. Should I be looking to restore
> > the
> > current user base?
> >
> > We may also need to discuss solving the issues of spam users and posts
> > etc
> > which caused some issues previously.
> >
> > Will keep you all updated with progress.
> >
> > Cheers.
> >
> > John.
> >
> > On 13 Nov 2017 11:21, "Gary" <ga...@physics.org> wrote:
> >
> > > Hi John,
> > >
> > > Just a quick note about the upgrade. I suspect that the backup you are
> > > testing on has not got the proper environment. It seems a little
> > > surprising that the attachments are in the wrong place for a 0.4
> > > installation. I've found the backup that I made for the upgrade work
> and
> > > passed it on to you. Assuming that is the right stuff, that might make
> > > life easier!
> > >
> > > Perhaps we can look at puppet again shortly. It may be that the
> problems
> > > that I had were smaller than they looked. I would expect it to be fine
> > > to install without but have a commitment to get the setup properly
> > > puppetised later. Though obviously that is something to clear with
> INFRA
> > > again.
> > >
> > > Cheers,
> > >   Gary
> > >
> > >
> > > On Sun, 12 Nov 2017, at 10:20 AM, John Chambers wrote:
> > > > Hi all,
> > > >
> > > > I just wanted to give a quick status update on my progress with
> getting
> > > > the
> > > > live issue tracker and wiki back online.
> > > >
> > > > I have managed to use the existing vagrant/salt files to provision a
> vm
> > > > with the 0.4 version installed.
> > > > I have also got a fix for the issue where apache was unable to serve
> > > > bloodhound. I have managed to use the existing backup of the live
> site to
> > > > restore the database.
> > > > However because the live site wasn't a standard 0.4 installation, I
> think
> > > > it was still using trac 0.13dev for some reason, I was unable to just
> > > > restore the ticket attachments.
> > > > So rather than waste time investigating this I just went through and
> > > > reattached the files manually. So I now have a working version of the
> > > > live
> > > > site at version 0.4 with ticket attachments.
> > > > What I don't have in my backup is the wiki attachments. So unless
> anyone
> > > > else has a backup of them I would have to get INFRA to restart the
> old
> > > > VM.
> > > > I am reluctant to do this unless I really have to.
> > > >
> > > > What I have planned next are:
> > > >
> > > >    - Create new backup from 0.4 using trac-admin hotcopy which will
> clean
> > > >    and restore the database.
> > > >    - Test that I can consistently rebuild with just vagrant/salt and
> > > >    backup
> > > >    file.
> > > >    - Build new 0.8 vm using vagrant/salt.
> > > >    - Restore from my 0.4 backup.
> > > >    - Upgrade if necessary.
> > > >    - Test everything is working.
> > > >    - Create new backup for version 0.8
> > > >    - Test that I can consistently rebuild this version with just
> > > >    vagrant/salt and backup file.
> > > >    - Commit my changes to the vagrant/salt files to trunk and
> publish my
> > > >    restore instructions here, also commit my live 0.8 backup file to
> the
> > > >    private svn repository.
> > > >
> > > > The next stage after that will be to work with INFRA to create puppet
> > > > scripts to match the vagrant/salt ones we have to provision and
> setup the
> > > > new live VM.
> > > > Maybe this is work that others could look at whilst I complete the
> above.
> > > > If someone wants to start this work let me know and I will commit my
> fix
> > > > for the apache issue and changes to provision the live vm to trunk
> for
> > > > you
> > > > to make use of.
> > > >
> > > > Cheers
> > > >
> > > > John.
> > > >
> > > > On 7 November 2017 at 18:50, Dammina Sahabandu <
> dmsahabandu@gmail.com>
> > > > wrote:
> > > >
> > > > > +1 for deploying 0.8 release which is the latest.
> > > > >
> > > > > On Tue, Nov 7, 2017 at 6:11 PM Gary <ga...@physics.org>
> wrote:
> > > > >
> > > > > > On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
> > > > > > > Hello John
> > > > > > >
> > > > > > > On 11/6/17, John Chambers <ch...@apache.org> wrote:
> > > > > > > > Hi Olemis,
> > > > > > > >
> > > > > > > > The plan that has been discussed before was to migrate to an
> 0.8
> > > > > > instance.
> > > > > > > > So we can make use of full multi-product support. I still
> think
> > > this
> > > > > is
> > > > > > > > possible.
> > > > > > > > Having multiple instances could cause some confusion in my
> > > opinion
> > > > > so I
> > > > > > > > would look to avoid that if possible.
> > > > > > > >
> > > > > > > [...]
> > > > > > >
> > > > > > > There is a reason why b.a.o was not upgraded to 0.8 before .
> That's
> > > > > > > why I suggested keeping both VMs during the time 0.8 evolves to
> > > become
> > > > > > > stable (which I think should be the next step, cmiiw). IMHO
> having
> > > a
> > > > > > > stable multi-product version should be a precondition to shut
> down
> > > 0.4
> > > > > > > instance and release a new version (be it 0.8.x , 0.9 or ...).
> > > > > > >
> > > > > > > I'll also plan for bringing back to life blood-hound.net
> > > multi-product
> > > > > > > instance. In the process I might get some inspiration to write
> a
> > > > > > > Docker file . These days I only tend to deploy services in
> > > containers.
> > > > > > > Nonetheless , like I just said , looking forward I'd recommend
> > > doing a
> > > > > > > (major?) architectural upgrade in BH code base.
> > > > > > >
> > > > > >
> > > > > > If we can prove that the 0.8 will work well enough, let's just
> use
> > > that.
> > > > > > I don't think an intermediate situation where we are running two
> or
> > > more
> > > > > > is feasible. We should be using the latest version we can and
> > > preferably
> > > > > > the latest release.
> > > > > >
> > > > > > Cheers,
> > > > > >     Gary
> > > > > >
> > > > > --
> > > > > Dammina Sahabandu
> > > > > Associate Tech Lead, AdroitLogic
> > > > > Committer, Apache Software Foundation
> > > > > AMIE (SL)
> > > > > Bsc Eng Hons (Moratuwa)
> > > > > +94716422775
> > > > >
> > >
>

Re: Public Bloodhound is down again

Posted by Gary <ga...@physics.org>.
That is an interesting question.

One thing we should really be considering is whether we can use a
common set of users with the Apache instance of Jira, through LDAP or
however it works. It may introduce a small barrier to access to ask
people to register through Jira, but it might be nice to relieve
ourselves of the responsibility of working out who the real users are.

I expect we should be able to work with our own permissions against an
external LDAP, for instance. I have not yet tried such a setup, of
course. This may be something that is delayed until further down the
line if it would delay recovery too much. It may be that we can use
multiple sources for users too. Possibly worth checking.

Anyway, the immediate need is to get anyone who needs access signed up,
so the questions that need to be considered by others here are:

 * would those who already have accounts mind if they needed to sign up
 again?
 * can we leave account registration open for now, or do we want spam
 control in place from the start?
 * would a longer-term plan to have accounts 'shared' with another
 apache issue tracker instance bother you?

Cheers,
  Gary

On Mon, 13 Nov 2017, at 12:56 PM, John Chambers wrote:
> Thanks Gary. I will take a look at the backup you provided later today.
> Hopefully as you say it will make the restore process much easier.
> 
> I think once I have the restore process to a copy of the latest
> Bloodhound
> release sorted, we can start a discussion with INFRA on the best way
> forward.
> 
> One question I did have though is this. Should I be looking to restore
> the
> current user base?
> 
> We may also need to discuss solving the issues of spam users and posts
> etc
> which caused some issues previously.
> 
> Will keep you all updated with progress.
> 
> Cheers.
> 
> John.
> 
> On 13 Nov 2017 11:21, "Gary" <ga...@physics.org> wrote:
> 
> > Hi John,
> >
> > Just a quick note about the upgrade. I suspect that the backup you are
> > testing on has not got the proper environment. It seems a little
> > surprising that the attachments are in the wrong place for a 0.4
> > installation. I've found the backup that I made for the upgrade work and
> > passed it on to you. Assuming that is the right stuff, that might make
> > life easier!
> >
> > Perhaps we can look at puppet again shortly. It may be that the problems
> > that I had were smaller than they looked. I would expect it to be fine
> > to install without but have a commitment to get the setup properly
> > puppetised later. Though obviously that is something to clear with INFRA
> > again.
> >
> > Cheers,
> >   Gary
> >
> >
> > On Sun, 12 Nov 2017, at 10:20 AM, John Chambers wrote:
> > > Hi all,
> > >
> > > I just wanted to give a quick status update on my progress with getting
> > > the
> > > live issue tracker and wiki back online.
> > >
> > > I have managed to use the existing vagrant/salt files to provision a vm
> > > with the 0.4 version installed.
> > > I have also got a fix for the issue where apache was unable to serve
> > > bloodhound. I have managed to use the existing backup of the live site to
> > > restore the database.
> > > However because the live site wasn't a standard 0.4 installation, I think
> > > it was still using trac 0.13dev for some reason, I was unable to just
> > > restore the ticket attachments.
> > > So rather than waste time investigating this I just went through and
> > > reattached the files manually. So I now have a working version of the
> > > live
> > > site at version 0.4 with ticket attachments.
> > > What I don't have in my backup is the wiki attachments. So unless anyone
> > > else has a backup of them I would have to get INFRA to restart the old
> > > VM.
> > > I am reluctant to do this unless I really have to.
> > >
> > > What I have planned next are:
> > >
> > >    - Create new backup from 0.4 using trac-admin hotcopy which will clean
> > >    and restore the database.
> > >    - Test that I can consistently rebuild with just vagrant/salt and
> > >    backup
> > >    file.
> > >    - Build new 0.8 vm using vagrant/salt.
> > >    - Restore from my 0.4 backup.
> > >    - Upgrade if necessary.
> > >    - Test everything is working.
> > >    - Create new backup for version 0.8
> > >    - Test that I can consistently rebuild this version with just
> > >    vagrant/salt and backup file.
> > >    - Commit my changes to the vagrant/salt files to trunk and publish my
> > >    restore instructions here, also commit my live 0.8 backup file to the
> > >    private svn repository.
> > >
> > > The next stage after that will be to work with INFRA to create puppet
> > > scripts to match the vagrant/salt ones we have to provision and setup the
> > > new live VM.
> > > Maybe this is work that others could look at whilst I complete the above.
> > > If someone wants to start this work let me know and I will commit my fix
> > > for the apache issue and changes to provision the live vm to trunk for
> > > you
> > > to make use of.
> > >
> > > Cheers
> > >
> > > John.
> > >
> > > On 7 November 2017 at 18:50, Dammina Sahabandu <dm...@gmail.com>
> > > wrote:
> > >
> > > > +1 for deploying 0.8 release which is the latest.
> > > >
> > > > On Tue, Nov 7, 2017 at 6:11 PM Gary <ga...@physics.org> wrote:
> > > >
> > > > > On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
> > > > > > Hello John
> > > > > >
> > > > > > On 11/6/17, John Chambers <ch...@apache.org> wrote:
> > > > > > > Hi Olemis,
> > > > > > >
> > > > > > > The plan that has been discussed before was to migrate to an 0.8
> > > > > instance.
> > > > > > > So we can make use of full multi-product support. I still think
> > this
> > > > is
> > > > > > > possible.
> > > > > > > Having multiple instances could cause some confusion in my
> > opinion
> > > > so I
> > > > > > > would look to avoid that if possible.
> > > > > > >
> > > > > > [...]
> > > > > >
> > > > > > There is a reason why b.a.o was not upgraded to 0.8 before . That's
> > > > > > why I suggested keeping both VMs during the time 0.8 evolves to
> > become
> > > > > > stable (which I think should be the next step, cmiiw). IMHO having
> > a
> > > > > > stable multi-product version should be a precondition to shut down
> > 0.4
> > > > > > instance and release a new version (be it 0.8.x , 0.9 or ...).
> > > > > >
> > > > > > I'll also plan for bringing back to life blood-hound.net
> > multi-product
> > > > > > instance. In the process I might get some inspiration to write a
> > > > > > Docker file . These days I only tend to deploy services in
> > containers.
> > > > > > Nonetheless , like I just said , looking forward I'd recommend
> > doing a
> > > > > > (major?) architectural upgrade in BH code base.
> > > > > >
> > > > >
> > > > > If we can prove that the 0.8 will work well enough, let's just use
> > that.
> > > > > I don't think an intermediate situation where we are running two or
> > more
> > > > > is feasible. We should be using the latest version we can and
> > preferably
> > > > > the latest release.
> > > > >
> > > > > Cheers,
> > > > >     Gary
> > > > >
> > > > --
> > > > Dammina Sahabandu
> > > > Associate Tech Lead, AdroitLogic
> > > > Committer, Apache Software Foundation
> > > > AMIE (SL)
> > > > Bsc Eng Hons (Moratuwa)
> > > > +94716422775
> > > >
> >

Re: Public Bloodhound is down again

Posted by John Chambers <ch...@gmail.com>.
Thanks Gary. I will take a look at the backup you provided later today.
Hopefully as you say it will make the restore process much easier.

I think once I have the restore process to a copy of the latest Bloodhound
release sorted, we can start a discussion with INFRA on the best way
forward.

One question I did have, though, is this: should I be looking to restore
the current user base?

We may also need to discuss addressing the spam users and posts, etc.,
which caused some issues previously.

Will keep you all updated with progress.

Cheers.

John.

On 13 Nov 2017 11:21, "Gary" <ga...@physics.org> wrote:

> Hi John,
>
> Just a quick note about the upgrade. I suspect that the backup you are
> testing on has not got the proper environment. It seems a little
> surprising that the attachments are in the wrong place for a 0.4
> installation. I've found the backup that I made for the upgrade work and
> passed it on to you. Assuming that is the right stuff, that might make
> life easier!
>
> Perhaps we can look at puppet again shortly. It may be that the problems
> that I had were smaller than they looked. I would expect it to be fine
> to install without but have a commitment to get the setup properly
> puppetised later. Though obviously that is something to clear with INFRA
> again.
>
> Cheers,
>   Gary
>
>
> On Sun, 12 Nov 2017, at 10:20 AM, John Chambers wrote:
> > Hi all,
> >
> > I just wanted to give a quick status update on my progress with getting
> > the
> > live issue tracker and wiki back online.
> >
> > I have managed to use the existing vagrant/salt files to provision a vm
> > with the 0.4 version installed.
> > I have also got a fix for the issue where apache was unable to serve
> > bloodhound. I have managed to use the existing backup of the live site to
> > restore the database.
> > However because the live site wasn't a standard 0.4 installation, I think
> > it was still using trac 0.13dev for some reason, I was unable to just
> > restore the ticket attachments.
> > So rather than waste time investigating this I just went through and
> > reattached the files manually. So I now have a working version of the
> > live
> > site at version 0.4 with ticket attachments.
> > What I don't have in my backup is the wiki attachments. So unless anyone
> > else has a backup of them I would have to get INFRA to restart the old
> > VM.
> > I am reluctant to do this unless I really have to.
> >
> > What I have planned next are:
> >
> >    - Create new backup from 0.4 using trac-admin hotcopy which will clean
> >    and restore the database.
> >    - Test that I can consistently rebuild with just vagrant/salt and
> >    backup
> >    file.
> >    - Build new 0.8 vm using vagrant/salt.
> >    - Restore from my 0.4 backup.
> >    - Upgrade if necessary.
> >    - Test everything is working.
> >    - Create new backup for version 0.8
> >    - Test that I can consistently rebuild this version with just
> >    vagrant/salt and backup file.
> >    - Commit my changes to the vagrant/salt files to trunk and publish my
> >    restore instructions here, also commit my live 0.8 backup file to the
> >    private svn repository.
> >
> > The next stage after that will be to work with INFRA to create puppet
> > scripts to match the vagrant/salt ones we have to provision and setup the
> > new live VM.
> > Maybe this is work that others could look at whilst I complete the above.
> > If someone wants to start this work let me know and I will commit my fix
> > for the apache issue and changes to provision the live vm to trunk for
> > you
> > to make use of.
> >
> > Cheers
> >
> > John.
> >
> > On 7 November 2017 at 18:50, Dammina Sahabandu <dm...@gmail.com>
> > wrote:
> >
> > > +1 for deploying 0.8 release which is the latest.
> > >
> > > On Tue, Nov 7, 2017 at 6:11 PM Gary <ga...@physics.org> wrote:
> > >
> > > > On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
> > > > > Hello John
> > > > >
> > > > > On 11/6/17, John Chambers <ch...@apache.org> wrote:
> > > > > > Hi Olemis,
> > > > > >
> > > > > > The plan that has been discussed before was to migrate to an 0.8
> > > > instance.
> > > > > > So we can make use of full multi-product support. I still think
> this
> > > is
> > > > > > possible.
> > > > > > Having multiple instances could cause some confusion in my
> opinion
> > > so I
> > > > > > would look to avoid that if possible.
> > > > > >
> > > > > [...]
> > > > >
> > > > > There is a reason why b.a.o was not upgraded to 0.8 before . That's
> > > > > why I suggested keeping both VMs during the time 0.8 evolves to
> become
> > > > > stable (which I think should be the next step, cmiiw). IMHO having
> a
> > > > > stable multi-product version should be a precondition to shut down
> 0.4
> > > > > instance and release a new version (be it 0.8.x , 0.9 or ...).
> > > > >
> > > > > I'll also plan for bringing back to life blood-hound.net
> multi-product
> > > > > instance. In the process I might get some inspiration to write a
> > > > > Docker file . These days I only tend to deploy services in
> containers.
> > > > > Nonetheless , like I just said , looking forward I'd recommend
> doing a
> > > > > (major?) architectural upgrade in BH code base.
> > > > >
> > > >
> > > > If we can prove that the 0.8 will work well enough, let's just use
> that.
> > > > I don't think an intermediate situation where we are running two or
> more
> > > > is feasible. We should be using the latest version we can and
> preferably
> > > > the latest release.
> > > >
> > > > Cheers,
> > > >     Gary
> > > >
> > > --
> > > Dammina Sahabandu
> > > Associate Tech Lead, AdroitLogic
> > > Committer, Apache Software Foundation
> > > AMIE (SL)
> > > Bsc Eng Hons (Moratuwa)
> > > +94716422775
> > >
>

Re: Public Bloodhound is down again

Posted by Gary <ga...@physics.org>.
Hi John,

Just a quick note about the upgrade. I suspect that the backup you are
testing on has not got the proper environment. It seems a little
surprising that the attachments are in the wrong place for a 0.4
installation. I've found the backup that I made for the upgrade work and
passed it on to you. Assuming that is the right stuff, that might make
life easier!

Perhaps we can look at puppet again shortly. It may be that the problems
that I had were smaller than they looked. I would expect it to be fine
to install without puppet, provided we commit to getting the setup properly
puppetised later. Though obviously that is something to clear with INFRA
again.

Cheers,
  Gary


On Sun, 12 Nov 2017, at 10:20 AM, John Chambers wrote:
> Hi all,
> 
> I just wanted to give a quick status update on my progress with getting
> the
> live issue tracker and wiki back online.
> 
> I have managed to use the existing vagrant/salt files to provision a vm
> with the 0.4 version installed.
> I have also got a fix for the issue where apache was unable to serve
> bloodhound. I have managed to use the existing backup of the live site to
> restore the database.
> However because the live site wasn't a standard 0.4 installation, I think
> it was still using trac 0.13dev for some reason, I was unable to just
> restore the ticket attachments.
> So rather than waste time investigating this I just went through and
> reattached the files manually. So I now have a working version of the
> live
> site at version 0.4 with ticket attachments.
> What I don't have in my backup is the wiki attachments. So unless anyone
> else has a backup of them I would have to get INFRA to restart the old
> VM.
> I am reluctant to do this unless I really have to.
> 
> What I have planned next are:
> 
>    - Create new backup from 0.4 using trac-admin hotcopy which will clean
>    and restore the database.
>    - Test that I can consistently rebuild with just vagrant/salt and
>    backup
>    file.
>    - Build new 0.8 vm using vagrant/salt.
>    - Restore from my 0.4 backup.
>    - Upgrade if necessary.
>    - Test everything is working.
>    - Create new backup for version 0.8
>    - Test that I can consistently rebuild this version with just
>    vagrant/salt and backup file.
>    - Commit my changes to the vagrant/salt files to trunk and publish my
>    restore instructions here, also commit my live 0.8 backup file to the
>    private svn repository.
> 
> The next stage after that will be to work with INFRA to create puppet
> scripts to match the vagrant/salt ones we have to provision and setup the
> new live VM.
> Maybe this is work that others could look at whilst I complete the above.
> If someone wants to start this work let me know and I will commit my fix
> for the apache issue and changes to provision the live vm to trunk for
> you
> to make use of.
> 
> Cheers
> 
> John.
> 
> On 7 November 2017 at 18:50, Dammina Sahabandu <dm...@gmail.com>
> wrote:
> 
> > +1 for deploying 0.8 release which is the latest.
> >
> > On Tue, Nov 7, 2017 at 6:11 PM Gary <ga...@physics.org> wrote:
> >
> > > On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
> > > > Hello John
> > > >
> > > > On 11/6/17, John Chambers <ch...@apache.org> wrote:
> > > > > Hi Olemis,
> > > > >
> > > > > The plan that has been discussed before was to migrate to an 0.8
> > > instance.
> > > > > So we can make use of full multi-product support. I still think this
> > is
> > > > > possible.
> > > > > Having multiple instances could cause some confusion in my opinion
> > so I
> > > > > would look to avoid that if possible.
> > > > >
> > > > [...]
> > > >
> > > > There is a reason why b.a.o was not upgraded to 0.8 before . That's
> > > > why I suggested keeping both VMs during the time 0.8 evolves to become
> > > > stable (which I think should be the next step, cmiiw). IMHO having a
> > > > stable multi-product version should be a precondition to shut down 0.4
> > > > instance and release a new version (be it 0.8.x , 0.9 or ...).
> > > >
> > > > I'll also plan for bringing back to life blood-hound.net multi-product
> > > > instance. In the process I might get some inspiration to write a
> > > > Docker file . These days I only tend to deploy services in containers.
> > > > Nonetheless , like I just said , looking forward I'd recommend doing a
> > > > (major?) architectural upgrade in BH code base.
> > > >
> > >
> > > If we can prove that the 0.8 will work well enough, let's just use that.
> > > I don't think an intermediate situation where we are running two or more
> > > is feasible. We should be using the latest version we can and preferably
> > > the latest release.
> > >
> > > Cheers,
> > >     Gary
> > >
> > --
> > Dammina Sahabandu
> > Associate Tech Lead, AdroitLogic
> > Committer, Apache Software Foundation
> > AMIE (SL)
> > Bsc Eng Hons (Moratuwa)
> > +94716422775
> >

Re: Public Bloodhound is down again

Posted by John Chambers <ch...@apache.org>.
Hi all,

I just wanted to give a quick status update on my progress with getting the
live issue tracker and wiki back online.

I have managed to use the existing vagrant/salt files to provision a VM
with the 0.4 version installed.
I also have a fix for the issue where Apache was unable to serve
Bloodhound, and I have used the existing backup of the live site to
restore the database.
However, because the live site wasn't a standard 0.4 installation (I think
it was still using Trac 0.13dev for some reason), I was unable to simply
restore the ticket attachments.
Rather than waste time investigating this, I went through and reattached
the files manually, so I now have a working version of the live site at
version 0.4 with ticket attachments.
What I don't have in my backup are the wiki attachments. Unless anyone
else has a backup of them, I would have to get INFRA to restart the old VM,
which I am reluctant to do unless I really have to.
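
(For reference, that kind of bulk reattachment can be scripted against the
Trac attachment API roughly as below. This is only a sketch: the environment
path and backup layout are placeholder assumptions, not the exact steps used.)

    # Rough sketch (hypothetical paths): re-attach backed-up files to tickets
    # via the Trac attachment API. Assumes files are stored on disk as
    # BACKUP_DIR/<ticket id>/<filename>.
    import os
    from trac.env import open_environment
    from trac.attachment import Attachment

    ENV_PATH = '/var/local/bloodhound/main'   # placeholder environment path
    BACKUP_DIR = '/tmp/ticket-attachments'    # placeholder backup location

    env = open_environment(ENV_PATH)
    for ticket_id in os.listdir(BACKUP_DIR):
        ticket_dir = os.path.join(BACKUP_DIR, ticket_id)
        for filename in os.listdir(ticket_dir):
            path = os.path.join(ticket_dir, filename)
            # Create the attachment record and copy the file into the environment.
            attachment = Attachment(env, 'ticket', ticket_id)
            with open(path, 'rb') as fileobj:
                attachment.insert(filename, fileobj, os.path.getsize(path))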

What I have planned next are:

   - Create new backup from 0.4 using trac-admin hotcopy, which gives a
   clean, consistent copy of the database to restore from (see the sketch
   after this list).
   - Test that I can consistently rebuild with just vagrant/salt and backup
   file.
   - Build new 0.8 vm using vagrant/salt.
   - Restore from my 0.4 backup.
   - Upgrade if necessary.
   - Test everything is working.
   - Create new backup for version 0.8.
   - Test that I can consistently rebuild this version with just
   vagrant/salt and backup file.
   - Commit my changes to the vagrant/salt files to trunk and publish my
   restore instructions here, also commit my live 0.8 backup file to the
   private svn repository.
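
A minimal sketch of the hotcopy and upgrade steps in that list is below;
paths are placeholders, and trac-admin is assumed to be on the PATH inside
each VM.

    # Rough sketch of the backup/upgrade flow; all paths are hypothetical.
    import subprocess

    OLD_ENV = '/var/local/bloodhound/main'        # assumed 0.4 environment path
    HOTCOPY = '/var/backups/bloodhound-hotcopy'   # destination for the consistent copy

    # On the 0.4 VM: take a consistent snapshot (database, attachments, conf).
    subprocess.check_call(['trac-admin', OLD_ENV, 'hotcopy', HOTCOPY])

    # On the 0.8 VM, after restoring the hotcopy as the new environment:
    NEW_ENV = '/var/local/bloodhound/main'        # assumed environment path on the new VM
    subprocess.check_call(['trac-admin', NEW_ENV, 'upgrade'])
    subprocess.check_call(['trac-admin', NEW_ENV, 'wiki', 'upgrade'])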

The next stage after that will be to work with INFRA to create puppet
scripts, matching the vagrant/salt ones we have, to provision and set up the
new live VM.
Maybe this is work that others could look at whilst I complete the above.
If someone wants to start this work, let me know and I will commit my fix
for the Apache issue, along with the changes to provision the live VM, to
trunk for you to make use of.

Cheers

John.

On 7 November 2017 at 18:50, Dammina Sahabandu <dm...@gmail.com>
wrote:

> +1 for deploying 0.8 release which is the latest.
>
> On Tue, Nov 7, 2017 at 6:11 PM Gary <ga...@physics.org> wrote:
>
> > On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
> > > Hello John
> > >
> > > On 11/6/17, John Chambers <ch...@apache.org> wrote:
> > > > Hi Olemis,
> > > >
> > > > The plan that has been discussed before was to migrate to an 0.8
> > instance.
> > > > So we can make use of full multi-product support. I still think this
> is
> > > > possible.
> > > > Having multiple instances could cause some confusion in my opinion
> so I
> > > > would look to avoid that if possible.
> > > >
> > > [...]
> > >
> > > There is a reason why b.a.o was not upgraded to 0.8 before . That's
> > > why I suggested keeping both VMs during the time 0.8 evolves to become
> > > stable (which I think should be the next step, cmiiw). IMHO having a
> > > stable multi-product version should be a precondition to shut down 0.4
> > > instance and release a new version (be it 0.8.x , 0.9 or ...).
> > >
> > > I'll also plan for bringing back to life blood-hound.net multi-product
> > > instance. In the process I might get some inspiration to write a
> > > Docker file . These days I only tend to deploy services in containers.
> > > Nonetheless , like I just said , looking forward I'd recommend doing a
> > > (major?) architectural upgrade in BH code base.
> > >
> >
> > If we can prove that the 0.8 will work well enough, let's just use that.
> > I don't think an intermediate situation where we are running two or more
> > is feasible. We should be using the latest version we can and preferably
> > the latest release.
> >
> > Cheers,
> >     Gary
> >
> --
> Dammina Sahabandu
> Associate Tech Lead, AdroitLogic
> Committer, Apache Software Foundation
> AMIE (SL)
> Bsc Eng Hons (Moratuwa)
> +94716422775
>

Re: Public Bloodhound is down again

Posted by Dammina Sahabandu <dm...@gmail.com>.
+1 for deploying the 0.8 release, which is the latest.

On Tue, Nov 7, 2017 at 6:11 PM Gary <ga...@physics.org> wrote:

> On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
> > Hello John
> >
> > On 11/6/17, John Chambers <ch...@apache.org> wrote:
> > > Hi Olemis,
> > >
> > > The plan that has been discussed before was to migrate to an 0.8
> instance.
> > > So we can make use of full multi-product support. I still think this is
> > > possible.
> > > Having multiple instances could cause some confusion in my opinion so I
> > > would look to avoid that if possible.
> > >
> > [...]
> >
> > There is a reason why b.a.o was not upgraded to 0.8 before . That's
> > why I suggested keeping both VMs during the time 0.8 evolves to become
> > stable (which I think should be the next step, cmiiw). IMHO having a
> > stable multi-product version should be a precondition to shut down 0.4
> > instance and release a new version (be it 0.8.x , 0.9 or ...).
> >
> > I'll also plan for bringing back to life blood-hound.net multi-product
> > instance. In the process I might get some inspiration to write a
> > Docker file . These days I only tend to deploy services in containers.
> > Nonetheless , like I just said , looking forward I'd recommend doing a
> > (major?) architectural upgrade in BH code base.
> >
>
> If we can prove that the 0.8 will work well enough, let's just use that.
> I don't think an intermediate situation where we are running two or more
> is feasible. We should be using the latest version we can and preferably
> the latest release.
>
> Cheers,
>     Gary
>
-- 
Dammina Sahabandu
Associate Tech Lead, AdroitLogic
Committer, Apache Software Foundation
AMIE (SL)
Bsc Eng Hons (Moratuwa)
+94716422775

Re: Public Bloodhound is down again

Posted by Gary <ga...@physics.org>.
On Mon, 6 Nov 2017, at 03:11 PM, Olemis Lang wrote:
> Hello John
> 
> On 11/6/17, John Chambers <ch...@apache.org> wrote:
> > Hi Olemis,
> >
> > The plan that has been discussed before was to migrate to an 0.8 instance.
> > So we can make use of full multi-product support. I still think this is
> > possible.
> > Having multiple instances could cause some confusion in my opinion so I
> > would look to avoid that if possible.
> >
> [...]
> 
> There is a reason why b.a.o was not upgraded to 0.8 before . That's
> why I suggested keeping both VMs during the time 0.8 evolves to become
> stable (which I think should be the next step, cmiiw). IMHO having a
> stable multi-product version should be a precondition to shut down 0.4
> instance and release a new version (be it 0.8.x , 0.9 or ...).
> 
> I'll also plan for bringing back to life blood-hound.net multi-product
> instance. In the process I might get some inspiration to write a
> Docker file . These days I only tend to deploy services in containers.
> Nonetheless , like I just said , looking forward I'd recommend doing a
> (major?) architectural upgrade in BH code base.
> 

If we can prove that 0.8 will work well enough, let's just use that.
I don't think an intermediate situation where we are running two or more
instances is feasible. We should be using the latest version we can, and
preferably the latest release.

Cheers,
    Gary

Re: Public Bloodhound is down again

Posted by Olemis Lang <ol...@gmail.com>.
Hello John

On 11/6/17, John Chambers <ch...@apache.org> wrote:
> Hi Olemis,
>
> The plan that has been discussed before was to migrate to an 0.8 instance.
> So we can make use of full multi-product support. I still think this is
> possible.
> Having multiple instances could cause some confusion in my opinion so I
> would look to avoid that if possible.
>
[...]

There is a reason why b.a.o was not upgraded to 0.8 before. That's
why I suggested keeping both VMs while 0.8 evolves to become
stable (which I think should be the next step, cmiiw). IMHO having a
stable multi-product version should be a precondition to shutting down the
0.4 instance and releasing a new version (be it 0.8.x, 0.9 or ...).

I also plan to bring the blood-hound.net multi-product instance back to
life. In the process I might get some inspiration to write a
Dockerfile. These days I only tend to deploy services in containers.
Nonetheless, like I just said, looking forward I'd recommend doing a
(major?) architectural upgrade of the BH code base.

-- 
Regards,

Olemis - @olemislc

Apache™ Bloodhound contributor
http://issues.apache.org/bloodhound
http://blood-hound.net

Brython committer
http://brython.info
http://github.com/brython-dev/brython

SciPy Latin America - Cuban Ambassador
Chairman of SciPy LA 2017 - http://scipyla.org/conf/2017

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

Featured article:

Re: Public Bloodhound is down again

Posted by John Chambers <ch...@apache.org>.
Hi Olemis,

The plan that has been discussed before was to migrate to a 0.8 instance,
so we can make use of full multi-product support. I still think this is
possible.
Having multiple instances could cause some confusion in my opinion, so I
would look to avoid that if possible.

Cheers

John

On 6 November 2017 at 04:09, Olemis Lang <ol...@gmail.com> wrote:

> On 11/5/17, Greg Stein <gs...@gmail.com> wrote:
> > On Sun, Nov 5, 2017 at 4:05 AM, John Chambers <ch...@apache.org>
> wrote:
> >
> [...]
> >
> >> My plan is to create a working 0.4 version of Bloodhound from this
> backup
> >> and then upgrade it to 0.8.
> >
> [...]
>
> What shall we do after we have a public instance ? Indeed I'd
> recommend to have two instances, one with 0.4 (stable) and another
> with 0.8, (testing) .
>
> p.s. I'm afraid I can't be of much help with the VMs but if there's
> something else I can help with , let me know .
>
> --
> Regards,
>
> Olemis - @olemislc
>
> Apache™ Bloodhound contributor
> http://issues.apache.org/bloodhound
> http://blood-hound.net
>
> Brython committer
> http://brython.info
> http://github.com/brython-dev/brython
>
> SciPy Latin America - Cuban Ambassador
> Chairman of SciPy LA 2017 - http://scipyla.org/conf/2017
>
> Blog ES: http://simelo-es.blogspot.com/
> Blog EN: http://simelo-en.blogspot.com/
>
> Featured article:
>

Re: Public Bloodhound is down again

Posted by Olemis Lang <ol...@gmail.com>.
On 11/5/17, Greg Stein <gs...@gmail.com> wrote:
> On Sun, Nov 5, 2017 at 4:05 AM, John Chambers <ch...@apache.org> wrote:
>
[...]
>
>> My plan is to create a working 0.4 version of Bloodhound from this backup
>> and then upgrade it to 0.8.
>
[...]

What shall we do after we have a public instance? Indeed I'd
recommend having two instances, one with 0.4 (stable) and another
with 0.8 (testing).

p.s. I'm afraid I can't be of much help with the VMs, but if there's
something else I can help with, let me know.

-- 
Regards,

Olemis - @olemislc

Apache™ Bloodhound contributor
http://issues.apache.org/bloodhound
http://blood-hound.net

Brython committer
http://brython.info
http://github.com/brython-dev/brython

SciPy Latin America - Cuban Ambassador
Chairman of SciPy LA 2017 - http://scipyla.org/conf/2017

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

Featured article:

Re: Public Bloodhound is down again

Posted by Greg Stein <gs...@gmail.com>.
On Sun, Nov 5, 2017 at 4:05 AM, John Chambers <ch...@apache.org> wrote:

> Hi Greg/Dammina,
>
> I have been looking at how to get a working version of the Bloodhound
> instance locally from a backup file I made on August 1st 2017. I have asked
>

That should be the latest, as the site wasn't running after that date, so
no further changes could have been made.

Infra should have backups (and I believe the VM is still around, but turned
off).


> Gary for some help with instructions on the correct way to do this. However
> if anyone else has the details then please let me know. I am unsure at the
> moment if the backup I have has everything in it needed to restore the
> issues and wiki to the state it was previously. Also I am not aware of any
> other backups of the system.
>
> My plan is to create a working 0.4 version of Bloodhound from this backup
> and then upgrade it to 0.8. Then take another backup which can then be used
> to create our new instance of the official site on a new VM. Once I can get
> to this point I will publish the instructions here along with the backup
> file for others to try. Then once the process has been verified I will put
> my name forward as the second person to manage the new official VM and work
> with Dammina to get it operational and maintained going forward.
>

Great! With you and Dammina, we can reopen INFRA-13255 and y'all can help
set up and manage a new VM. That can either be the issues.apache.org live
VM, or a demo, as you wish.

Cheers,
-g

Re: Public Bloodhound is down again

Posted by John Chambers <ch...@apache.org>.
Hi Greg/Dammina,

I have been looking at how to get a working version of the Bloodhound
instance locally from a backup file I made on August 1st 2017. I have asked
Gary for some help with instructions on the correct way to do this. However
if anyone else has the details then please let me know. I am unsure at the
moment if the backup I have has everything in it needed to restore the
issues and wiki to the state it was previously. Also I am not aware of any
other backups of the system.

My plan is to create a working 0.4 version of Bloodhound from this backup
and then upgrade it to 0.8. Then take another backup which can then be used
to create our new instance of the official site on a new VM. Once I can get
to this point I will publish the instructions here along with the backup
file for others to try. Then once the process has been verified I will put
my name forward as the second person to manage the new official VM and work
with Dammina to get it operational and maintained going forward.

I will try and report my progress here as often as I can.

Cheers

John.



On 23 October 2017 at 10:55, Dammina Sahabandu <dm...@gmail.com>
wrote:

> Hi Greg,
>
> I’m very glad to here that we have an infra member among us. As I mentioned
> earlier I will take the responsibility of taking care of VMS. However, at
> start I might need some guidance from you. I would like to learn how this
> works rather than just raising an ticket to infra.
>
> So if it is possible I would like to have a Skype call with you where other
> interested contributors are also welcome to join and learn.
>
> Please let me know how to proceed.
>
> Thanks,
> Dammina
>
> On Mon, Oct 23, 2017 at 1:28 PM Greg Stein <gs...@gmail.com> wrote:
>
> > Hey Dammina,
> >
> > Short answer: Infra turned off the service because it wasn't being
> > maintained. More below:
> >
> > On Mon, Oct 23, 2017 at 1:59 AM, Dammina Sahabandu <
> dmsahabandu@gmail.com>
> > wrote:
> > >...
> >
> > > Hi All,
> > >
> > > The public hosted instance of bloodhound is down again for some time
> now.
> > > We need to come up with a sustainable methodology to keep this up and
> > > running. At least we should activate a different health check
> monitoring
> > > service.
> > >
> >
> > Infra has ample monitoring. No worries there. But a couple things got
> > broken on it, and never fixed. There were also a couple VMs running for
> > Bloodhound demos, but those weren't worked on either. ... So Infra just
> > shut it all down.
> >
> >
> > > As I remember last time we have reported this to infra team and get
> > > resolved. Is this the method that we have following all along? Or else
> do
> > > we have any control to the VM that this instance is running.
> > >
> > > I would like to take the responsibility of keeping the instance up and
> > > running in the future, but first I need some guidance from our senior
> > > members.
> > >
> >
> > Speaking for Infra now: we would like an additional person to make a
> > similar commitment, before we spin up a VM for Apache Bloodhound. We've
> > been going back/forth on these VMs for a while now, and have yet to
> > succeed.
> >
> > I'm reachable here (as a PMC Member) or on users@infra or via a Jira
> > ticket. Happy to help, but Infra needs some assurances of assistance for
> > your VM(s).
> >
> > Cheers,
> > Greg Stein
> > Infrastructure Administrator, ASF
> > (and a Bloodhound PMC member)
> >
> --
> Dammina Sahabandu
> Associate Tech Lead, AdroitLogic
> Committer, Apache Software Foundation
> AMIE (SL)
> Bsc Eng Hons (Moratuwa)
> +94716422775
>

Re: Public Bloodhound is down again

Posted by Dammina Sahabandu <dm...@gmail.com>.
Hi Greg,

I’m very glad to hear that we have an infra member among us. As I mentioned
earlier, I will take the responsibility of taking care of the VMs. However, at
the start I might need some guidance from you. I would like to learn how this
works rather than just raising a ticket to infra.

So if it is possible I would like to have a Skype call with you where other
interested contributors are also welcome to join and learn.

Please let me know how to proceed.

Thanks,
Dammina

On Mon, Oct 23, 2017 at 1:28 PM Greg Stein <gs...@gmail.com> wrote:

> Hey Dammina,
>
> Short answer: Infra turned off the service because it wasn't being
> maintained. More below:
>
> On Mon, Oct 23, 2017 at 1:59 AM, Dammina Sahabandu <dm...@gmail.com>
> wrote:
> >...
>
> > Hi All,
> >
> > The public hosted instance of bloodhound is down again for some time now.
> > We need to come up with a sustainable methodology to keep this up and
> > running. At least we should activate a different health check monitoring
> > service.
> >
>
> Infra has ample monitoring. No worries there. But a couple things got
> broken on it, and never fixed. There were also a couple VMs running for
> Bloodhound demos, but those weren't worked on either. ... So Infra just
> shut it all down.
>
>
> > As I remember last time we have reported this to infra team and get
> > resolved. Is this the method that we have following all along? Or else do
> > we have any control to the VM that this instance is running.
> >
> > I would like to take the responsibility of keeping the instance up and
> > running in the future, but first I need some guidance from our senior
> > members.
> >
>
> Speaking for Infra now: we would like an additional person to make a
> similar commitment, before we spin up a VM for Apache Bloodhound. We've
> been going back/forth on these VMs for a while now, and have yet to
> succeed.
>
> I'm reachable here (as a PMC Member) or on users@infra or via a Jira
> ticket. Happy to help, but Infra needs some assurances of assistance for
> your VM(s).
>
> Cheers,
> Greg Stein
> Infrastructure Administrator, ASF
> (and a Bloodhound PMC member)
>
-- 
Dammina Sahabandu
Associate Tech Lead, AdroitLogic
Committer, Apache Software Foundation
AMIE (SL)
Bsc Eng Hons (Moratuwa)
+94716422775

Re: Public Bloodhound is down again

Posted by Greg Stein <gs...@gmail.com>.
Hey Dammina,

Short answer: Infra turned off the service because it wasn't being
maintained. More below:

On Mon, Oct 23, 2017 at 1:59 AM, Dammina Sahabandu <dm...@gmail.com>
wrote:
>...

> Hi All,
>
> The public hosted instance of bloodhound is down again for some time now.
> We need to come up with a sustainable methodology to keep this up and
> running. At least we should activate a different health check monitoring
> service.
>

Infra has ample monitoring. No worries there. But a couple things got
broken on it, and never fixed. There were also a couple VMs running for
Bloodhound demos, but those weren't worked on either. ... So Infra just
shut it all down.


> As I remember last time we have reported this to infra team and get
> resolved. Is this the method that we have following all along? Or else do
> we have any control to the VM that this instance is running.
>
> I would like to take the responsibility of keeping the instance up and
> running in the future, but first I need some guidance from our senior
> members.
>

Speaking for Infra now: we would like an additional person to make a
similar commitment, before we spin up a VM for Apache Bloodhound. We've
been going back/forth on these VMs for a while now, and have yet to succeed.

I'm reachable here (as a PMC Member) or on users@infra or via a Jira
ticket. Happy to help, but Infra needs some assurances of assistance for
your VM(s).

Cheers,
Greg Stein
Infrastructure Administrator, ASF
(and a Bloodhound PMC member)