Posted to dev@cloudstack.apache.org by Andrija Panic <an...@gmail.com> on 2017/09/29 11:18:13 UTC

Advice on converting zone-wide to cluster-wide storage

Hi all,

I was wondering if anyone has experience hacking the DB and converting
zone-wide primary storage to cluster-wide.

We have:
1 x NFS primary storage, zone-wide
1 x CEPH primary storage, zone-wide
1 x SOLIDFIRE primary storage, zone-wide
1 zone, 1 pod, 1 cluster (Advanced zone), and 1 regular NFS secondary
storage (SS not relevant here).

I'm assuming a few DB changes would do it (storage_pool table: scope,
cluster_id, pod_id fields), but I have not yet had time to really play
with it.

Any advice on whether this is OK to do in a production environment would
be very much appreciated.

We plan to expand to many more racks, so we might move from
single-everything (pod/cluster) to multiple PODs/clusters etc, and thus
design Primary Storage accordingly.

Thanks !

-- 

Andrija Panić

Re: Advice on converting zone-wide to cluster-wide storage

Posted by Andrija Panic <an...@gmail.com>.
Hi guys,

Thanks a lot for the good info!
Will take a look at this soon!

Cheers,
Andrija

On Sep 30, 2017 14:26, "Tutkowski, Mike" <Mi...@netapp.com> wrote:

> Good points, Sateesh! Thanks for chiming in. :)
>
> On Sep 30, 2017, at 4:03 AM, Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com> wrote:
>
> Hi Andrija,
> I’ve converted cluster-wide NFS based storage pools to zone-wide in the
> past.
>
> Basically, there are 2 steps for NFS and Ceph:
> 1. DB update
> 2. If there is more than one cluster in that zone, un-manage and then
> manage all the clusters except the original cluster
>
> In addition to Mike’s suggestion, you need to do the following:
> • Set ‘scope’ of the storage pool to ‘ZONE’ in the `cloud`.`storage_pool` table
>
> An example SQL statement is shown below, given that the hypervisor in my
> setup is VMware.
> mysql> update storage_pool set scope='ZONE', cluster_id=NULL, pod_id=NULL,
> hypervisor='VMware' where id=<STORAGE_POOL_ID>;
>
> With the DB update, the changes are reflected in the UI as well.
>
> After the DB update, it is important to un-manage and then manage the
> clusters (except the original cluster to which this storage pool belongs)
> so that all hosts in the other clusters also connect to this storage
> pool, making it a full-fledged zone-wide storage pool.
>
> Hope this helps you!
>
> Regards,
> Sateesh Ch,
> CloudStack Development, Accelerite,
> www.accelerite.com
> @accelerite
>
>
> -----Original Message-----
> From: "Tutkowski, Mike" <Mike.Tutkowski@netapp.com<mailto:
> Mike.Tutkowski@netapp.com>>
> Reply-To: "dev@cloudstack.apache.org<ma...@cloudstack.apache.org>" <
> dev@cloudstack.apache.org<ma...@cloudstack.apache.org>>
> Date: Friday, 29 September 2017 at 6:57 PM
> To: "dev@cloudstack.apache.org<ma...@cloudstack.apache.org>" <
> dev@cloudstack.apache.org<ma...@cloudstack.apache.org>>, "
> users@cloudstack.apache.org<ma...@cloudstack.apache.org>" <
> users@cloudstack.apache.org<ma...@cloudstack.apache.org>>
> Subject: Re: Advice on converting zone-wide to cluster-wide storage
>
>    Hi Andrija,
>
>    I just took a look at the SolidFire logic around adding primary storage
> at the zone level versus the cluster scope.
>
>    I recommend you try this in development prior to production, but it
> looks like you can make the following changes for SolidFire:
>
>    • In cloud.storage_pool, enter the applicable value for pod_id (this
> should be null when being used as zone-wide storage and an integer when
> being used as cluster-scoped storage).
>    • In cloud.storage_pool, enter the applicable value for cluster_id
> (this should be null when being used as zone-wide storage and an integer
> when being used as cluster-scoped storage).
>    • In cloud.storage_pool, change the hypervisor_type from Any to (in
> your case) KVM.
>
>    Talk to you later!
>    Mike
>
>    On 9/29/17, 5:18 AM, "Andrija Panic" <andrija.panic@gmail.com> wrote:
>
>        Hi all,
>
>        I was wondering if anyone has experience hacking the DB and
>        converting zone-wide primary storage to cluster-wide.
>
>        We have:
>        1 x NFS primary storage, zone-wide
>        1 x CEPH primary storage, zone-wide
>        1 x SOLIDFIRE primary storage, zone-wide
>        1 zone, 1 pod, 1 cluster (Advanced zone), and 1 regular NFS
>        secondary storage (SS not relevant here).
>
>        I'm assuming a few DB changes would do it (storage_pool table:
>        scope, cluster_id, pod_id fields), but I have not yet had time to
>        really play with it.
>
>        Any advice on whether this is OK to do in a production environment
>        would be very much appreciated.
>
>        We plan to expand to many more racks, so we might move from
>        single-everything (pod/cluster) to multiple PODs/clusters etc, and
> thus
>        design Primary Storage accordingly.
>
>        Thanks !
>
>        --
>
>        Andrija Panić
>

Re: Advice on converting zone-wide to cluster-wide storage

Posted by "Tutkowski, Mike" <Mi...@netapp.com>.
Good points, Sateesh! Thanks for chiming in. :)

On Sep 30, 2017, at 4:03 AM, Sateesh Chodapuneedi <sa...@accelerite.com> wrote:

Hi Andrija,
I’ve converted cluster-wide NFS based storage pools to zone-wide in the past.

Basically, there are 2 steps for NFS and Ceph:
1. DB update
2. If there is more than one cluster in that zone, un-manage and then manage all the clusters except the original cluster

In addition to Mike’s suggestion, you need to do the following:
• Set ‘scope’ of the storage pool to ‘ZONE’ in the `cloud`.`storage_pool` table

An example SQL statement is shown below, given that the hypervisor in my setup is VMware.
mysql> update storage_pool set scope='ZONE', cluster_id=NULL, pod_id=NULL, hypervisor='VMware' where id=<STORAGE_POOL_ID>;

With the DB update, the changes are reflected in the UI as well.

After the DB update, it is important to un-manage and then manage the clusters (except the original cluster to which this storage pool belongs) so that all hosts in the other clusters also connect to this storage pool, making it a full-fledged zone-wide storage pool.

Hope this helps you!

Regards,
Sateesh Ch,
CloudStack Development, Accelerite,
www.accelerite.com
@accelerite


-----Original Message-----
From: "Tutkowski, Mike" <Mi...@netapp.com>>
Reply-To: "dev@cloudstack.apache.org<ma...@cloudstack.apache.org>" <de...@cloudstack.apache.org>>
Date: Friday, 29 September 2017 at 6:57 PM
To: "dev@cloudstack.apache.org<ma...@cloudstack.apache.org>" <de...@cloudstack.apache.org>>, "users@cloudstack.apache.org<ma...@cloudstack.apache.org>" <us...@cloudstack.apache.org>>
Subject: Re: Advice on converting zone-wide to cluster-wide storage

   Hi Andrija,

   I just took a look at the SolidFire logic around adding primary storage at the zone level versus the cluster scope.

   I recommend you try this in development prior to production, but it looks like you can make the following changes for SolidFire:

   • In cloud.storage_pool, enter the applicable value for pod_id (this should be null when being used as zone-wide storage and an integer when being used as cluster-scoped storage).
   • In cloud.storage_pool, enter the applicable value for cluster_id (this should be null when being used as zone-wide storage and an integer when being used as cluster-scoped storage).
   • In cloud.storage_pool, change the hypervisor_type from Any to (in your case) KVM.

   Talk to you later!
   Mike

   On 9/29/17, 5:18 AM, "Andrija Panic" <an...@gmail.com> wrote:

       Hi all,

        I was wondering if anyone has experience hacking the DB and converting
        zone-wide primary storage to cluster-wide.

        We have:
        1 x NFS primary storage, zone-wide
        1 x CEPH primary storage, zone-wide
        1 x SOLIDFIRE primary storage, zone-wide
        1 zone, 1 pod, 1 cluster (Advanced zone), and 1 regular NFS secondary
        storage (SS not relevant here).

        I'm assuming a few DB changes would do it (storage_pool table: scope,
        cluster_id, pod_id fields), but I have not yet had time to really play
        with it.

        Any advice on whether this is OK to do in a production environment
        would be very much appreciated.

       We plan to expand to many more racks, so we might move from
       single-everything (pod/cluster) to multiple PODs/clusters etc, and thus
       design Primary Storage accordingly.

       Thanks !

       --

       Andrija Panić





Re: Advice on converting zone-wide to cluster-wide storage

Posted by Sateesh Chodapuneedi <sa...@accelerite.com>.
Hi Andrija,
I’ve converted cluster-wide NFS based storage pools to zone-wide in the past.

Basically, there are 2 steps for NFS and Ceph:
1. DB update
2. If there is more than one cluster in that zone, un-manage and then manage all the clusters except the original cluster

In addition to Mike’s suggestion, you need to do the following:
• Set ‘scope’ of the storage pool to ‘ZONE’ in the `cloud`.`storage_pool` table

An example SQL statement is shown below, given that the hypervisor in my setup is VMware.
mysql> update storage_pool set scope='ZONE', cluster_id=NULL, pod_id=NULL, hypervisor='VMware' where id=<STORAGE_POOL_ID>;
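
For the reverse direction asked about in this thread (turning a zone-wide pool back into a cluster-wide one), the same columns are simply set the other way around. A minimal, untested sketch, assuming the target cluster id and pod id are both 1 (look the real ids up in the `cloud`.`cluster` and `cloud`.`host_pod_ref` tables first):
mysql> select id, name, scope, cluster_id, pod_id, hypervisor from storage_pool;
mysql> update storage_pool set scope='CLUSTER', cluster_id=1, pod_id=1 where id=<STORAGE_POOL_ID>;
Whether the hypervisor column should also change for cluster scope is best confirmed by comparing against a pool that was added at cluster scope through the UI.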

With the DB update, the changes are reflected in the UI as well.

After the DB update, it is important to un-manage and then manage the clusters (except the original cluster to which this storage pool belongs) so that all hosts in the other clusters also connect to this storage pool, making it a full-fledged zone-wide storage pool.
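
The un-manage / manage step is the same operation as the Unmanage/Manage cluster action in the UI, i.e. the updateCluster API with its managedstate parameter. A rough sketch from a cloudmonkey shell (the cluster id is a placeholder; verify the parameter on your version with `list apis name=updateCluster` first):
update cluster id=<CLUSTER_UUID> managedstate=Unmanaged
update cluster id=<CLUSTER_UUID> managedstate=Managed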

Hope this helps you!

Regards,
Sateesh Ch,
CloudStack Development, Accelerite,
www.accelerite.com
@accelerite


-----Original Message-----
From: "Tutkowski, Mike" <Mi...@netapp.com>
Reply-To: "dev@cloudstack.apache.org" <de...@cloudstack.apache.org>
Date: Friday, 29 September 2017 at 6:57 PM
To: "dev@cloudstack.apache.org" <de...@cloudstack.apache.org>, "users@cloudstack.apache.org" <us...@cloudstack.apache.org>
Subject: Re: Advice on converting zone-wide to cluster-wide storage

    Hi Andrija,
    
    I just took a look at the SolidFire logic around adding primary storage at the zone level versus the cluster scope.
    
    I recommend you try this in development prior to production, but it looks like you can make the following changes for SolidFire:
    
    • In cloud.storage_pool, enter the applicable value for pod_id (this should be null when being used as zone-wide storage and an integer when being used as cluster-scoped storage).
    • In cloud.storage_pool, enter the applicable value for cluster_id (this should be null when being used as zone-wide storage and an integer when being used as cluster-scoped storage).
    • In cloud.storage_pool, change the hypervisor_type from Any to (in your case) KVM.
    
    Talk to you later!
    Mike
    
    On 9/29/17, 5:18 AM, "Andrija Panic" <an...@gmail.com> wrote:
    
        Hi all,
        
        I was wondering if anyone has experience hacking the DB and converting
        zone-wide primary storage to cluster-wide.

        We have:
        1 x NFS primary storage, zone-wide
        1 x CEPH primary storage, zone-wide
        1 x SOLIDFIRE primary storage, zone-wide
        1 zone, 1 pod, 1 cluster (Advanced zone), and 1 regular NFS secondary
        storage (SS not relevant here).

        I'm assuming a few DB changes would do it (storage_pool table: scope,
        cluster_id, pod_id fields), but I have not yet had time to really play
        with it.

        Any advice on whether this is OK to do in a production environment
        would be very much appreciated.
        
        We plan to expand to many more racks, so we might move from
        single-everything (pod/cluster) to multiple PODs/clusters etc, and thus
        design Primary Storage accordingly.
        
        Thanks !
        
        -- 
        
        Andrija Panić
        
    
    

DISCLAIMER
==========
This e-mail may contain privileged and confidential information which is the property of Accelerite, a Persistent Systems business. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Accelerite, a Persistent Systems business does not accept any liability for virus infected mails.

Re: Advice on converting zone-wide to cluster-wide storage

Posted by Andrija Panic <an...@gmail.com>.
Hi Mike,

thx for that info, that is exactly what I also see as the DB differences, but
I was also wondering if anyone has played with it in production :)

Will wait for some more replies, hopefully!

Cheers
Andrija

On 29 September 2017 at 15:27, Tutkowski, Mike <Mi...@netapp.com>
wrote:

> Hi Andrija,
>
> I just took a look at the SolidFire logic around adding primary storage at
> the zone level versus the cluster scope.
>
> I recommend you try this in development prior to production, but it looks
> like you can make the following changes for SolidFire:
>
> • In cloud.storage_pool, enter the applicable value for pod_id (this
> should be null when being used as zone-wide storage and an integer when
> being used as cluster-scoped storage).
> • In cloud.storage_pool, enter the applicable value for cluster_id (this
> should be null when being used as zone-wide storage and an integer when
> being used as cluster-scoped storage).
> • In cloud.storage_pool, change the hypervisor_type from Any to (in your
> case) KVM.
>
> Talk to you later!
> Mike
>
> On 9/29/17, 5:18 AM, "Andrija Panic" <an...@gmail.com> wrote:
>
>     Hi all,
>
>     I was wondering if anyone has experience hacking the DB and converting
>     zone-wide primary storage to cluster-wide.
>
>     We have:
>     1 x NFS primary storage, zone-wide
>     1 x CEPH primary storage, zone-wide
>     1 x SOLIDFIRE primary storage, zone-wide
>     1 zone, 1 pod, 1 cluster (Advanced zone), and 1 regular NFS secondary
>     storage (SS not relevant here).
>
>     I'm assuming a few DB changes would do it (storage_pool table: scope,
>     cluster_id, pod_id fields), but I have not yet had time to really play
>     with it.
>
>     Any advice on whether this is OK to do in a production environment
>     would be very much appreciated.
>
>     We plan to expand to many more racks, so we might move from
>     single-everything (pod/cluster) to multiple PODs/clusters etc, and thus
>     design Primary Storage accordingly.
>
>     Thanks !
>
>     --
>
>     Andrija Panić
>
>
>


-- 

Andrija Panić

Re: Advice on converting zone-wide to cluster-wide storage

Posted by "Tutkowski, Mike" <Mi...@netapp.com>.
Hi Andrija,

I just took a look at the SolidFire logic around adding primary storage at the zone level versus the cluster scope.

I recommend you try this in development prior to production, but it looks like you can make the following changes for SolidFire:

• In cloud.storage_pool, enter the applicable value for pod_id (this should be null when being used as zone-wide storage and an integer when being used as cluster-scoped storage).
• In cloud.storage_pool, enter the applicable value for cluster_id (this should be null when being used as zone-wide storage and an integer when being used as cluster-scoped storage).
• In cloud.storage_pool, change the hypervisor_type from Any to (in your case) KVM.
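
Putting those three changes together with the scope flip mentioned elsewhere in this thread, a rough, untested SQL sketch for a SolidFire pool on KVM (the ids are placeholders, and the column is simply named hypervisor in the table, per the SQL shown earlier in the thread):
mysql> update storage_pool set scope='CLUSTER', pod_id=<POD_ID>, cluster_id=<CLUSTER_ID>, hypervisor='KVM' where id=<STORAGE_POOL_ID>;
As noted above, try this on a development install before touching production.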

Talk to you later!
Mike

On 9/29/17, 5:18 AM, "Andrija Panic" <an...@gmail.com> wrote:

    Hi all,
    
    I was wondering if anyone has experience hacking the DB and converting
    zone-wide primary storage to cluster-wide.

    We have:
    1 x NFS primary storage, zone-wide
    1 x CEPH primary storage, zone-wide
    1 x SOLIDFIRE primary storage, zone-wide
    1 zone, 1 pod, 1 cluster (Advanced zone), and 1 regular NFS secondary
    storage (SS not relevant here).

    I'm assuming a few DB changes would do it (storage_pool table: scope,
    cluster_id, pod_id fields), but I have not yet had time to really play
    with it.

    Any advice on whether this is OK to do in a production environment
    would be very much appreciated.
    
    We plan to expand to many more racks, so we might move from
    single-everything (pod/cluster) to multiple PODs/clusters etc, and thus
    design Primary Storage accordingly.
    
    Thanks !
    
    -- 
    
    Andrija Panić
    

