Posted to dev@cloudstack.apache.org by Mike Tutkowski <mi...@solidfire.com> on 2013/02/07 20:17:40 UTC

Supporting SolidFire QoS Before 4.2

Hi everyone,

I learned yesterday that Edison's new storage plug-in architecture will
first be released with 4.2.

As such, I began to wonder if there was any way outside of CloudStack that
I could support CloudStack users who want to make use of SolidFire's QoS
feature (controlling IOPS).

A couple of us brainstormed for a bit, and this is what we came up with.
Can anyone confirm whether this could work?

********************

The CloudStack Admin creates a Primary Storage that is of the type
PreSetup.  This is tagged with a name like "SolidFire_High" (for SolidFire
High IOPS).

The CloudStack Admin creates a Compute Offering that refers to the
"SolidFire_High" Storage Tag.

In the CSP's own GUI, a user picks the Compute Offering referred to above.
The CSP's code sees the Storage Tag "SolidFire_High", and that cues it to
invoke a script of mine.

My script is passed the necessary information.  It creates a SolidFire
volume with the necessary attributes and hooks it up to the hypervisor
running in the Cluster.  It then updates my Primary Storage's Tag with the
IQN (or some unique identifier) and updates my Compute Offering's Storage
Tag with this same value.

Once the Primary Storage and the Compute Offering have been updated,
CloudStack can begin the process of creating a VM Instance.

Once the VM Instance is up and running, the CSP's GUI could set the Primary
Storage's and Compute Offering's Storage Tags back to the "SolidFire_High"
value.

********************

The one problem I see with this is that you would have to be sure not to
kick off the process of creating such a Compute Offering until the
previous one had finished, because there is a race condition while the
Storage Tag field points to the IQN (or some other unique identifier).
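For concreteness, here is a minimal sketch of what such a script might build. It assumes the SolidFire Element API's CreateVolume call with per-volume QoS; the CloudStack retagging calls and their parameter names are purely illustrative and would need to be verified against the management server's API.

```python
# Hypothetical sketch of the provisioning script described above.
# The SolidFire CreateVolume JSON-RPC payload and the CloudStack
# retagging calls are modeled as pure payload builders; a real script
# would POST these over HTTPS to the SolidFire cluster and the
# CloudStack management server.

def solidfire_create_volume_request(name, account_id, size_bytes,
                                    min_iops, max_iops, burst_iops):
    """Build a SolidFire Element API CreateVolume call with per-volume QoS."""
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_bytes,
            "enable512e": True,
            "qos": {
                "minIOPS": min_iops,
                "maxIOPS": max_iops,
                "burstIOPS": burst_iops,
            },
        },
        "id": 1,
    }


def retag_requests(pool_id, offering_id, iqn):
    """CloudStack calls (command and parameter names illustrative) that
    point the Primary Storage tag and the Compute Offering's storage tag
    at the new volume's IQN, so CloudStack places the VM on that volume."""
    return [
        {"command": "updateStoragePool", "id": pool_id, "tags": iqn},
        {"command": "updateServiceOffering", "id": offering_id,
         "storagetags": iqn},
    ]
```

After the VM is deployed, the same two retagging calls would be issued again to restore both tags to "SolidFire_High"; serializing deployments through that window is what avoids the race condition above.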

Thanks!

-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud™
<http://solidfire.com/solution/overview/?video=play>

Re: Supporting SolidFire QoS Before 4.2

Posted by Mike Tutkowski <mi...@solidfire.com>.
We are essentially working on both.  :)

For 4.2, I intend on having a SolidFire plug-in available.

But in the meantime, we don't want to be without a solution for customers
today, and for those tomorrow who won't be on 4.2 right away.  We want to
have *some* solution for them...even if it's not ideal at first.


On Thu, Feb 7, 2013 at 3:43 PM, Ahmad Emneina <ae...@gmail.com> wrote:

> My $0.02 is $solidfiredev should focus on the end goal of implementing
> this awesome feature, and not some interim solution that won't be supported
> going forward.
>
>
> On Thu, Feb 7, 2013 at 2:06 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> So, yeah, at this point I'm playing around with ideas.  I'm thinking my
>> initial flow, while not perfect by any means (see the potential race
>> condition), might work sufficiently.
>>
>> I definitely would like to tap people in the CloudStack community for a
>> sanity check, though.  :)
>>
>>
>> On Thu, Feb 7, 2013 at 2:30 PM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> Exactly...and Edison's new plug-in architecture will enable us to create
>>> a volume per VM Instance or Data Disk, but that won't be out until 4.2.
>>>
>>> In the meanwhile, I'm kind of brainstorming to see if I can write a
>>> script to enable this functionality before 4.2 (and for customers who won't
>>> upgrade to 4.2 right away when it comes out).
>>>
>>>
>>> On Thu, Feb 7, 2013 at 2:27 PM, Ahmad Emneina <ae...@gmail.com> wrote:
>>>
>>>> Got it.  The IOPS are shared amongst guest VM disks that reside on a
>>>> volume.  So is the idea to create a volume per VM?
>>>>
>>>>
>>>> On Thu, Feb 7, 2013 at 1:22 PM, Mike Tutkowski <
>>>> mike.tutkowski@solidfire.com> wrote:
>>>>
>>>>> Also, when I say "volume", that is equal to "LUN" when talking about
>>>>> SolidFire.
>>>>>
>>>>>
>>>>> On Thu, Feb 7, 2013 at 2:21 PM, Mike Tutkowski <
>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>
>>>>>> Hi Ahmad,
>>>>>>
>>>>>> Thanks for the comments.
>>>>>>
>>>>>> Regarding your first idea about creating multiple targets on the
>>>>>> SolidFire system, I'd be concerned that I wouldn't know how many to
>>>>>> create.  Would I make 100?  200?  Not sure.  Since SolidFire can
>>>>>> control IOPS on a per-volume basis, in our ideal world each VM
>>>>>> Instance or Data Disk would be serviced by a single SolidFire volume.
>>>>>>  If I ever have more than one VM Instance, for example, running on
>>>>>> the same SolidFire volume, I can't guarantee any individual VM its
>>>>>> IOPS (as its IOPS may be absorbed unequally by the other VM(s)
>>>>>> running on the same volume).
>>>>>>
>>>>>> Does that make sense?  If you see some flaw in my logic, please let
>>>>>> me know.  :)
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>>
>>>>>> On Thu, Feb 7, 2013 at 1:02 PM, Ahmad Emneina <ae...@gmail.com> wrote:
>>>>>>
>>>>>>> Hey Mike,
>>>>>>>
>>>>>>> The cleanest approach, as a user, would be:
>>>>>>> I create multiple targets on the SolidFire, each with a different
>>>>>>> IOPS setting (assuming you can control max IOPS for volumes this
>>>>>>> way).  I'd mount each to the hypervisor and create a specific
>>>>>>> service offering for each.  That way you're not intercepting calls
>>>>>>> with a script and modifying tags/offerings on the fly, and you
>>>>>>> avoid said race condition.
>>>>>>>
>>>>>>> Second cleanest approach, IMO, piggybacking off your previous
>>>>>>> method: create 3 different service offerings (fast, medium, slow).
>>>>>>>  That way, when deploy VM is called, your volume-create script can
>>>>>>> intercept the call and will know the QoS based on the offering.
>>>>>>>
>>>>>>> Why I'd do this: say you change the service offering to fast while
>>>>>>> provisioning a VM, and a user with a disk that's meant to be slow
>>>>>>> reboots her VM... she'll be upgraded to fast, all because you're
>>>>>>> modifying the one and only offering.
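The three-offering idea quoted above could be sketched as a static mapping from offering name to SolidFire QoS, so the volume-create script derives QoS from the chosen offering alone and no tags ever change on the fly. The offering names and IOPS figures below are made up for illustration.

```python
# Illustrative static mapping: pre-created compute offerings -> the
# per-volume QoS the script would request from SolidFire at deploy time.
# Names and IOPS values are hypothetical, not actual recommendations.

QOS_BY_OFFERING = {
    "SolidFire_Fast":   {"minIOPS": 5000, "maxIOPS": 15000, "burstIOPS": 15000},
    "SolidFire_Medium": {"minIOPS": 1000, "maxIOPS": 5000,  "burstIOPS": 8000},
    "SolidFire_Slow":   {"minIOPS": 100,  "maxIOPS": 1000,  "burstIOPS": 2000},
}


def qos_for_offering(offering_name):
    """Look up the QoS to apply to the new volume for a deploy-VM call."""
    try:
        return QOS_BY_OFFERING[offering_name]
    except KeyError:
        raise ValueError("no QoS mapping for offering %r" % offering_name)
```

Because each offering is fixed up front, a user rebooting a "slow" VM can never be accidentally upgraded to "fast": nothing about the offering is ever mutated during provisioning.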



Re: Supporting SolidFire QoS Before 4.2

Posted by Ahmad Emneina <ae...@gmail.com>.
My $0.02 is $solidfiredev should focus on the end goal of implementing this
awesome feature, and not some interim solution that won't be supported going
forward.



Re: Supporting SolidFire QoS Before 4.2

Posted by Mike Tutkowski <mi...@solidfire.com>.
So, yeah, at this point I'm playing around with ideas.  I'm thinking my
initial flow, while not perfect by any means (see the potential race
condition), might work sufficiently.

I definitely would like to tap people in the CloudStack community for a
sanity check, though.  :)



Re: Supporting SolidFire QoS Before 4.2

Posted by Mike Tutkowski <mi...@solidfire.com>.
Exactly...and Edison's new plug-in architecture will enable us to create a
volume per VM Instance or Data Disk, but that won't be out until 4.2.

In the meantime, I'm kind of brainstorming to see if I can write a script
to enable this functionality before 4.2 (and for customers who won't
upgrade to 4.2 right away when it comes out).



Re: Supporting SolidFire QoS Before 4.2

Posted by Ahmad Emneina <ae...@gmail.com>.
Got it.  The IOPS are shared amongst guest VM disks that reside on a volume.
So is the idea to create a volume per VM?



Re: Supporting SolidFire QoS Before 4.2

Posted by Mike Tutkowski <mi...@solidfire.com>.
Also, when I say "volume", that is equivalent to "LUN" in SolidFire
terminology.



Re: Supporting SolidFire QoS Before 4.2

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hi Ahmad,

Thanks for the comments.

Regarding your first idea about creating multiple targets on the
SolidFire system, I'd be concerned that I wouldn't know how many to create.
Would I make 100... 200...?  Not sure.  Since SolidFire can control IOPS on
a per-volume basis, in our ideal world each VM Instance or Data Disk would
be serviced by a single SolidFire volume.  If I ever have more than one VM
Instance, for example, running on the same SolidFire volume, I can't
guarantee any individual VM its IOPS (as its IOPS may be absorbed unequally
by the other VM(s) running on the same volume).

Does that make sense?  If you see some flaw in my logic, please let me
know.  :)

Thanks!



Re: Supporting SolidFire QoS Before 4.2

Posted by Ahmad Emneina <ae...@gmail.com>.
Hey Mike,

The cleanest approach, as a user, would be:
I create multiple targets on the SolidFire array, each with a different IOPS
setting (assuming you can control max IOPS for volumes this way). I'd mount
each to the hypervisor and create a specific service offering for each.
That way you're not intercepting calls with a script and modifying
tags/offerings on the fly, and you avoid said race condition.

Second cleanest approach, IMO:
Piggyback off your previous method: create three different service
offerings (fast, medium, slow).  That way, when deploy VM is called,
your volume-create script can intercept the call and will know the QoS
based on the offering.

The reason I'd do it this way: say you change the service offering to fast
while provisioning a VM, and a user with a disk that's meant to be slow
reboots her VM... she'll be upgraded to fast, all because you're modifying
the one and only offering.
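Ahmad's fixed-tier approach boils down to a static map from tier name to QoS band, with one service offering per tier. A minimal sketch in Python; the tier names and IOPS numbers are invented for illustration, and the `tags` parameter of `createServiceOffering` should be verified against your CloudStack release:

```python
# Illustrative only: tier names and IOPS bands below are made up, and
# whether createServiceOffering's 'tags' parameter behaves this way
# should be checked against your CloudStack version.
TIERS = {
    "SolidFire_Fast":   {"minIOPS": 5000, "maxIOPS": 15000},
    "SolidFire_Medium": {"minIOPS": 1000, "maxIOPS": 5000},
    "SolidFire_Slow":   {"minIOPS": 200,  "maxIOPS": 1000},
}

def offering_params(tier):
    """Parameters for one CloudStack createServiceOffering call; the
    storage tag is what the deploy-time script keys off to pick QoS."""
    qos = TIERS[tier]
    return {
        "command": "createServiceOffering",
        "name": tier,
        "displaytext": "%d-%d IOPS" % (qos["minIOPS"], qos["maxIOPS"]),
        "cpunumber": 1, "cpuspeed": 1000, "memory": 1024,
        "tags": tier,  # storage tag binds the offering to one QoS band
    }
```

One such call per tier gives you a fixed menu of performance classes with no tag-swapping at deploy time.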



RE: Supporting SolidFire QoS Before 4.2

Posted by Edison Su <Ed...@citrix.com>.
So it ends up as one LUN per primary storage per volume? And do you need to change the UI code in order to invoke your program? If it's only for data disks, your solution should work, but it may have problems with root disks for system VMs, as those are not created from the UI.

On Thu, Feb 7, 2013 at 4:06 PM, Mike Tutkowski <mi...@solidfire.com> wrote:
Hi Edison,

I'm not sure I entirely follow.  Creating the volume (LUN) on our SAN is straightforward enough; we'd get back an IQN.  Are you saying that at this point I'd talk to, say, XenServer and have it create a storage repository that makes use of the iSCSI target (my IQN)?  If so, once that is done, I was thinking I'd update a known (PreSetup-based) Primary Storage's Storage Tag.  After that, I'd update the single Compute Offering that references that Primary Storage by changing its Storage Tag to be equal to that of my Primary Storage.  Once this is done, the VM could be spun up from the Compute Offering (and underlying Primary Storage).  When the VM Instance is up and running, the Compute Offering's and Primary Storage's Storage Tags could be changed back to some expected value.

I don't particularly like this solution in the sense that you can't kick off such a new Compute Offering until the one before it is done, but it would only serve as a temporary measure to help customers leverage our QoS feature.

What do you think?  Does this sound doable with the CS API?

On Thu, Feb 7, 2013 at 3:55 PM, Edison Su <Ed...@citrix.com> wrote:
Another solution would be to totally bypass CloudStack and the hypervisor: create the LUN in your storage box, then give the IQN to the guest VM.  Inside the guest VM, there could be a script that grabs the IQN from somewhere (it could be in user data), scans iSCSI, mounts the device, etc.  That would work with all the hypervisors.
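Edison's in-guest variant could look roughly like this: the guest reads the IQN (e.g. from CloudStack user data) and attaches the LUN itself. A hedged sketch; the portal address and IQN are placeholders, and the `iscsiadm` flags are standard open-iscsi usage:

```python
import shlex

# Sketch of the in-guest attach Edison describes. The IQN would come
# from user data in a real deployment; here it is just a parameter.
# Portal address and IQN below are placeholders.

def iscsi_attach_commands(portal, iqn):
    """Return the open-iscsi invocations (as argv lists) that discover
    the portal and then log in to the given target."""
    return [
        shlex.split("iscsiadm -m discovery -t sendtargets -p " + portal),
        shlex.split("iscsiadm -m node -T %s -p %s --login" % (iqn, portal)),
    ]

cmds = iscsi_attach_commands("192.0.2.10:3260",
                             "iqn.2010-01.com.solidfire:demo.vol1")
```

A boot-time script would run these via subprocess and then mount the resulting block device.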


Re: Supporting SolidFire QoS Before 4.2

Posted by Mike Tutkowski <mi...@solidfire.com>.
The logic might be something like this:

User picks a Compute Offering with storage tag "SolidFire".

The CSP's GUI sees that the Storage Tag is "SolidFire", and this cues it to
invoke my program and pass in the necessary data.

My program creates a SolidFire volume and talks to the hypervisor to make
it aware of this volume (for XenServer, this would entail creating a
storage repository that is based on my iSCSI volume).

My program could make a CS API call to update a known Primary Storage
(maybe it has a known name like "SolidFirePrimaryStorage").  This program
would update its Storage Tag to, say, the IQN of the volume that was just
created.  This program would then go and update the Compute Offering that
was selected to have its Storage Tag equal to that IQN.

Once these changes are in place, the CSP's GUI can initiate the process of
spinning up a VM based on our Compute Offering and Primary Storage.

When that VM is up and running, the CSP's GUI can restore the Storage Tag
"SolidFire" to the Primary Storage and Compute Offerings and we're in our
initial state again and ready to service another request (we cannot service
another request until the one before it is done).

Comments?

I know it's not ideal, but we're mainly looking for something workable at
this point.  :)
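The flow above has two halves: a SolidFire CreateVolume call and a pair of CloudStack tag updates. A hedged sketch of the payloads each half would send; the CreateVolume fields come from the SolidFire Element API, while the CloudStack `tags` parameters (especially whether updateServiceOffering can change storage tags at all) are assumptions to verify against your release:

```python
import json

def create_volume_payload(name, account_id, size_gb,
                          min_iops, max_iops, burst_iops):
    """JSON-RPC body for SolidFire's CreateVolume (Element API)."""
    return {
        "method": "CreateVolume",
        "params": {
            "name": name,
            "accountID": account_id,
            "totalSize": size_gb * 1024 ** 3,  # Element API takes bytes
            "enable512e": True,
            "qos": {"minIOPS": min_iops,
                    "maxIOPS": max_iops,
                    "burstIOPS": burst_iops},
        },
        "id": 1,
    }

def retag_calls(pool_id, offering_id, tag):
    """The two CloudStack calls that swap the Storage Tag to the new
    LUN's IQN (and later back to "SolidFire").  Assumption: whether
    updateServiceOffering accepts a 'tags' parameter varies by version."""
    return [
        {"command": "updateStoragePool", "id": pool_id, "tags": tag},
        {"command": "updateServiceOffering", "id": offering_id, "tags": tag},
    ]

payload = create_volume_payload("cs-vol-42", 7, 100, 1000, 3000, 5000)
body = json.dumps(payload)
```

The race Mike mentions is visible here: between the retag and the restore, the offering points at one specific IQN, so only one provisioning request can be in flight.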



Re: Supporting SolidFire QoS Before 4.2

Posted by Mike Tutkowski <mi...@solidfire.com>.
Interesting idea, Marcus.

The hard part is guaranteeing CSPs the IOPS they selected... especially
if they've written SLAs for their customers around that performance.

It all comes down to the "Noisy Neighbor" problem:  If you have multiple
VMs running on the same volume, any one of them can - at one point or
another - utilize a disproportionate number of IOPS and starve its
neighbors.


On Thu, Feb 7, 2013 at 4:34 PM, Marcus Sorensen <sh...@gmail.com> wrote:

> Hmm, you could do something like create a 'high IOPS' storage pool,
> and the VM host mounts that and creates multiple VM volumes (VHDs) on
> it, just like stock, but then you have a script that polls
> CloudStack occasionally, asks how many volumes are on that pool, and
> adjusts your LUN's IOPS to be volumes x IOPS. So if you register a
> high-perf 10TB LUN to provide Z IOPS per volume as a primary storage,
> then create 3 volumes on it, your script sets the IOPS on that LUN to
> Z x 3. It's still limited to offering certain classes of
> performance, instead of a large variety (one class per primary pool),
> but it's another avenue to consider.
>
> On Thu, Feb 7, 2013 at 4:28 PM, Marcus Sorensen <sh...@gmail.com>
> wrote:
> > He's saying that the VM can connect via iSCSI directly to the
> > SolidFire device, rather than the host. You'd lose more performance
> > that way and there's more overhead, but it would be a way to give
> > individual VMs their own SolidFire LUN.
> >
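Marcus's polling idea reduces to a tiny control loop: count the volumes CloudStack reports on the pool (e.g. via listVolumes), then resize the shared LUN's QoS to Z x count. A sketch; ModifyVolume and its qos block are from the SolidFire Element API, while the polling glue is assumed:

```python
def lun_iops(volume_count, per_volume_iops):
    """Total IOPS the shared LUN should provide: Z IOPS per volume,
    never dropping below one volume's worth."""
    return per_volume_iops * max(volume_count, 1)

def modify_volume_payload(volume_id, total_iops):
    """JSON-RPC body for SolidFire's ModifyVolume, resizing QoS as the
    number of CloudStack volumes on the pool changes."""
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {"minIOPS": total_iops, "maxIOPS": total_iops},
        },
        "id": 1,
    }
```

This keeps the aggregate guarantee intact, though as Mike notes it cannot stop one VM on the shared LUN from starving its neighbors within that aggregate.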
> >>>
> >>> o: 303.746.7302
> >>>
> >>> Advancing the way the world uses the cloud™
> >>
> >>
> >>
> >>
> >> --
> >> Mike Tutkowski
> >> Senior CloudStack Developer, SolidFire Inc.
> >> e: mike.tutkowski@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the cloud™
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Supporting SolidFire QoS Before 4.2

Posted by Marcus Sorensen <sh...@gmail.com>.
Hmm, you could do something like create a 'high iops' storage pool,
and the VM host mounts that and creates multiple vm volumes (vhd) on
that, just like stock, but then you have a script that polls
cloudstack occasionally, asks how many volumes are on that pool, and
adjusts your LUN's iops to be volumes x iops. So if you register a
high perf 10TB lun to provide Z iops per volume as a primary storage,
then create 3 volumes on it, your script sets the iops on that lun to
Z x 3. It's still limited by only offering certain classes of
performance, instead of a large variety (one class per primary pool),
but it's another avenue to consider.
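Marcus's polling idea might be sketched like this; the `cloudstack` and `solidfire` clients and their method names are hypothetical stand-ins for a listVolumes query and a SolidFire QoS call, not real API bindings:

```python
# Sketch of a poller that scales a LUN's IOPS limit with the number of
# CloudStack volumes placed on it: target = volumes_on_pool * iops_per_volume.

IOPS_PER_VOLUME = 500  # hypothetical per-volume quota ("Z" in the example)

def target_lun_iops(volume_count, iops_per_volume=IOPS_PER_VOLUME):
    """IOPS the LUN should be set to for the given number of volumes.
    Never drops below one volume's worth so an empty pool stays usable."""
    return max(volume_count, 1) * iops_per_volume

def reconcile(pool_id, cloudstack, solidfire):
    # cloudstack.count_volumes() would wrap a listVolumes API call filtered
    # by storage pool; solidfire.set_lun_iops() would wrap a SolidFire
    # QoS-modification call.  Both clients are hypothetical stand-ins.
    count = cloudstack.count_volumes(pool_id)
    solidfire.set_lun_iops(pool_id, target_lun_iops(count))

if __name__ == "__main__":
    print(target_lun_iops(3))  # 3 volumes at Z=500 -> LUN set to 1500 IOPS
```

A cron job or daemon would call reconcile() per pool on some interval; the lag between volume creation and the QoS adjustment is the price of staying outside CloudStack.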

On Thu, Feb 7, 2013 at 4:28 PM, Marcus Sorensen <sh...@gmail.com> wrote:
> He's saying that the VM can connect via iscsi directly to the
> solidfire device, rather than the host. You'd lose more performance
> that way and there's more overhead, but it would be a way to give
> individual VMs their own solidfire LUN.

Re: Supporting SolidFire QoS Before 4.2

Posted by Marcus Sorensen <sh...@gmail.com>.
On Thu, Feb 7, 2013 at 5:49 PM, Marcus Sorensen <sh...@gmail.com> wrote:
> With iscsi to the host you have hardware nic and iscsi initiator software
> running on hardware CPU. Then disk is attached to VM.
>
> In VM you have paravirtualized nic (overhead) and iscsi initiator on virtual
> CPU (overhead). Virtual NICs are pretty fast these days but eat lots of CPU
> in doing so. I can easily eat a core on the host doing 3-4 gbit steady to a
> VM. The hardware nic optimizations designed to get around this are still
> unusable for cloud because they tie the VM to the hardware and disable live
> migration (SR-IOV).
>
> I see what you're saying, that the overhead of running the initiator on a
> vcpu and over a vnic is less than attaching a local disk to a VM, but from
> what I've seen that hasn't been the case.
>
> Then there's the idea of wanting the VM on a 1gbit pub connection, but maybe
> your storage is on a 10g private net. That's common, but it could be
> engineered around I suppose.

I should qualify this by saying that I haven't perf tested running
iscsi inside a Xen VM, only KVM and VMware. I can't really speak about
dom0 emulating devices, but my impression was that dom0 had direct
hardware access, and you wouldn't get hit by a double whammy of
sharing an emulated dom0 device into another dom.


RE: Supporting SolidFire QoS Before 4.2

Posted by Marcus Sorensen <sh...@gmail.com>.
On Feb 7, 2013 5:20 PM, "Alex Huang" <Al...@citrix.com> wrote:
> Marcus,
>
> I'm interested in your comment here.  Why do you think a VM having direct
> iSCSI access actually loses performance?  I would think it would actually be
> faster because there's nothing translating the raw LUN into a raw disk.

With iscsi to the host you have hardware nic and iscsi initiator software
running on hardware CPU. Then disk is attached to VM.

In VM you have paravirtualized nic (overhead) and iscsi initiator on
virtual CPU (overhead). Virtual NICs are pretty fast these days but eat
lots of CPU in doing so. I can easily eat a core on the host doing 3-4 gbit
steady to a VM. The hardware nic optimizations designed to get around this
are still unusable for cloud because they tie the VM to the hardware and
disable live migration (SR-IOV).

I see what you're saying, that the overhead of running the initiator on a
vcpu and over a vnic is less than attaching a local disk to a VM, but from
what I've seen that hasn't been the case.

Then there's the idea of wanting the VM on a 1gbit pub connection, but
maybe your storage is on a 10g private net. That's common, but it could be
engineered around I suppose.


RE: Supporting SolidFire QoS Before 4.2

Posted by Anthony Xu <Xu...@citrix.com>.
I think direct iSCSI access from the guest VM is probably faster than a
virtual disk in XenServer.

In XenServer, the virtual disk is emulated by a dom0 application, while the
virtual NIC is emulated by the dom0 kernel, so direct iSCSI access basically
moves the work from virtual disk emulation to virtual NIC emulation.


Anthony

> Marcus,
> 
> I'm interested in your comment here.  Why do you think a VM having direct
> iSCSI access actually loses performance?  I would think it would actually be
> faster because there's nothing translating the raw LUN into a raw disk.

RE: Supporting SolidFire QoS Before 4.2

Posted by Alex Huang <Al...@citrix.com>.

> He's saying that the VM can connect via iscsi directly to the
> solidfire device, rather than the host. You'd lose more performance
> that way and there's more overhead, but it would be a way to give
> individual VMs their own solidfire LUN.
> 
Marcus,

I'm interested in your comment here.  Why do you think a VM having direct iSCSI access actually loses performance?  I would think it would actually be faster because there's nothing translating the raw LUN into a raw disk.

--Alex

Re: Supporting SolidFire QoS Before 4.2

Posted by Marcus Sorensen <sh...@gmail.com>.
He's saying that the VM can connect via iscsi directly to the
solidfire device, rather than the host. You'd lose more performance
that way and there's more overhead, but it would be a way to give
individual VMs their own solidfire LUN.

On Thu, Feb 7, 2013 at 4:06 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> Hi Edison,
>
> I'm not sure I entirely follow.  Creating the volume (LUN) on our SAN is
> straightforward enough.  We'd get back an IQN.  Are you saying at this point
> I'd talk to, say, XenServer and have it create a storage repository that
> makes use of the iSCSI target (my IQN)?  If so, once that is done, I was
> thinking I'd update a known (PreSetup-based) Primary Storage's Storage Tag.
> After, I'd update the single Compute Offering that references that Primary
> Storage by changing its Storage Tag to be equal to that of my Primary
> Storage.  Once this is done, the VM could be spun up from the Compute
> Offering (and underlying Primary Storage).  When the VM Instance is up and
> running, the Compute Offering's and Primary Storage's Storage Tag could be
> changed back to some expected value.
>
> I don't particularly like this solution in the sense that you can't kick off
> such a new Compute Offering until the one before it was done, but it would
> only serve as a temporary measure to help customers leverage our QoS
> feature.
>
> What do you think?  Does this sound doable with the CS API?

Re: Supporting SolidFire QoS Before 4.2

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hi Edison,

I'm not sure I entirely follow.  Creating the volume (LUN) on our SAN is
straightforward enough.  We'd get back an IQN.  Are you saying at this
point I'd talk to, say, XenServer and have it create a storage repository
that makes use of the iSCSI target (my IQN)?  If so, once that is done, I
was thinking I'd update a known (PreSetup-based) Primary Storage's Storage
Tag.  After, I'd update the single Compute Offering that references that
Primary Storage by changing its Storage Tag to be equal to that of my
Primary Storage.  Once this is done, the VM could be spun up from the
Compute Offering (and underlying Primary Storage).  When the VM Instance is
up and running, the Compute Offering's and Primary Storage's Storage Tag
could be changed back to some expected value.

I don't particularly like this solution in the sense that you can't kick
off a new Compute Offering of this kind until the previous one has
finished, but it would only serve as a temporary measure to help customers
leverage our QoS feature.

What do you think?  Does this sound doable with the CS API?


On Thu, Feb 7, 2013 at 3:55 PM, Edison Su <Ed...@citrix.com> wrote:

> Another solution would be, totally bypass cloudstack and hypervisor,
> create LUN in your storage box, then give the IQN to guest VM. Inside guest
> VM, there may have a script, which can grab that IQN from somewhere(can be
> in user data), then scan iSCSI, mount the device, etc. It can work with all
> the hypervisors.



-- 
Mike Tutkowski
Senior CloudStack Developer, SolidFire Inc.
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud <http://solidfire.com/solution/overview/?video=play>™

Re: Supporting SolidFire QoS Before 4.2

Posted by Marcus Sorensen <sh...@gmail.com>.
The problem with the interim idea is that it will make it
difficult/tricky to plan your upgrade when you move to a supported
method, and you'll have to have them install something custom of yours
on top of or alongside 4.1, since it won't be in the release. In the
long run it's probably not great to do a switch-up like that.

What it sounds like you want for your interim solution is to only have
one volume/vm root on each primary storage pool. The only way I can
think of to do this in an automated fashion is with a management
utility outside of cloudstack (or perhaps a patched cloudstack UI), so
when a user wants to create a volume or vm it looks like this:

1. Send createVolume to your web utility with size and QoS settings.
2. Your web utility talks to your SolidFire device and provisions the LUN.
3. Your web utility talks to CloudStack and creates a unique disk offering
   with a unique tag, e.g. 12345.
4. Your web utility talks to CloudStack and registers a primary storage pool
   using the newly provisioned LUN, with the same unique tag.
5. Your web utility sends CloudStack createVolume with the newly created
   disk offering.

So instead of the normal 'set up san target/lun' -> 'create primary
storage' -> 'create disk offering' -> 'create volume from offering',
it looks like 'create volume (to your tool)' -> 'create san
target/lun' -> 'create primary storage' -> 'create disk offering' ->
'create volume from offering(in cloudstack)'
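As a sketch, the utility's side of that flow might look like the following; `createStoragePool`, `createDiskOffering`, and `createVolume` are real CloudStack API commands, but the parameter values and the `solidfire.provisionLun` step are illustrative assumptions:

```python
# Sketch of the "create volume" flow through an external web utility:
# provision the LUN, then drive CloudStack with a per-volume unique tag so
# the new disk offering can only land on the matching one-LUN storage pool.

import uuid

def plan_create_volume(size_gb, min_iops, max_iops, zone_id, cluster_id):
    """Return the ordered list of (command, params) calls the utility would
    issue.  Command names follow the CloudStack API; parameter values and
    the solidfire.provisionLun step are illustrative assumptions."""
    tag = "sf-" + uuid.uuid4().hex[:8]  # unique tag tying offering to pool
    return [
        ("solidfire.provisionLun",            # hypothetical SAN-side call
         {"sizeGb": size_gb, "minIops": min_iops, "maxIops": max_iops}),
        ("createDiskOffering",
         {"name": "sf-offering-" + tag, "disksize": size_gb, "tags": tag}),
        ("createStoragePool",
         {"zoneid": zone_id, "clusterid": cluster_id,
          "url": "presetup://localhost/" + tag, "tags": tag}),
        ("createVolume",
         {"name": "sf-volume-" + tag, "zoneid": zone_id,
          "diskofferingid": "<id returned by createDiskOffering>"}),
    ]

calls = plan_create_volume(100, 1000, 2000, "zone-1", "cluster-1")
print([name for name, _ in calls])
```

Because the tag is unique per request, two users creating volumes at the same time never collide, which is the advantage over swapping tags on a single shared offering.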

What might be better is to just utilize what's already there in the
meantime, and say "hey, you can utilize our storage to create high
performance pools and low performance pools, and the customer can
choose which one his vm is a part of, but per-vm iops won't be
available until July."

On Thu, Feb 7, 2013 at 3:55 PM, Edison Su <Ed...@citrix.com> wrote:
> Another solution would be, totally bypass cloudstack and hypervisor, create
> LUN in your storage box, then give the IQN to guest VM. Inside guest VM,
> there may have a script, which can grab that IQN from somewhere(can be in
> user data), then scan iSCSI, mount the device, etc. It can work with all the
> hypervisors.

RE: Supporting SolidFire QoS Before 4.2

Posted by Edison Su <Ed...@citrix.com>.
Another solution would be to totally bypass CloudStack and the hypervisor: create the LUN in your storage box, then give the IQN to the guest VM. Inside the guest VM, there may be a script which can grab that IQN from somewhere (it can be in user data), then scan iSCSI, mount the device, etc. It can work with all the hypervisors.
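A minimal sketch of the guest-side script Edison describes, assuming the IQN is planted in the user data as an `iqn=` line; the user-data URL, portal address, and device path are illustrative assumptions, and `iscsiadm` must exist in the guest:

```python
# Guest-side sketch: pull the IQN out of user data, log in to the target,
# and mount the device.  Runs inside the VM, bypassing the hypervisor.

import re
import subprocess
import urllib.request

# In CloudStack, user data is typically served by the virtual router; the
# exact URL below is an assumption for illustration.
USERDATA_URL = "http://10.1.1.1/latest/user-data"
PORTAL = "192.168.1.10:3260"   # storage portal address, illustrative

def parse_iqn(userdata):
    """Extract the first 'iqn=<target>' line from the user-data blob."""
    m = re.search(r"^iqn=(\S+)$", userdata, re.MULTILINE)
    if not m:
        raise ValueError("no iqn= line found in user data")
    return m.group(1)

def attach_and_mount(iqn, mountpoint="/mnt/sfvol"):
    subprocess.check_call(["iscsiadm", "-m", "discovery",
                           "-t", "sendtargets", "-p", PORTAL])
    subprocess.check_call(["iscsiadm", "-m", "node", "-T", iqn,
                           "-p", PORTAL, "--login"])
    # The device path after login is distro-specific; a /dev/disk/by-path
    # lookup is left out of this sketch.
    subprocess.check_call(["mount", "/dev/sdb", mountpoint])

sample = "hostname=vm1\niqn=iqn.2013-02.com.solidfire:vol1\n"
print(parse_iqn(sample))
```

The real script would fetch USERDATA_URL at boot and call attach_and_mount(); the trade-off, as discussed below in the thread, is that the initiator now runs on a vCPU over a virtual NIC.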

From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
Sent: Thursday, February 07, 2013 11:18 AM
To: cloudstack-dev@incubator.apache.org; Edison Su; Marcus Sorensen
Subject: Supporting SolidFire QoS Before 4.2

Hi everyone,

I learned yesterday that Edison's new storage plug-in architecture will first be released with 4.2.

As such, I began to wonder if there was any way outside of CloudStack that I could support CloudStack users who want to make use of SolidFire's QoS feature (controlling IOPS).

A couple of us brainstormed for a bit and this is what we came up with.  Can anyone confirm if this could work?

********************

The CloudStack Admin creates a Primary Storage that is of the type PreSetup.  This is tagged with a name like "SolidFire_High" (for SolidFire High IOPS).

The CloudStack Admin creates a Compute Offering that refers to the "SolidFire_High" Storage Tag.

In the CSP's own GUI, a user picks the Compute Offering referred to above.  The CSP's code sees the Storage Tag "SolidFire_High" and that cues it to invoke a script of mine.

My script is passed in the necessary information.  It creates a SolidFire volume with the necessary attributes and hooks it up to the hypervisor running in the Cluster.  It updates my Primary Storage's Tag with the IQN (or some unique identifier), then it updates my Compute Offering's Storage Tag with this same value.

Once the Primary Storage and the Compute Offering have been updated, CloudStack can begin the process of creating a VM Instance.

Once the VM Instance is up and running, the CSP's GUI could set the Primary Storage's and Compute Offering's Storage Tags back to the "SolidFire_High" value.

********************

The one problem I see with this is that you would have to be sure not to kick off the process of creating such a Compute Offering until your previous such Compute Offering was finished (because there is a race condition while the Storage Tag field points to the IQN (or some other unique identifier)).

Thanks!

--
Mike Tutkowski
Senior CloudStack Developer, SolidFire Inc.
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud <http://solidfire.com/solution/overview/?video=play>™