Posted to dev@cloudstack.apache.org by Mike Tutkowski <mi...@solidfire.com> on 2013/09/14 00:55:11 UTC

Managed storage with KVM

Hi,

As you may remember, during the 4.2 release I developed a SolidFire
(storage) plug-in for CloudStack.

This plug-in was invoked by the storage framework at the necessary times so
that I could dynamically create and delete volumes on the SolidFire SAN
(among other activities).

This is necessary so I can establish a 1:1 mapping between a CloudStack
volume and a SolidFire volume for QoS.

In the past, CloudStack always expected the admin to create large volumes
ahead of time and those volumes would likely house many root and data disks
(which is not QoS friendly).

To make this 1:1 mapping scheme work, I needed to modify logic in the
XenServer and VMware plug-ins so they could create/delete storage
repositories/datastores as needed.

For 4.3 I want to make this happen with KVM.

I'm coming up to speed with how this might work on KVM, but I'm still
pretty new to KVM.

Does anyone familiar with KVM know how I will need to interact with the
iSCSI target? For example, will I have to expect Open iSCSI will be
installed on the KVM host and use it for this to work?

Thanks for any suggestions,
Mike

-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
This would require that they put a clustered filesystem on the LUN, right?
It seems like it would be better for them to use CLVM and make a volume
group from the LUNs. I'll bet some of your customers are doing that unless
they are explicitly instructed otherwise; that's how others are doing iSCSI
or Fibre Channel storage.
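
A minimal sketch of that approach, assuming the LUNs are already logged in on
the host and visible as /dev/sdb and /dev/sdc (hypothetical names), and leaving
out the clustered-locking (clvmd) configuration that CLVM itself needs. It only
shows the plain LVM calls an agent-side helper would drive:

import java.util.Arrays;

public class ClvmGroupSketch {
    // Run a command and fail loudly if it exits non-zero.
    private static void run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new RuntimeException("Failed: " + Arrays.toString(cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical block devices backed by iSCSI LUNs on this host.
        String[] luns = { "/dev/sdb", "/dev/sdc" };

        // Mark each LUN as an LVM physical volume.
        for (String lun : luns) {
            run("pvcreate", lun);
        }

        // One volume group spanning the LUNs; guest disks become logical
        // volumes carved out of it.
        run("vgcreate", "cloudstack-vg", luns[0], luns[1]);
    }
}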
On Sep 13, 2013 7:33 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Ah, OK, I didn't know that was such new ground in KVM with CS.
>
> So, the way people use our SAN with KVM and CS today is by selecting
> SharedMountPoint and specifying the location of the share.
>
> They can set up their share using Open iSCSI by discovering their iSCSI
> target, logging in to it, then mounting it somewhere on their file system.
>
> Would it make sense for me to just do that discovery, logging in, and
> mounting behind the scenes for them and letting the current code manage the
> rest as it currently does?
>
>
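
A rough sketch of that discover/login/mount sequence as an agent-side helper
might run it, using the standard Open-iSCSI (iscsiadm) commands. The portal,
IQN, and mount point below are made-up placeholders and error handling is
minimal:

import java.util.Arrays;

public class IscsiMountSketch {
    private static void run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new RuntimeException("Failed: " + Arrays.toString(cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        String portal = "192.168.1.100:3260";                       // hypothetical SAN portal
        String iqn = "iqn.2013-09.com.example:cloudstack-volume-1";  // hypothetical target IQN

        // 1. Discover the targets exposed by the portal.
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);

        // 2. Log in to the specific target.
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

        // 3. The LUN shows up as a block device with a predictable by-path name.
        String device = "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";

        // 4. Mount it, assuming a filesystem already exists on the LUN.
        run("mount", device, "/mnt/cloudstack-shared");
    }
}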
> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Oh, hypervisor snapshots are a bit different. I need to catch up on the
>> work done in KVM, but this is basically just disk snapshots + memory dump.
>> I still think disk snapshots would preferably be handled by the SAN, and
>> then memory dumps can go to secondary storage or something else. This is
>> relatively new ground with CS and KVM, so we will want to see how others
>> are planning theirs.
>>  On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>>
>>> Let me back up and say I don't think you'd use a vdi style on an iscsi
>>> lun. I think you'd want to treat it as a RAW format. Otherwise you're
>>> putting a filesystem on your lun, mounting it, creating a QCOW2 disk image,
>>> and that seems unnecessary and a performance killer.
>>>
>>> So probably attaching the raw iscsi lun as a disk to the VM, and
>>> handling snapshots on the San side via the storage plugin is best. My
>>> impression from the storage plugin refactor was that there was a snapshot
>>> service that would allow the San to handle snapshots.
>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>>>
>>>> Ideally volume snapshots can be handled by the SAN back end, if the SAN
>>>> supports it. The cloudstack mgmt server could call your plugin for volume
>>>> snapshot and it would be hypervisor agnostic. As far as space, that would
>>>> depend on how your SAN handles it. With ours, we carve out luns from a
>>>> pool, and the snapshot space comes from the pool and is independent of the
>>>> LUN size the host sees.
>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <mi...@solidfire.com>
>>>> wrote:
>>>>
>>>>> Hey Marcus,
>>>>>
>>>>> I wonder if the iSCSI storage pool type for libvirt won't work when
>>>>> you take into consideration hypervisor snapshots?
>>>>>
>>>>> On XenServer, when you take a hypervisor snapshot, the VDI for the
>>>>> snapshot is placed on the same storage repository as the volume is on.
>>>>>
>>>>> Same idea for VMware, I believe.
>>>>>
>>>>> So, what would happen in my case (let's say for XenServer and VMware
>>>>> for 4.3 because I don't support hypervisor snapshots in 4.2) is I'd make an
>>>>> iSCSI target that is larger than what the user requested for the CloudStack
>>>>> volume (which is fine because our SAN thinly provisions volumes, so the
>>>>> space is not actually used unless it needs to be). The CloudStack volume
>>>>> would be the only "object" on the SAN volume until a hypervisor snapshot is
>>>>> taken. This snapshot would also reside on the SAN volume.
>>>>>
>>>>> If this is also how KVM behaves and there is no creation of LUNs
>>>>> within an iSCSI target from libvirt (which, even if there were support for
>>>>> this, our SAN currently only allows one LUN per iSCSI target), then I don't
>>>>> see how using this model will work.
>>>>>
>>>>> Perhaps I will have to go enhance the current way this works with DIR?
>>>>>
>>>>> What do you think?
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski <
>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>
>>>>>> That appears to be the way it's used for iSCSI access today.
>>>>>>
>>>>>> I suppose I could go that route, too, but I might as well leverage
>>>>>> what libvirt has for iSCSI instead.
>>>>>>
>>>>>>
>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <shadowsor@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> To your question about SharedMountPoint, I believe it just acts like
>>>>>>> a
>>>>>>> 'DIR' storage type or something similar to that. The end-user is
>>>>>>> responsible for mounting a file system that all KVM hosts can access,
>>>>>>> and CloudStack is oblivious to what is providing the storage. It
>>>>>>> could
>>>>>>> be NFS, or OCFS2, or some other clustered filesystem, cloudstack just
>>>>>>> knows that the provided directory path has VM images.
>>>>>>>
>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <
>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
>>>>>>> > Multiples, in fact.
>>>>>>> >
>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>>>>> > <mi...@solidfire.com> wrote:
>>>>>>> >> Looks like you can have multiple storage pools:
>>>>>>> >>
>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>>> >> Name                 State      Autostart
>>>>>>> >> -----------------------------------------
>>>>>>> >> default              active     yes
>>>>>>> >> iSCSI                active     no
>>>>>>> >>
>>>>>>> >>
>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>> >>>
>>>>>>> >>> Reading through the docs you pointed out.
>>>>>>> >>>
>>>>>>> >>> I see what you're saying now.
>>>>>>> >>>
>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on an iSCSI
>>>>>>> target.
>>>>>>> >>>
>>>>>>> >>> In my case, the iSCSI target would only have one LUN, so there
>>>>>>> would only
>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt) storage
>>>>>>> pool.
>>>>>>> >>>
>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI targets/LUNs
>>>>>>> on the
>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt does not
>>>>>>> support
>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>>>> >>>
>>>>>>> >>> It looks like I need to test this a bit to see if libvirt
>>>>>>> supports
>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since each one
>>>>>>> of its
>>>>>>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>>>>>>> >>>
>>>>>>> >>>
>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>>>> >>>>
>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>>> >>>>
>>>>>>> >>>>     public enum poolType {
>>>>>>> >>>>
>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"),
>>>>>>> DIR("dir"),
>>>>>>> >>>> RBD("rbd");
>>>>>>> >>>>
>>>>>>> >>>>         String _poolType;
>>>>>>> >>>>
>>>>>>> >>>>         poolType(String poolType) {
>>>>>>> >>>>
>>>>>>> >>>>             _poolType = poolType;
>>>>>>> >>>>
>>>>>>> >>>>         }
>>>>>>> >>>>
>>>>>>> >>>>         @Override
>>>>>>> >>>>
>>>>>>> >>>>         public String toString() {
>>>>>>> >>>>
>>>>>>> >>>>             return _poolType;
>>>>>>> >>>>
>>>>>>> >>>>         }
>>>>>>> >>>>
>>>>>>> >>>>     }
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>> It doesn't look like the iSCSI type is currently being used,
>>>>>>> but I'm
>>>>>>> >>>> understanding more what you were getting at.
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone selects the
>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is that the
>>>>>>> "netfs" option
>>>>>>> >>>> above or is that just for NFS?
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>> Thanks!
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <
>>>>>>> shadowsor@gmail.com>
>>>>>>> >>>> wrote:
>>>>>>> >>>>>
>>>>>>> >>>>> Take a look at this:
>>>>>>> >>>>>
>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>>> >>>>>
>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and cannot
>>>>>>> be
>>>>>>> >>>>> created via the libvirt APIs.", which I believe your plugin
>>>>>>> will take
>>>>>>> >>>>> care of. Libvirt just does the work of logging in and hooking
>>>>>>> it up to
>>>>>>> >>>>> the VM (I believe the Xen api does that work in the Xen stuff).
>>>>>>> >>>>>
>>>>>>> >>>>> What I'm not sure about is whether this provides a 1:1
>>>>>>> mapping, or if
>>>>>>> >>>>> it just allows you to register 1 iscsi device as a pool. You
>>>>>>> may need
>>>>>>> >>>>> to write some test code or read up a bit more about this. Let
>>>>>>> us know.
>>>>>>> >>>>> If it doesn't, you may just have to write your own storage
>>>>>>> adaptor
>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We can cross
>>>>>>> that
>>>>>>> >>>>> bridge when we get there.
>>>>>>> >>>>>
>>>>>>> >>>>> As far as interfacing with libvirt, see the java bindings doc.
>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally, you'll
>>>>>>> see a
>>>>>>> >>>>> connection object be made, then calls made to that 'conn'
>>>>>>> object. You
>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how that is done
>>>>>>> for
>>>>>>> >>>>> other pool types, and maybe write some test java code to see
>>>>>>> if you
>>>>>>> >>>>> can interface with libvirt and register iscsi storage pools
>>>>>>> before you
>>>>>>> >>>>> get started.
>>>>>>> >>>>>
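
Along those lines, a minimal standalone test, assuming the libvirt Java
bindings (org.libvirt) are on the classpath. The host IP, IQN, and pool name
are placeholders; it defines and starts an iscsi-type pool (starting it
performs the iSCSI login) and then lists volumes and active pools, which should
show whether one pool per target/LUN works:

import org.libvirt.Connect;
import org.libvirt.StoragePool;

public class IscsiPoolTest {
    public static void main(String[] args) throws Exception {
        Connect conn = new Connect("qemu:///system");

        // Hypothetical definition: one libvirt pool per iSCSI target/LUN.
        String poolXml =
            "<pool type='iscsi'>" +
            "  <name>cloudstack-vol-1</name>" +
            "  <source>" +
            "    <host name='192.168.1.100'/>" +
            "    <device path='iqn.2013-09.com.example:cloudstack-volume-1'/>" +
            "  </source>" +
            "  <target><path>/dev/disk/by-path</path></target>" +
            "</pool>";

        // Define the pool, start it (logs in to the target), and list its volumes.
        StoragePool pool = conn.storagePoolDefineXML(poolXml, 0);
        pool.create(0);
        for (String vol : pool.listVolumes()) {
            System.out.println("volume: " + vol);
        }

        // A second target would simply be another pool definition; list what is active.
        for (String name : conn.listStoragePools()) {
            System.out.println("active pool: " + name);
        }
    }
}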
>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but you
>>>>>>> figure it
>>>>>>> >>>>> > supports
>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets, right?
>>>>>>> >>>>> >
>>>>>>> >>>>> >
>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>>>> >>>>> >>
>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>>> >>>>> >>
>>>>>>> >>>>> >> I am currently looking through some of the classes you
>>>>>>> pointed out
>>>>>>> >>>>> >> last
>>>>>>> >>>>> >> week or so.
>>>>>>> >>>>> >>
>>>>>>> >>>>> >>
>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>>>> >>>>> >> wrote:
>>>>>>> >>>>> >>>
>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi initiator
>>>>>>> utilities
>>>>>>> >>>>> >>> installed. There should be standard packages for any
>>>>>>> distro. Then
>>>>>>> >>>>> >>> you'd call
>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator login. See
>>>>>>> the info I
>>>>>>> >>>>> >>> sent
>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and libvirt
>>>>>>> iscsi
>>>>>>> >>>>> >>> storage type
>>>>>>> >>>>> >>> to see if that fits your need.
>>>>>>> >>>>> >>>
>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>>>> >>>>> >>> wrote:
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> Hi,
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I developed a
>>>>>>> SolidFire
>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> This plug-in was invoked by the storage framework at the
>>>>>>> necessary
>>>>>>> >>>>> >>>> times
>>>>>>> >>>>> >>>> so that I could dynamically create and delete volumes on
>>>>>>> the
>>>>>>> >>>>> >>>> SolidFire SAN
>>>>>>> >>>>> >>>> (among other activities).
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping
>>>>>>> between a
>>>>>>> >>>>> >>>> CloudStack
>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> In the past, CloudStack always expected the admin to
>>>>>>> create large
>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would likely
>>>>>>> house many
>>>>>>> >>>>> >>>> root and
>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to modify
>>>>>>> logic in
>>>>>>> >>>>> >>>> the
>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could create/delete
>>>>>>> storage
>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> I'm coming up to speed with how this might work on KVM,
>>>>>>> but I'm
>>>>>>> >>>>> >>>> still
>>>>>>> >>>>> >>>> pretty new to KVM.
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will need to
>>>>>>> interact with
>>>>>>> >>>>> >>>> the
>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to expect Open
>>>>>>> iSCSI will be
>>>>>>> >>>>> >>>> installed on the KVM host and use it for this to work?
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>>>> >>>>> >>>> Mike
>>>>>>> >>>>> >>>>
>>>>>>> >>>>> >>>> --
>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>> >>>>> >>>> o: 303.746.7302
>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>> >>>>> >>
>>>>>>> >>>>> >>
>>>>>>> >>>>> >>
>>>>>>> >>>>> >>
>>>>>>> >>>>> >> --
>>>>>>> >>>>> >> Mike Tutkowski
>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>> >>>>> >> o: 303.746.7302
>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>>>>>>> >>>>> >
>>>>>>> >>>>> >
>>>>>>> >>>>> >
>>>>>>> >>>>> >
>>>>>>> >>>>> > --
>>>>>>> >>>>> > Mike Tutkowski
>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>>>>>>> >>>>> > o: 303.746.7302
>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>>
>>>>>>> >>>> --
>>>>>>> >>>> Mike Tutkowski
>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>> >>>> o: 303.746.7302
>>>>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>> >>>
>>>>>>> >>>
>>>>>>> >>>
>>>>>>> >>>
>>>>>>> >>> --
>>>>>>> >>> Mike Tutkowski
>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>> >>> e: mike.tutkowski@solidfire.com
>>>>>>> >>> o: 303.746.7302
>>>>>>> >>> Advancing the way the world uses the cloud™
>>>>>>> >>
>>>>>>> >>
>>>>>>> >>
>>>>>>> >>
>>>>>>> >> --
>>>>>>> >> Mike Tutkowski
>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>> >> o: 303.746.7302
>>>>>>> >> Advancing the way the world uses the cloud™
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Mike Tutkowski*
>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> e: mike.tutkowski@solidfire.com
>>>>>> o: 303.746.7302
>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> *™*
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Mike Tutkowski*
>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> e: mike.tutkowski@solidfire.com
>>>>> o: 303.746.7302
>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> *™*
>>>>>
>>>>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
If you wire up the block device directly, you won't have to require users to
manage a clustered filesystem or LVM, along with all of the work of
maintaining those clustered services and quorum management; CloudStack will
ensure only one VM is using the disks at any given time, and where. It would
be cake compared to dealing with mounts and filesystems.
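
For reference, attaching the host-side block device straight to a guest is just
a libvirt disk definition; a rough sketch via the Java bindings, where the
domain name and the by-path device are placeholders and raw format with
cache='none' is assumed for a shared LUN:

import org.libvirt.Connect;
import org.libvirt.Domain;

public class AttachLunSketch {
    public static void main(String[] args) throws Exception {
        Connect conn = new Connect("qemu:///system");
        Domain vm = conn.domainLookupByName("i-2-10-VM");  // hypothetical instance name

        // The LUN as seen on the host after the iSCSI login; the path is a placeholder.
        String diskXml =
            "<disk type='block' device='disk'>" +
            "  <driver name='qemu' type='raw' cache='none'/>" +
            "  <source dev='/dev/disk/by-path/ip-192.168.1.100:3260-iscsi-" +
            "iqn.2013-09.com.example:cloudstack-volume-1-lun-0'/>" +
            "  <target dev='vdb' bus='virtio'/>" +
            "</disk>";

        // Hot-attach the raw LUN to the running guest; no filesystem or image
        // format sits between the guest and the SAN volume.
        vm.attachDevice(diskXml);
    }
}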
On Sep 13, 2013 8:07 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Yeah, I think it would be nice if it supported Live Migration.
>
> That's kind of why I was initially leaning toward SharedMountPoint and
> just doing the work ahead of time to get things in a state where the
> current code could run with it.
>
>
> On Fri, Sep 13, 2013 at 8:00 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> No, as that would rely on virtualized network/iscsi initiator inside the
>> vm, which also sucks. I mean attach /dev/sdx (your lun on hypervisor) as a
>> disk to the VM, rather than attaching some image file that resides on a
>> filesystem, mounted on the host, living on a target.
>>
>> Actually, if you plan on the storage supporting live migration I think
>> this is the only way. You can't put a filesystem on it and mount it in two
>> places to facilitate migration unless its a clustered filesystem, in which
>> case you're back to shared mount point.
>>
>> As far as I'm aware, the xenserver SR style is basically LVM with a xen
>> specific cluster management, a custom CLVM. They don't use a filesystem
>> either.
>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>
>>> When you say, "wire up the lun directly to the vm," do you mean
>>> circumventing the hypervisor? I didn't think we could do that in CS.
>>> OpenStack, on the other hand, always circumvents the hypervisor, as far as
>>> I know.
>>>
>>>
>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>
>>>> Better to wire up the lun directly to the vm unless there is a good
>>>> reason not to.
>>>>  On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>> wrote:
>>>>
>>>>> You could do that, but as mentioned I think its a mistake to go to the
>>>>> trouble of creating a 1:1 mapping of CS volumes to luns and then putting a
>>>>> filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
>>>>> image on that filesystem. You'll lose a lot of iops along the way, and have
>>>>> more overhead with the filesystem and its journaling, etc.
>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski" <
>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>
>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>>>>>>
>>>>>> So, the way people use our SAN with KVM and CS today is by selecting
>>>>>> SharedMountPoint and specifying the location of the share.
>>>>>>
>>>>>> They can set up their share using Open iSCSI by discovering their
>>>>>> iSCSI target, logging in to it, then mounting it somewhere on their file
>>>>>> system.
>>>>>>
>>>>>> Would it make sense for me to just do that discovery, logging in, and
>>>>>> mounting behind the scenes for them and letting the current code manage the
>>>>>> rest as it currently does?
>>>>>>
>>>>>>
>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen <shadowsor@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch up on
>>>>>>> the work done in KVM, but this is basically just disk snapshots + memory
>>>>>>> dump. I still think disk snapshots would preferably be handled by the SAN,
>>>>>>> and then memory dumps can go to secondary storage or something else. This
>>>>>>> is relatively new ground with CS and KVM, so we will want to see how others
>>>>>>> are planning theirs.
>>>>>>>  On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Let me back up and say I don't think you'd use a vdi style on an
>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format. Otherwise you're
>>>>>>>> putting a filesystem on your lun, mounting it, creating a QCOW2 disk image,
>>>>>>>> and that seems unnecessary and a performance killer.
>>>>>>>>
>>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM, and
>>>>>>>> handling snapshots on the San side via the storage plugin is best. My
>>>>>>>> impression from the storage plugin refactor was that there was a snapshot
>>>>>>>> service that would allow the San to handle snapshots.
>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Ideally volume snapshots can be handled by the SAN back end, if
>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call your plugin for
>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far as space, that
>>>>>>>>> would depend on how your SAN handles it. With ours, we carve out luns from
>>>>>>>>> a pool, and the snapshot space comes from the pool and is independent of
>>>>>>>>> the LUN size the host sees.
>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <
>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hey Marcus,
>>>>>>>>>>
>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't work
>>>>>>>>>> when you take into consideration hypervisor snapshots?
>>>>>>>>>>
>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI for
>>>>>>>>>> the snapshot is placed on the same storage repository as the volume is on.
>>>>>>>>>>
>>>>>>>>>> Same idea for VMware, I believe.
>>>>>>>>>>
>>>>>>>>>> So, what would happen in my case (let's say for XenServer and
>>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots in 4.2) is I'd
>>>>>>>>>> make an iSCSI target that is larger than what the user requested for the
>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly provisions volumes,
>>>>>>>>>> so the space is not actually used unless it needs to be). The CloudStack
>>>>>>>>>> volume would be the only "object" on the SAN volume until a hypervisor
>>>>>>>>>> snapshot is taken. This snapshot would also reside on the SAN volume.
>>>>>>>>>>
>>>>>>>>>> If this is also how KVM behaves and there is no creation of LUNs
>>>>>>>>>> within an iSCSI target from libvirt (which, even if there were support for
>>>>>>>>>> this, our SAN currently only allows one LUN per iSCSI target), then I don't
>>>>>>>>>> see how using this model will work.
>>>>>>>>>>
>>>>>>>>>> Perhaps I will have to go enhance the current way this works with
>>>>>>>>>> DIR?
>>>>>>>>>>
>>>>>>>>>> What do you think?
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski <
>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> That appears to be the way it's used for iSCSI access today.
>>>>>>>>>>>
>>>>>>>>>>> I suppose I could go that route, too, but I might as well
>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <
>>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> To your question about SharedMountPoint, I believe it just acts
>>>>>>>>>>>> like a
>>>>>>>>>>>> 'DIR' storage type or something similar to that. The end-user is
>>>>>>>>>>>> responsible for mounting a file system that all KVM hosts can
>>>>>>>>>>>> access,
>>>>>>>>>>>> and CloudStack is oblivious to what is providing the storage.
>>>>>>>>>>>> It could
>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
>>>>>>>>>>>> cloudstack just
>>>>>>>>>>>> knows that the provided directory path has VM images.
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <
>>>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
>>>>>>>>>>>> > Multiples, in fact.
>>>>>>>>>>>> >
>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>>>>>>>> >> Name                 State      Autostart
>>>>>>>>>>>> >> -----------------------------------------
>>>>>>>>>>>> >> default              active     yes
>>>>>>>>>>>> >> iSCSI                active     no
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> I see what you're saying now.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on an
>>>>>>>>>>>> iSCSI target.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN, so
>>>>>>>>>>>> there would only
>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt)
>>>>>>>>>>>> storage pool.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>>>>>>>>>>>> targets/LUNs on the
>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt does not
>>>>>>>>>>>> support
>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if libvirt
>>>>>>>>>>>> supports
>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since each
>>>>>>>>>>>> one of its
>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>     public enum poolType {
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>>>>>>>>>>> LOGICAL("logical"), DIR("dir"),
>>>>>>>>>>>> >>>> RBD("rbd");
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         String _poolType;
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         @Override
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         public String toString() {
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>             return _poolType;
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>     }
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently being
>>>>>>>>>>>> used, but I'm
>>>>>>>>>>>> >>>> understanding more what you were getting at.
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone selects
>>>>>>>>>>>> the
>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is that
>>>>>>>>>>>> the "netfs" option
>>>>>>>>>>>> >>>> above or is that just for NFS?
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> Thanks!
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <
>>>>>>>>>>>> shadowsor@gmail.com>
>>>>>>>>>>>> >>>> wrote:
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> Take a look at this:
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and
>>>>>>>>>>>> cannot be
>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe your
>>>>>>>>>>>> plugin will take
>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in and
>>>>>>>>>>>> hooking it up to
>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the Xen
>>>>>>>>>>>> stuff).
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a 1:1
>>>>>>>>>>>> mapping, or if
>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a pool.
>>>>>>>>>>>> You may need
>>>>>>>>>>>> >>>>> to write some test code or read up a bit more about this.
>>>>>>>>>>>> Let us know.
>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own
>>>>>>>>>>>> storage adaptor
>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We can
>>>>>>>>>>>> cross that
>>>>>>>>>>>> >>>>> bridge when we get there.
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java bindings
>>>>>>>>>>>> doc.
>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally,
>>>>>>>>>>>> you'll see a
>>>>>>>>>>>> >>>>> connection object be made, then calls made to that 'conn'
>>>>>>>>>>>> object. You
>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how that is
>>>>>>>>>>>> done for
>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java code to
>>>>>>>>>>>> see if you
>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi storage
>>>>>>>>>>>> pools before you
>>>>>>>>>>>> >>>>> get started.
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but you
>>>>>>>>>>>> figure it
>>>>>>>>>>>> >>>>> > supports
>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets, right?
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> I am currently looking through some of the classes you
>>>>>>>>>>>> pointed out
>>>>>>>>>>>> >>>>> >> last
>>>>>>>>>>>> >>>>> >> week or so.
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>>>>>>>>> >>>>> >> wrote:
>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
>>>>>>>>>>>> initiator utilities
>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages for any
>>>>>>>>>>>> distro. Then
>>>>>>>>>>>> >>>>> >>> you'd call
>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator login.
>>>>>>>>>>>> See the info I
>>>>>>>>>>>> >>>>> >>> sent
>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
>>>>>>>>>>>> libvirt iscsi
>>>>>>>>>>>> >>>>> >>> storage type
>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>>>>>>>>> >>>>> >>> wrote:
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> Hi,
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
>>>>>>>>>>>> developed a SolidFire
>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage framework at
>>>>>>>>>>>> the necessary
>>>>>>>>>>>> >>>>> >>>> times
>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete
>>>>>>>>>>>> volumes on the
>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping
>>>>>>>>>>>> between a
>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the admin to
>>>>>>>>>>>> create large
>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would likely
>>>>>>>>>>>> house many
>>>>>>>>>>>> >>>>> >>>> root and
>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to
>>>>>>>>>>>> modify logic in
>>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>>>>>>>>>>>> create/delete storage
>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might work on
>>>>>>>>>>>> KVM, but I'm
>>>>>>>>>>>> >>>>> >>>> still
>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will need
>>>>>>>>>>>> to interact with
>>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to expect
>>>>>>>>>>>> Open iSCSI will be
>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this to
>>>>>>>>>>>> work?
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>>>>>>>>> >>>>> >>>> Mike
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> --
>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> --
>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> > --
>>>>>>>>>>>> >>>>> > Mike Tutkowski
>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>>> > o: 303.746.7302
>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> --
>>>>>>>>>>>> >>>> Mike Tutkowski
>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>> o: 303.746.7302
>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> --
>>>>>>>>>>>> >>> Mike Tutkowski
>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>> o: 303.746.7302
>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> --
>>>>>>>>>>>> >> Mike Tutkowski
>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >> o: 303.746.7302
>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>> *™*
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>> o: 303.746.7302
>>>>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>> *™*
>>>>>>>>>>
>>>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Mike Tutkowski*
>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> e: mike.tutkowski@solidfire.com
>>>>>> o: 303.746.7302
>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> *™*
>>>>>>
>>>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Yeah, I think it would be nice if it supported Live Migration.

That's kind of why I was initially leaning toward SharedMountPoint and just
doing the work ahead of time to get things in a state where the current
code could run with it.


On Fri, Sep 13, 2013 at 8:00 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> No, as that would rely on virtualized network/iscsi initiator inside the
> vm, which also sucks. I mean attach /dev/sdx (your lun on hypervisor) as a
> disk to the VM, rather than attaching some image file that resides on a
> filesystem, mounted on the host, living on a target.
>
> Actually, if you plan on the storage supporting live migration I think
> this is the only way. You can't put a filesystem on it and mount it in two
> places to facilitate migration unless its a clustered filesystem, in which
> case you're back to shared mount point.
>
> As far as I'm aware, the xenserver SR style is basically LVM with a xen
> specific cluster management, a custom CLVM. They don't use a filesystem
> either.
> On Sep 13, 2013 7:44 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
>> When you say, "wire up the lun directly to the vm," do you mean
>> circumventing the hypervisor? I didn't think we could do that in CS.
>> OpenStack, on the other hand, always circumvents the hypervisor, as far as
>> I know.
>>
>>
>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> Better to wire up the lun directly to the vm unless there is a good
>>> reason not to.
>>>  On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>>>
>>>> You could do that, but as mentioned I think its a mistake to go to the
>>>> trouble of creating a 1:1 mapping of CS volumes to luns and then putting a
>>>> filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
>>>> image on that filesystem. You'll lose a lot of iops along the way, and have
>>>> more overhead with the filesystem and its journaling, etc.
>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski" <mi...@solidfire.com>
>>>> wrote:
>>>>
>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>>>>>
>>>>> So, the way people use our SAN with KVM and CS today is by selecting
>>>>> SharedMountPoint and specifying the location of the share.
>>>>>
>>>>> They can set up their share using Open iSCSI by discovering their
>>>>> iSCSI target, logging in to it, then mounting it somewhere on their file
>>>>> system.
>>>>>
>>>>> Would it make sense for me to just do that discovery, logging in, and
>>>>> mounting behind the scenes for them and letting the current code manage the
>>>>> rest as it currently does?
>>>>>
>>>>>
>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>>>
>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch up on
>>>>>> the work done in KVM, but this is basically just disk snapshots + memory
>>>>>> dump. I still think disk snapshots would preferably be handled by the SAN,
>>>>>> and then memory dumps can go to secondary storage or something else. This
>>>>>> is relatively new ground with CS and KVM, so we will want to see how others
>>>>>> are planning theirs.
>>>>>>  On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Let me back up and say I don't think you'd use a vdi style on an
>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format. Otherwise you're
>>>>>>> putting a filesystem on your lun, mounting it, creating a QCOW2 disk image,
>>>>>>> and that seems unnecessary and a performance killer.
>>>>>>>
>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM, and
>>>>>>> handling snapshots on the San side via the storage plugin is best. My
>>>>>>> impression from the storage plugin refactor was that there was a snapshot
>>>>>>> service that would allow the San to handle snapshots.
>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Ideally volume snapshots can be handled by the SAN back end, if the
>>>>>>>> SAN supports it. The cloudstack mgmt server could call your plugin for
>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far as space, that
>>>>>>>> would depend on how your SAN handles it. With ours, we carve out luns from
>>>>>>>> a pool, and the snapshot space comes from the pool and is independent of
>>>>>>>> the LUN size the host sees.
>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <
>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>
>>>>>>>>> Hey Marcus,
>>>>>>>>>
>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't work
>>>>>>>>> when you take into consideration hypervisor snapshots?
>>>>>>>>>
>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI for the
>>>>>>>>> snapshot is placed on the same storage repository as the volume is on.
>>>>>>>>>
>>>>>>>>> Same idea for VMware, I believe.
>>>>>>>>>
>>>>>>>>> So, what would happen in my case (let's say for XenServer and
>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots in 4.2) is I'd
>>>>>>>>> make an iSCSI target that is larger than what the user requested for the
>>>>>>>>> CloudStack volume (which is fine because our SAN thinly provisions volumes,
>>>>>>>>> so the space is not actually used unless it needs to be). The CloudStack
>>>>>>>>> volume would be the only "object" on the SAN volume until a hypervisor
>>>>>>>>> snapshot is taken. This snapshot would also reside on the SAN volume.
>>>>>>>>>
>>>>>>>>> If this is also how KVM behaves and there is no creation of LUNs
>>>>>>>>> within an iSCSI target from libvirt (which, even if there were support for
>>>>>>>>> this, our SAN currently only allows one LUN per iSCSI target), then I don't
>>>>>>>>> see how using this model will work.
>>>>>>>>>
>>>>>>>>> Perhaps I will have to go enhance the current way this works with
>>>>>>>>> DIR?
>>>>>>>>>
>>>>>>>>> What do you think?
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski <
>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>
>>>>>>>>>> That appears to be the way it's used for iSCSI access today.
>>>>>>>>>>
>>>>>>>>>> I suppose I could go that route, too, but I might as well
>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <
>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> To your question about SharedMountPoint, I believe it just acts
>>>>>>>>>>> like a
>>>>>>>>>>> 'DIR' storage type or something similar to that. The end-user is
>>>>>>>>>>> responsible for mounting a file system that all KVM hosts can
>>>>>>>>>>> access,
>>>>>>>>>>> and CloudStack is oblivious to what is providing the storage. It
>>>>>>>>>>> could
>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem, cloudstack
>>>>>>>>>>> just
>>>>>>>>>>> knows that the provided directory path has VM images.
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <
>>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
>>>>>>>>>>> > Multiples, in fact.
>>>>>>>>>>> >
>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>>>>>>>>>> >>
>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>>>>>>> >> Name                 State      Autostart
>>>>>>>>>>> >> -----------------------------------------
>>>>>>>>>>> >> default              active     yes
>>>>>>>>>>> >> iSCSI                active     no
>>>>>>>>>>> >>
>>>>>>>>>>> >>
>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> I see what you're saying now.
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on an
>>>>>>>>>>> iSCSI target.
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN, so
>>>>>>>>>>> there would only
>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt)
>>>>>>>>>>> storage pool.
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>>>>>>>>>>> targets/LUNs on the
>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt does not
>>>>>>>>>>> support
>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> It looks like I need to test this a bit to see if libvirt
>>>>>>>>>>> supports
>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since each
>>>>>>>>>>> one of its
>>>>>>>>>>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>>>>>>>>>>> >>>
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>     public enum poolType {
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"),
>>>>>>>>>>> DIR("dir"),
>>>>>>>>>>> >>>> RBD("rbd");
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         String _poolType;
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>             _poolType = poolType;
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         }
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         @Override
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         public String toString() {
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>             return _poolType;
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         }
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>     }
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently being
>>>>>>>>>>> used, but I'm
>>>>>>>>>>> >>>> understanding more what you were getting at.
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone selects
>>>>>>>>>>> the
>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is that the
>>>>>>>>>>> "netfs" option
>>>>>>>>>>> >>>> above or is that just for NFS?
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> Thanks!
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <
>>>>>>>>>>> shadowsor@gmail.com>
>>>>>>>>>>> >>>> wrote:
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> Take a look at this:
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and
>>>>>>>>>>> cannot be
>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe your
>>>>>>>>>>> plugin will take
>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in and
>>>>>>>>>>> hooking it up to
>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the Xen
>>>>>>>>>>> stuff).
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a 1:1
>>>>>>>>>>> mapping, or if
>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a pool.
>>>>>>>>>>> You may need
>>>>>>>>>>> >>>>> to write some test code or read up a bit more about this.
>>>>>>>>>>> Let us know.
>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own storage
>>>>>>>>>>> adaptor
>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We can
>>>>>>>>>>> cross that
>>>>>>>>>>> >>>>> bridge when we get there.
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java bindings
>>>>>>>>>>> doc.
>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally,
>>>>>>>>>>> you'll see a
>>>>>>>>>>> >>>>> connection object be made, then calls made to that 'conn'
>>>>>>>>>>> object. You
>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how that is
>>>>>>>>>>> done for
>>>>>>>>>>> >>>>> other pool types, and maybe write some test java code to
>>>>>>>>>>> see if you
>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi storage
>>>>>>>>>>> pools before you
>>>>>>>>>>> >>>>> get started.
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but you
>>>>>>>>>>> figure it
>>>>>>>>>>> >>>>> > supports
>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets, right?
>>>>>>>>>>> >>>>> >
>>>>>>>>>>> >>>>> >
>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >> I am currently looking through some of the classes you
>>>>>>>>>>> pointed out
>>>>>>>>>>> >>>>> >> last
>>>>>>>>>>> >>>>> >> week or so.
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>>>>>>>> >>>>> >> wrote:
>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
>>>>>>>>>>> initiator utilities
>>>>>>>>>>> >>>>> >>> installed. There should be standard packages for any
>>>>>>>>>>> distro. Then
>>>>>>>>>>> >>>>> >>> you'd call
>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator login.
>>>>>>>>>>> See the info I
>>>>>>>>>>> >>>>> >>> sent
>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
>>>>>>>>>>> libvirt iscsi
>>>>>>>>>>> >>>>> >>> storage type
>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>>>>>>>> >>>>> >>> wrote:
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> Hi,
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
>>>>>>>>>>> developed a SolidFire
>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage framework at
>>>>>>>>>>> the necessary
>>>>>>>>>>> >>>>> >>>> times
>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete volumes
>>>>>>>>>>> on the
>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping
>>>>>>>>>>> between a
>>>>>>>>>>> >>>>> >>>> CloudStack
>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the admin to
>>>>>>>>>>> create large
>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would likely
>>>>>>>>>>> house many
>>>>>>>>>>> >>>>> >>>> root and
>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to
>>>>>>>>>>> modify logic in
>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>>>>>>>>>>> create/delete storage
>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might work on
>>>>>>>>>>> KVM, but I'm
>>>>>>>>>>> >>>>> >>>> still
>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will need to
>>>>>>>>>>> interact with
>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to expect Open
>>>>>>>>>>> iSCSI will be
>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this to work?
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>>>>>>>> >>>>> >>>> Mike
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> --
>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >> --
>>>>>>>>>>> >>>>> >> Mike Tutkowski
>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> >>>>> >> o: 303.746.7302
>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>> >>>>> >
>>>>>>>>>>> >>>>> >
>>>>>>>>>>> >>>>> >
>>>>>>>>>>> >>>>> >
>>>>>>>>>>> >>>>> > --
>>>>>>>>>>> >>>>> > Mike Tutkowski
>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>>>>>>>>>>> >>>>> > o: 303.746.7302
>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> --
>>>>>>>>>>> >>>> Mike Tutkowski
>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> >>>> o: 303.746.7302
>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>> >>>
>>>>>>>>>>> >>>
>>>>>>>>>>> >>>
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> --
>>>>>>>>>>> >>> Mike Tutkowski
>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> >>> o: 303.746.7302
>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>>>>>>>>>>> >>
>>>>>>>>>>> >>
>>>>>>>>>>> >>
>>>>>>>>>>> >>
>>>>>>>>>>> >> --
>>>>>>>>>>> >> Mike Tutkowski
>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> >> o: 303.746.7302
>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>> o: 303.746.7302
>>>>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>> *™*
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> *Mike Tutkowski*
>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>> o: 303.746.7302
>>>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>> *™*
>>>>>>>>>
>>>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Mike Tutkowski*
>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> e: mike.tutkowski@solidfire.com
>>>>> o: 303.746.7302
>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> *™*
>>>>>
>>>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
I think the way people bill for this kind of storage is simply by seeing
how many volumes are in use for a given CS account and tracing a volume
back to the Disk Offering it was created from, which contains info about
guaranteed IOPS.

I am not aware of what stats may be collected for this for XenServer and
VMware.


On Tue, Sep 17, 2013 at 8:17 PM, Marcus Sorensen <sh...@gmail.com> wrote:

> OK, if you log in per lun, then just saving the info for future reference
> is fine.
>
> Does CS provide storage stats at all, then, for other platforms?
> On Sep 17, 2013 8:01 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > Plus, when you log in to a LUN, you need the CHAP info and this info is
> > required for each LUN (as opposed to being for the SAN).
> >
> > This is how my createStoragePool currently looks, so I think we're on the
> > same page.
> >
> >
> > public KVMStoragePool createStoragePool(String name, String host, int
> port,
> > String path, String userInfo, StoragePoolType type) {
> >
> >         iScsiAdmStoragePool storagePool = new iScsiAdmStoragePool(name,
> > host, port, this);
> >
> >         _mapUuidToAdaptor.put(name, storagePool);
> >
> >         return storagePool;
> >
> >     }
> >
> >
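
A rough illustration of the pool object that the createStoragePool snippet
above hands back (not the actual plug-in code; the field and accessor names
below are guesses): the point is that the pool only records the SAN
coordinates, so later calls can look it up by uuid and run iscsiadm when a
specific LUN is actually needed.

    // Sketch only: keep just enough state to log a host in to a LUN later.
    public class iScsiAdmStoragePool /* implements KVMStoragePool */ {
        private final String _uuid;      // CloudStack passes the pool uuid in as "name"
        private final String _sanHost;
        private final int _sanPort;
        private final StorageAdaptor _adaptor;

        public iScsiAdmStoragePool(String uuid, String host, int port, StorageAdaptor adaptor) {
            _uuid = uuid;
            _sanHost = host;
            _sanPort = port;
            _adaptor = adaptor;
        }

        public String getUuid()    { return _uuid; }
        public String getSanHost() { return _sanHost; }
        public int getSanPort()    { return _sanPort; }
        // capacity/used reporting would come from the SAN API or be stubbed out.
    }
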
> > On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> >
> > > What do you do with Xen? I imagine the user enters the SAN details when
> > > registering the pool? And the pool details are basically just
> instructions
> > on
> > > how to log into a target, correct?
> > >
> > > You can choose to log in a KVM host to the target during
> > createStoragePool
> > > and save the pool in a map, or just save the pool info in a map for
> > future
> > > reference by uuid, for when you do need to log in. The
> createStoragePool
> > > then just becomes a way to save the pool info to the agent. Personally,
> > I'd
> > > log in on the pool create and look/scan for specific luns when they're
> > > needed, but I haven't thought it through thoroughly. I just say that
> > mainly
> > > because login only happens once, the first time the pool is used, and
> > every
> > > other storage command is about discovering new luns or maybe
> > > deleting/disconnecting luns no longer needed. On the other hand, you
> > could
> > > do all of the above: log in on pool create, then also check if you're
> > > logged in on other commands and log in if you've lost connection.
> > >
> > > With Xen, what does your registered pool   show in the UI for
> avail/used
> > > capacity, and how does it get that info? I assume there is some sort of
> > > disk pool that the luns are carved from, and that your plugin is called
> > to
> > > talk to the SAN and expose to the user how much of that pool has been
> > > allocated. Knowing how you already solved these problems with Xen will
> > help
> > > figure out what to do with KVM.
> > >
> > > If this is the case, I think the plugin can continue to handle it
> rather
> > > than getting details from the agent. I'm not sure if that means nulls
> are
> > > OK for these on the agent side or what, I need to look at the storage
> > > plugin arch more closely.
> > > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com>
> > > wrote:
> > >
> > > > Hey Marcus,
> > > >
> > > > I'm reviewing your e-mails as I implement the necessary methods in
> new
> > > > classes.
> > > >
> > > > "So, referencing StorageAdaptor.java, createStoragePool accepts all
> of
> > > > the pool data (host, port, name, path) which would be used to log the
> > > > host into the initiator."
> > > >
> > > > Can you tell me, in my case, since a storage pool (primary storage)
> is
> > > > actually the SAN, I wouldn't really be logging into anything at this
> > > point,
> > > > correct?
> > > >
> > > > Also, what kind of capacity, available, and used bytes make sense to
> > > report
> > > > for KVMStoragePool (since KVMStoragePool represents the SAN in my
> case
> > > and
> > > > not an individual LUN)?
> > > >
> > > > Thanks!
> > > >
> > > >
> > > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> shadowsor@gmail.com
> > > > >wrote:
> > > >
> > > > > Ok, KVM will be close to that, of course, because only the
> hypervisor
> > > > > classes differ, the rest is all mgmt server. Creating a volume is
> > just
> > > > > a db entry until it's deployed for the first time.
> > AttachVolumeCommand
> > > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > > > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> > > > > StorageAdaptor) to log in the host to the target and then you have
> a
> > > > > block device.  Maybe libvirt will do that for you, but my quick
> read
> > > > > made it sound like the iscsi libvirt pool type is actually a pool,
> > not
> > > > > a lun or volume, so you'll need to figure out if that works or if
> > > > > you'll have to use iscsiadm commands.
> > > > >
> > > > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> > > > > doesn't really manage your pool the way you want), you're going to
> > > > > have to create a version of KVMStoragePool class and a
> StorageAdaptor
> > > > > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> > > > > implementing all of the methods, then in KVMStorageManager.java
> > > > > there's a "_storageMapper" map. This is used to select the correct
> > > > > adaptor, you can see in this file that every call first pulls the
> > > > > correct adaptor out of this map via getStorageAdaptor. So you can
> see
> > > > > a comment in this file that says "add other storage adaptors here",
> > > > > where it puts to this map, this is where you'd register your
> adaptor.
> > > > >
> > > > > So, referencing StorageAdaptor.java, createStoragePool accepts all
> of
> > > > > the pool data (host, port, name, path) which would be used to log
> the
> > > > > host into the initiator. I *believe* the method getPhysicalDisk
> will
> > > > > need to do the work of attaching the lun.  AttachVolumeCommand
> calls
> > > > > this and then creates the XML diskdef and attaches it to the VM.
> Now,
> > > > > one thing you need to know is that createStoragePool is called
> often,
> > > > > sometimes just to make sure the pool is there. You may want to
> create
> > > > > a map in your adaptor class and keep track of pools that have been
> > > > > created, LibvirtStorageAdaptor doesn't have to do this because it
> > asks
> > > > > libvirt about which storage pools exist. There are also calls to
> > > > > refresh the pool stats, and all of the other calls can be seen in
> the
> > > > > StorageAdaptor as well. There's a createPhysicalDisk, clone, etc,
> > but
> > > > > it's probably a hold-over from 4.1, as I have the vague idea that
> > > > > volumes are created on the mgmt server via the plugin now, so
> > whatever
> > > > > doesn't apply can just be stubbed out (or optionally
> > > > > extended/reimplemented here, if you don't mind the hosts talking to
> > > > > the san api).
> > > > >
> > > > > There is a difference between attaching new volumes and launching a
> > VM
> > > > > with existing volumes.  In the latter case, the VM definition that
> > was
> > > > > passed to the KVM agent includes the disks, (StartCommand).
> > > > >
> > > > > I'd be interested in how your pool is defined for Xen, I imagine it
> > > > > would need to be kept the same. Is it just a definition to the SAN
> > > > > (ip address or some such, port number) and perhaps a volume pool
> > name?
> > > > >
> > > > > > If there is a way for me to update the ACL list on the SAN to
> have
> > > > only a
> > > > > > single KVM host have access to the volume, that would be ideal.
> > > > >
> > > > > That depends on your SAN API.  I was under the impression that the
> > > > > storage plugin framework allowed for acls, or for you to do
> whatever
> > > > > you want for create/attach/delete/snapshot, etc. You'd just call
> your
> > > > > SAN API with the host info for the ACLs prior to when the disk is
> > > > > attached (or the VM is started).  I'd have to look more at the
> > > > > framework to know the details, in 4.1 I would do this in
> > > > > getPhysicalDisk just prior to connecting up the LUN.
> > > > >
> > > > >
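
To make the preceding description concrete, here is a minimal sketch of a
custom adaptor (not the actual CloudStack or SolidFire source; the signatures
are abbreviated from memory, and the remaining StorageAdaptor methods are
omitted):

    import java.util.HashMap;
    import java.util.Map;

    // Registered in KVMStorageManager's _storageMapper map, next to the
    // "add other storage adaptors here" comment, e.g.
    //     _storageMapper.put("iscsi", new iScsiAdmStorageAdaptor());
    public class iScsiAdmStorageAdaptor implements StorageAdaptor {
        // Track pools ourselves, since libvirt is not managing them for us.
        private final Map<String, KVMStoragePool> _pools = new HashMap<String, KVMStoragePool>();

        public KVMStoragePool createStoragePool(String name, String host, int port,
                String path, String userInfo, StoragePoolType type) {
            // Called often, sometimes just to verify the pool exists,
            // so keep it cheap: record the SAN details and return.
            KVMStoragePool pool = new iScsiAdmStoragePool(name, host, port, this);
            _pools.put(name, pool);
            return pool;
        }

        public KVMStoragePool getStoragePool(String uuid) {
            return _pools.get(uuid);
        }

        // getPhysicalDisk() is where the LUN would actually be attached
        // (iscsiadm discovery + login), returning a KVMPhysicalDisk whose
        // path is the resulting block device.
    }
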
> > > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > > > > <mi...@solidfire.com> wrote:
> > > > > > OK, yeah, the ACL part will be interesting. That is a bit
> different
> > > > from
> > > > > how
> > > > > > it works with XenServer and VMware.
> > > > > >
> > > > > > Just to give you an idea how it works in 4.2 with XenServer:
> > > > > >
> > > > > > * The user creates a CS volume (this is just recorded in the
> > > > > cloud.volumes
> > > > > > table).
> > > > > >
> > > > > > * The user attaches the volume as a disk to a VM for the first
> time
> > > (if
> > > > > the
> > > > > > storage allocator picks the SolidFire plug-in, the storage
> > framework
> > > > > invokes
> > > > > > a method on the plug-in that creates a volume on the SAN...info
> > like
> > > > the
> > > > > IQN
> > > > > > of the SAN volume is recorded in the DB).
> > > > > >
> > > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed.
> It
> > > > > > determines based on a flag passed in that the storage in question
> > is
> > > > > > "CloudStack-managed" storage (as opposed to "traditional"
> > > preallocated
> > > > > > storage). This tells it to discover the iSCSI target. Once
> > discovered
> > > > it
> > > > > > determines if the iSCSI target already contains a storage
> > repository
> > > > (it
> > > > > > would if this were a re-attach situation). If it does contain an
> SR
> > > > > already,
> > > > > > then there should already be one VDI, as well. If there is no SR,
> > an
> > > SR
> > > > > is
> > > > > > created and a single VDI is created within it (that takes up
> about
> > as
> > > > > much
> > > > > > space as was requested for the CloudStack volume).
> > > > > >
> > > > > > * The normal attach-volume logic continues (it depends on the
> > > existence
> > > > > of
> > > > > > an SR and a VDI).
> > > > > >
> > > > > > The VMware case is essentially the same (mainly just substitute
> > > > datastore
> > > > > > for SR and VMDK for VDI).
> > > > > >
> > > > > > In both cases, all hosts in the cluster have discovered the iSCSI
> > > > target,
> > > > > > but only the host that is currently running the VM that is using
> > the
> > > > VDI
> > > > > (or
> > > > > > VMKD) is actually using the disk.
> > > > > >
> > > > > > Live Migration should be OK because the hypervisors communicate
> > with
> > > > > > whatever metadata they have on the SR (or datastore).
> > > > > >
> > > > > > I see what you're saying with KVM, though.
> > > > > >
> > > > > > In that case, the hosts are clustered only in CloudStack's eyes.
> CS
> > > > > controls
> > > > > > Live Migration. You don't really need a clustered filesystem on
> the
> > > > LUN.
> > > > > The
> > > > > > LUN could be handed over raw to the VM using it.
> > > > > >
> > > > > > If there is a way for me to update the ACL list on the SAN to
> have
> > > > only a
> > > > > > single KVM host have access to the volume, that would be ideal.
> > > > > >
> > > > > > Also, I agree I'll need to use iscsiadm to discover and log in to
> > the
> > > > > iSCSI
> > > > > > target. I'll also need to take the resultant new device and pass
> it
> > > > into
> > > > > the
> > > > > > VM.
> > > > > >
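
As a concrete sketch of that sequence (shelling out via the agent's Script
utility; the portal and IQN below are placeholders, and CHAP setup plus error
handling are omitted):

    // Discover the target, log in, and resolve a stable device path for the LUN.
    String portal = sanHost + ":" + sanPort;                 // e.g. "10.1.1.5:3260" (example only)
    String iqn = "iqn.2010-01.com.solidfire:example-volume"; // illustrative IQN only

    Script.runSimpleBashScript("iscsiadm -m discovery -t sendtargets -p " + portal);
    Script.runSimpleBashScript("iscsiadm -m node -T " + iqn + " -p " + portal + " --login");

    // udev exposes the LUN under a stable by-path name; this, rather than a
    // bare /dev/sdX, is what would be handed to the VM's disk definition.
    String device = "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
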
> > > > > > Does this sound reasonable? Please call me out on anything I seem
> > > > > incorrect
> > > > > > about. :)
> > > > > >
> > > > > > Thanks for all the thought on this, Marcus!
> > > > > >
> > > > > >
> > > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> > > shadowsor@gmail.com>
> > > > > > wrote:
> > > > > >>
> > > > > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
> > > > attach
> > > > > >> the disk def to the vm. You may need to do your own
> StorageAdaptor
> > > and
> > > > > run
> > > > > >> iscsiadm commands to accomplish that, depending on how the
> libvirt
> > > > iscsi
> > > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how
> it
> > > > works
> > > > > on
> > > > > >> xen at the moment, nor is it ideal.
> > > > > >>
> > > > > >> Your plugin will handle acls as far as which host can see which
> > luns
> > > > as
> > > > > >> well, I remember discussing that months ago, so that a disk
> won't
> > be
> > > > > >> connected until the hypervisor has exclusive access, so it will
> be
> > > > safe
> > > > > and
> > > > > >> fence the disk from rogue nodes that cloudstack loses
> connectivity
> > > > > with. It
> > > > > >> should revoke access to everything but the target host... Except
> > for
> > > > > during
> > > > > >> migration but we can discuss that later, there's a migration
> prep
> > > > > process
> > > > > >> where the new host can be added to the acls, and the old host
> can
> > be
> > > > > removed
> > > > > >> post migration.
> > > > > >>
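
Sketching that sequencing (the sanApi calls below are placeholders for
whatever the SAN's real API exposes, shown only to pin down the ordering
around migration):

    // Hypothetical method names, for illustration only.
    void prepareMigration(String lunIqn, String srcHostIqn, String dstHostIqn) {
        sanApi.allowAccess(lunIqn, dstHostIqn);   // both hosts may see the LUN during migration
    }

    void finishMigration(String lunIqn, String srcHostIqn) {
        sanApi.revokeAccess(lunIqn, srcHostIqn);  // back to exactly one host after cut-over
    }
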
> > > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > > > mike.tutkowski@solidfire.com
> > > > > >
> > > > > >> wrote:
> > > > > >>>
> > > > > >>> Yeah, that would be ideal.
> > > > > >>>
> > > > > >>> So, I would still need to discover the iSCSI target, log in to
> > it,
> > > > then
> > > > > >>> figure out what /dev/sdX was created as a result (and leave it
> as
> > > is
> > > > -
> > > > > do
> > > > > >>> not format it with any file system...clustered or not). I would
> > > pass
> > > > > that
> > > > > >>> device into the VM.
> > > > > >>>
> > > > > >>> Kind of accurate?
> > > > > >>>
> > > > > >>>
> > > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > > > shadowsor@gmail.com>
> > > > > >>> wrote:
> > > > > >>>>
> > > > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
> > > There
> > > > > are
> > > > > >>>> ones that work for block devices rather than files. You can
> > piggy
> > > > > back off
> > > > > >>>> of the existing disk definitions and attach it to the vm as a
> > > block
> > > > > device.
> > > > > >>>> The definition is an XML string per libvirt XML format. You
> may
> > > want
> > > > > to use
> > > > > >>>> an alternate path to the disk rather than just /dev/sdx like I
> > > > > mentioned,
> > > > > >>>> there are by-id paths to the block devices, as well as other
> > ones
> > > > > that will
> > > > > >>>> be consistent and easier for management, not sure how familiar
> > you
> > > > > are with
> > > > > >>>> device naming on Linux.
> > > > > >>>>
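
For reference, the disk definition for a raw block device boils down to XML of
roughly this shape (built here as a plain string; in the agent it would come
from the DiskDef classes, and the by-path value is just the example device
from the earlier sketch):

    String diskXml =
        "<disk type='block' device='disk'>\n" +
        "  <driver name='qemu' type='raw' cache='none'/>\n" +
        "  <source dev='/dev/disk/by-path/ip-10.1.1.5:3260-iscsi-iqn.2010-01.com.solidfire:example-volume-lun-0'/>\n" +
        "  <target dev='vdb' bus='virtio'/>\n" +
        "</disk>";
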
> > > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <
> shadowsor@gmail.com
> > >
> > > > > wrote:
> > > > > >>>>>
> > > > > >>>>> No, as that would rely on virtualized network/iscsi initiator
> > > > inside
> > > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
> > > > > hypervisor) as
> > > > > >>>>> a disk to the VM, rather than attaching some image file that
> > > > resides
> > > > > on a
> > > > > >>>>> filesystem, mounted on the host, living on a target.
> > > > > >>>>>
> > > > > >>>>> Actually, if you plan on the storage supporting live
> migration
> > I
> > > > > think
> > > > > >>>>> this is the only way. You can't put a filesystem on it and
> > mount
> > > it
> > > > > in two
> > > > > >>>>> places to facilitate migration unless its a clustered
> > filesystem,
> > > > in
> > > > > which
> > > > > >>>>> case you're back to shared mount point.
> > > > > >>>>>
> > > > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
> > > with a
> > > > > xen
> > > > > >>>>> specific cluster management, a custom CLVM. They don't use a
> > > > > filesystem
> > > > > >>>>> either.
> > > > > >>>>>
> > > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > > > > >>>>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>
> > > > > >>>>>> When you say, "wire up the lun directly to the vm," do you
> > mean
> > > > > >>>>>> circumventing the hypervisor? I didn't think we could do
> that
> > in
> > > > CS.
> > > > > >>>>>> OpenStack, on the other hand, always circumvents the
> > hypervisor,
> > > > as
> > > > > far as I
> > > > > >>>>>> know.
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > > > > shadowsor@gmail.com>
> > > > > >>>>>> wrote:
> > > > > >>>>>>>
> > > > > >>>>>>> Better to wire up the lun directly to the vm unless there
> is
> > a
> > > > good
> > > > > >>>>>>> reason not to.
> > > > > >>>>>>>
> > > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> > > shadowsor@gmail.com>
> > > > > >>>>>>> wrote:
> > > > > >>>>>>>>
> > > > > >>>>>>>> You could do that, but as mentioned I think its a mistake
> to
> > > go
> > > > to
> > > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to
> luns
> > > and
> > > > > then putting
> > > > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2
> or
> > > > even
> > > > > RAW disk
> > > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along
> > the
> > > > > way, and have
> > > > > >>>>>>>> more overhead with the filesystem and its journaling, etc.
> > > > > >>>>>>>>
> > > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > > > > >>>>>>>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM
> with
> > > CS.
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is
> by
> > > > > >>>>>>>>> selecting SharedMountPoint and specifying the location of
> > the
> > > > > share.
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> They can set up their share using Open iSCSI by
> discovering
> > > > their
> > > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
> somewhere
> > on
> > > > > their file
> > > > > >>>>>>>>> system.
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> Would it make sense for me to just do that discovery,
> > logging
> > > > in,
> > > > > >>>>>>>>> and mounting behind the scenes for them and letting the
> > > current
> > > > > code manage
> > > > > >>>>>>>>> the rest as it currently does?
> > > > > >>>>>>>>>
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > > > > >>>>>>>>> <sh...@gmail.com> wrote:
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
> > > catch
> > > > up
> > > > > >>>>>>>>>> on the work done in KVM, but this is basically just disk
> > > > > snapshots + memory
> > > > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
> > > handled
> > > > > by the SAN,
> > > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
> > > something
> > > > > else. This is
> > > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want
> to
> > > see
> > > > > how others are
> > > > > >>>>>>>>>> planning theirs.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > > > shadowsor@gmail.com
> > > > > >
> > > > > >>>>>>>>>> wrote:
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
> > style
> > > on
> > > > > an
> > > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> > format.
> > > > > Otherwise you're
> > > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
> creating a
> > > > > QCOW2 disk image,
> > > > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to
> the
> > > VM,
> > > > > and
> > > > > >>>>>>>>>>> handling snapshots on the San side via the storage
> plugin
> > > is
> > > > > best. My
> > > > > >>>>>>>>>>> impression from the storage plugin refactor was that
> > there
> > > > was
> > > > > a snapshot
> > > > > >>>>>>>>>>> service that would allow the San to handle snapshots.
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > > > > shadowsor@gmail.com>
> > > > > >>>>>>>>>>> wrote:
> > > > > >>>>>>>>>>>>
> > > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
> back
> > > end,
> > > > > if
> > > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
> > call
> > > > > your plugin for
> > > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic.
> As
> > > far
> > > > > as space, that
> > > > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
> > > carve
> > > > > out luns from a
> > > > > >>>>>>>>>>>> pool, and the snapshot spave comes from the pool and
> is
> > > > > independent of the
> > > > > >>>>>>>>>>>> LUN size the host sees.
> > > > > >>>>>>>>>>>>
> > > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> Hey Marcus,
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
> > won't
> > > > > work
> > > > > >>>>>>>>>>>>> when you take into consideration hypervisor
> snapshots?
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot,
> the
> > > VDI
> > > > > for
> > > > > >>>>>>>>>>>>> the snapshot is placed on the same storage repository
> > as
> > > > the
> > > > > volume is on.
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> > XenServer
> > > > and
> > > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> > > snapshots
> > > > > in 4.2) is I'd
> > > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the
> user
> > > > > requested for the
> > > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
> thinly
> > > > > provisions volumes,
> > > > > >>>>>>>>>>>>> so the space is not actually used unless it needs to
> > be).
> > > > > The CloudStack
> > > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
> > > until a
> > > > > hypervisor
> > > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on
> > the
> > > > > SAN volume.
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> > creation
> > > of
> > > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even
> > if
> > > > > there were support
> > > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
> > iSCSI
> > > > > target), then I
> > > > > >>>>>>>>>>>>> don't see how using this model will work.
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way
> this
> > > > works
> > > > > >>>>>>>>>>>>> with DIR?
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> What do you think?
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> Thanks
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
> access
> > > > today.
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
> > > well
> > > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> > > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > > > >>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe
> it
> > > > just
> > > > > >>>>>>>>>>>>>>> acts like a
> > > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that.
> The
> > > > > end-user
> > > > > >>>>>>>>>>>>>>> is
> > > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
> > > hosts
> > > > > can
> > > > > >>>>>>>>>>>>>>> access,
> > > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing
> the
> > > > > storage.
> > > > > >>>>>>>>>>>>>>> It could
> > > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
> filesystem,
> > > > > >>>>>>>>>>>>>>> cloudstack just
> > > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
> images.
> > > > > >>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> > > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at
> the
> > > same
> > > > > >>>>>>>>>>>>>>> > time.
> > > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > > > > >>>>>>>>>>>>>>> >
> > > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> > > > > >>>>>>>>>>>>>>> >>
> > > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> > > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> > > > > >>>>>>>>>>>>>>> >> -----------------------------------------
> > > > > >>>>>>>>>>>>>>> >> default              active     yes
> > > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> > > > > >>>>>>>>>>>>>>> >>
> > > > > >>>>>>>>>>>>>>> >>
> > > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
> > > based
> > > > on
> > > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have
> one
> > > LUN,
> > > > > so
> > > > > >>>>>>>>>>>>>>> >>> there would only
> > > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> > > > (libvirt)
> > > > > >>>>>>>>>>>>>>> >>> storage pool.
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
> iSCSI
> > > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> > > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
> > libvirt
> > > > does
> > > > > >>>>>>>>>>>>>>> >>> not support
> > > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see
> if
> > > > > libvirt
> > > > > >>>>>>>>>>>>>>> >>> supports
> > > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
> > > since
> > > > > >>>>>>>>>>>>>>> >>> each one of its
> > > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> > > > > targets/LUNs).
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> > > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> > > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         }
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         @Override
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         }
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>     }
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
> currently
> > > > being
> > > > > >>>>>>>>>>>>>>> >>>> used, but I'm
> > > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
> > someone
> > > > > >>>>>>>>>>>>>>> >>>> selects the
> > > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
> iSCSI,
> > is
> > > > > that
> > > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> > > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> Thanks!
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
> Sorensen
> > > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> > > > > >>>>>>>>>>>>>>> >>>> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > http://libvirt.org/storage.html#StorageBackendISCSI
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
> > > server,
> > > > > and
> > > > > >>>>>>>>>>>>>>> >>>>> cannot be
> > > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
> believe
> > > > your
> > > > > >>>>>>>>>>>>>>> >>>>> plugin will take
> > > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
> logging
> > in
> > > > and
> > > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> > > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work
> in
> > > the
> > > > > Xen
> > > > > >>>>>>>>>>>>>>> >>>>> stuff).
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
> > provides
> > > a
> > > > > 1:1
> > > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> > > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device
> > as
> > > a
> > > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> > > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
> > > about
> > > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> > > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write
> your
> > > own
> > > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> > > > > >>>>>>>>>>>>>>> >>>>> rather than changing
> > LibvirtStorageAdaptor.java.
> > > >  We
> > > > > >>>>>>>>>>>>>>> >>>>> can cross that
> > > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
> > java
> > > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> > > > > >>>>>>>>>>>>>>> >>>>>
> > http://libvirt.org/sources/java/javadoc/  Normally,
> > > > > >>>>>>>>>>>>>>> >>>>> you'll see a
> > > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
> > > that
> > > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> > > > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see
> > how
> > > > that
> > > > > >>>>>>>>>>>>>>> >>>>> is done for
> > > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
> > java
> > > > code
> > > > > >>>>>>>>>>>>>>> >>>>> to see if you
> > > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
> > > > storage
> > > > > >>>>>>>>>>>>>>> >>>>> pools before you
> > > > > >>>>>>>>>>>>>>> >>>>> get started.
> > > > > >>>>>>>>>>>>>>> >>>>>
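
A small stand-alone test along those lines might look like this (a sketch
assuming the libvirt Java bindings' Connect/storagePoolCreateXML calls and the
iscsi pool XML from the page above; the host, IQN and pool name are
placeholders):

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class IscsiPoolTest {
        public static void main(String[] args) throws Exception {
            String poolXml =
                "<pool type='iscsi'>" +
                "  <name>test-iscsi-pool</name>" +
                "  <source>" +
                "    <host name='10.1.1.5'/>" +
                "    <device path='iqn.2010-01.com.solidfire:example-volume'/>" +
                "  </source>" +
                "  <target><path>/dev/disk/by-path</path></target>" +
                "</pool>";

            Connect conn = new Connect("qemu:///system");
            StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);

            // With a one-LUN-per-target SAN, the single LUN should show up
            // as the only volume in the pool.
            for (String vol : pool.listVolumes()) {
                System.out.println("found volume: " + vol);
            }
        }
    }
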
> > > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
> Tutkowski
> > > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
> > more,
> > > > but
> > > > > >>>>>>>>>>>>>>> >>>>> > you figure it
> > > > > >>>>>>>>>>>>>>> >>>>> > supports
> > > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
> > targets,
> > > > > >>>>>>>>>>>>>>> >>>>> > right?
> > > > > >>>>>>>>>>>>>>> >>>>> >
> > > > > >>>>>>>>>>>>>>> >>>>> >
> > > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
> > Tutkowski
> > > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
> > > > classes
> > > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> > > > > >>>>>>>>>>>>>>> >>>>> >> last
> > > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
> > > Sorensen
> > > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> > > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
> > iscsi
> > > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> > > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
> > packages
> > > > for
> > > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> > > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> > > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
> > initiator
> > > > > login.
> > > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> > > > > >>>>>>>>>>>>>>> >>>>> >>> sent
> > > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
> LibvirtStorageAdaptor.java
> > > and
> > > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> > > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> > > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
> > > > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> > > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
> > release
> > > I
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> > > > > framework
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> times
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
> > > delete
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a
> 1:1
> > > > > mapping
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected
> > the
> > > > > admin
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
> > > would
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
> > > needed
> > > > > to
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
> > could
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
> > KVM.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
> might
> > > > work
> > > > > on
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> still
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
> > > will
> > > > > need
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have
> to
> > > > expect
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
> > > this
> > > > to
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> --
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
> > Inc.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
> > cloud™



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
OK, if you log in per lun, then just saving the info for future reference
is fine.

Does CS provide storage stats at all, then, for other platforms?
On Sep 17, 2013 8:01 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Plus, when you log in to a LUN, you need the CHAP info and this info is
> required for each LUN (as opposed to being for the SAN).
>
> This is how my createStoragePool currently looks, so I think we're on the
> same page.
>
>
> public KVMStoragePool createStoragePool(String name, String host, int port,
> String path, String userInfo, StoragePoolType type) {
>
>         iScsiAdmStoragePool storagePool = new iScsiAdmStoragePool(name,
> host, port, this);
>
>         _mapUuidToAdaptor.put(name, storagePool);
>
>         return storagePool;
>
>     }
>
>
> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
>
> > What do you do with Xen? I imagine the user enters the SAN details when
> > registering the pool? And the pool details are basically just instructions
> on
> > how to log into a target, correct?
> >
> > You can choose to log in a KVM host to the target during
> createStoragePool
> > and save the pool in a map, or just save the pool info in a map for
> future
> > reference by uuid, for when you do need to log in. The createStoragePool
> > then just becomes a way to save the pool info to the agent. Personally,
> I'd
> > log in on the pool create and look/scan for specific luns when they're
> > needed, but I haven't thought it through thoroughly. I just say that
> mainly
> > because login only happens once, the first time the pool is used, and
> every
> > other storage command is about discovering new luns or maybe
> > deleting/disconnecting luns no longer needed. On the other hand, you
> could
> > do all of the above: log in on pool create, then also check if you're
> > logged in on other commands and log in if you've lost connection.
> >
> > With Xen, what does your registered pool   show in the UI for avail/used
> > capacity, and how does it get that info? I assume there is some sort of
> > disk pool that the luns are carved from, and that your plugin is called
> to
> > talk to the SAN and expose to the user how much of that pool has been
> > allocated. Knowing how you already solved these problems with Xen will
> help
> > figure out what to do with KVM.
> >
> > If this is the case, I think the plugin can continue to handle it rather
> > than getting details from the agent. I'm not sure if that means nulls are
> > OK for these on the agent side or what, I need to look at the storage
> > plugin arch more closely.
> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mi...@solidfire.com>
> > wrote:
> >
> > > Hey Marcus,
> > >
> > > I'm reviewing your e-mails as I implement the necessary methods in new
> > > classes.
> > >
> > > "So, referencing StorageAdaptor.java, createStoragePool accepts all of
> > > the pool data (host, port, name, path) which would be used to log the
> > > host into the initiator."
> > >
> > > Can you tell me, in my case, since a storage pool (primary storage) is
> > > actually the SAN, I wouldn't really be logging into anything at this
> > point,
> > > correct?
> > >
> > > Also, what kind of capacity, available, and used bytes make sense to
> > report
> > > for KVMStoragePool (since KVMStoragePool represents the SAN in my case
> > and
> > > not an individual LUN)?
> > >
> > > Thanks!
> > >
> > >
> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com
> > > >wrote:
> > >
> > > > Ok, KVM will be close to that, of course, because only the hypervisor
> > > > classes differ, the rest is all mgmt server. Creating a volume is
> just
> > > > a db entry until it's deployed for the first time.
> AttachVolumeCommand
> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> > > > StorageAdaptor) to log in the host to the target and then you have a
> > > > block device.  Maybe libvirt will do that for you, but my quick read
> > > > made it sound like the iscsi libvirt pool type is actually a pool,
> not
> > > > a lun or volume, so you'll need to figure out if that works or if
> > > > you'll have to use iscsiadm commands.
> > > >
> > > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> > > > doesn't really manage your pool the way you want), you're going to
> > > > have to create a version of KVMStoragePool class and a StorageAdaptor
> > > > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> > > > implementing all of the methods, then in KVMStorageManager.java
> > > > there's a "_storageMapper" map. This is used to select the correct
> > > > adaptor, you can see in this file that every call first pulls the
> > > > correct adaptor out of this map via getStorageAdaptor. So you can see
> > > > a comment in this file that says "add other storage adaptors here",
> > > > where it puts to this map, this is where you'd register your adaptor.
> > > >
> > > > So, referencing StorageAdaptor.java, createStoragePool accepts all of
> > > > the pool data (host, port, name, path) which would be used to log the
> > > > host into the initiator. I *believe* the method getPhysicalDisk will
> > > > need to do the work of attaching the lun.  AttachVolumeCommand calls
> > > > this and then creates the XML diskdef and attaches it to the VM. Now,
> > > > one thing you need to know is that createStoragePool is called often,
> > > > sometimes just to make sure the pool is there. You may want to create
> > > > a map in your adaptor class and keep track of pools that have been
> > > > created, LibvirtStorageAdaptor doesn't have to do this because it
> asks
> > > > libvirt about which storage pools exist. There are also calls to
> > > > refresh the pool stats, and all of the other calls can be seen in the
> > > > StorageAdaptor as well. There's a createPhysicalDisk, clone, etc,
> but
> > > > it's probably a hold-over from 4.1, as I have the vague idea that
> > > > volumes are created on the mgmt server via the plugin now, so
> whatever
> > > > doesn't apply can just be stubbed out (or optionally
> > > > extended/reimplemented here, if you don't mind the hosts talking to
> > > > the san api).
> > > >
> > > > There is a difference between attaching new volumes and launching a
> VM
> > > > with existing volumes.  In the latter case, the VM definition that
> was
> > > > passed to the KVM agent includes the disks, (StartCommand).
> > > >
> > > > I'd be interested in how your pool is defined for Xen, I imagine it
> > > > would need to be kept the same. Is it just a definition to the SAN
> > > > (ip address or some such, port number) and perhaps a volume pool
> name?
> > > >
> > > > > If there is a way for me to update the ACL list on the SAN to have
> > > only a
> > > > > single KVM host have access to the volume, that would be ideal.
> > > >
> > > > That depends on your SAN API.  I was under the impression that the
> > > > storage plugin framework allowed for acls, or for you to do whatever
> > > > you want for create/attach/delete/snapshot, etc. You'd just call your
> > > > SAN API with the host info for the ACLs prior to when the disk is
> > > > attached (or the VM is started).  I'd have to look more at the
> > > > framework to know the details, in 4.1 I would do this in
> > > > getPhysicalDisk just prior to connecting up the LUN.
> > > >
> > > >
> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > > > <mi...@solidfire.com> wrote:
> > > > > OK, yeah, the ACL part will be interesting. That is a bit different
> > > from
> > > > how
> > > > > it works with XenServer and VMware.
> > > > >
> > > > > Just to give you an idea how it works in 4.2 with XenServer:
> > > > >
> > > > > * The user creates a CS volume (this is just recorded in the
> > > > cloud.volumes
> > > > > table).
> > > > >
> > > > > * The user attaches the volume as a disk to a VM for the first time
> > (if
> > > > the
> > > > > storage allocator picks the SolidFire plug-in, the storage
> framework
> > > > invokes
> > > > > a method on the plug-in that creates a volume on the SAN...info
> like
> > > the
> > > > IQN
> > > > > of the SAN volume is recorded in the DB).
> > > > >
> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
> > > > > determines based on a flag passed in that the storage in question
> is
> > > > > "CloudStack-managed" storage (as opposed to "traditional"
> > preallocated
> > > > > storage). This tells it to discover the iSCSI target. Once
> discovered
> > > it
> > > > > determines if the iSCSI target already contains a storage
> repository
> > > (it
> > > > > would if this were a re-attach situation). If it does contain an SR
> > > > already,
> > > > > then there should already be one VDI, as well. If there is no SR,
> an
> > SR
> > > > is
> > > > > created and a single VDI is created within it (that takes up about
> as
> > > > much
> > > > > space as was requested for the CloudStack volume).
> > > > >
> > > > > * The normal attach-volume logic continues (it depends on the
> > existence
> > > > of
> > > > > an SR and a VDI).
> > > > >
> > > > > The VMware case is essentially the same (mainly just substitute
> > > datastore
> > > > > for SR and VMDK for VDI).
> > > > >
> > > > > In both cases, all hosts in the cluster have discovered the iSCSI
> > > target,
> > > > > but only the host that is currently running the VM that is using
> the
> > > VDI
> > > > (or
> > > > > VMKD) is actually using the disk.
> > > > >
> > > > > Live Migration should be OK because the hypervisors communicate
> with
> > > > > whatever metadata they have on the SR (or datastore).
> > > > >
> > > > > I see what you're saying with KVM, though.
> > > > >
> > > > > In that case, the hosts are clustered only in CloudStack's eyes. CS
> > > > controls
> > > > > Live Migration. You don't really need a clustered filesystem on the
> > > LUN.
> > > > The
> > > > > LUN could be handed over raw to the VM using it.
> > > > >
> > > > > If there is a way for me to update the ACL list on the SAN to have
> > > only a
> > > > > single KVM host have access to the volume, that would be ideal.
> > > > >
> > > > > Also, I agree I'll need to use iscsiadm to discover and log in to
> the
> > > > iSCSI
> > > > > target. I'll also need to take the resultant new device and pass it
> > > into
> > > > the
> > > > > VM.
> > > > >
> > > > > Does this sound reasonable? Please call me out on anything I seem
> > > > incorrect
> > > > > about. :)
> > > > >
> > > > > Thanks for all the thought on this, Marcus!
> > > > >
> > > > >
> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> > shadowsor@gmail.com>
> > > > > wrote:
> > > > >>
> > > > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
> > > attach
> > > > >> the disk def to the vm. You may need to do your own StorageAdaptor
> > and
> > > > run
> > > > >> iscsiadm commands to accomplish that, depending on how the libvirt
> > > iscsi
> > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
> > > works
> > > > on
> > > > >> xen at the moment, nor is it ideal.
> > > > >>
> > > > >> Your plugin will handle acls as far as which host can see which
> luns
> > > as
> > > > >> well, I remember discussing that months ago, so that a disk won't
> be
> > > > >> connected until the hypervisor has exclusive access, so it will be
> > > safe
> > > > and
> > > > >> fence the disk from rogue nodes that cloudstack loses connectivity
> > > > with. It
> > > > >> should revoke access to everything but the target host... Except
> for
> > > > during
> > > > >> migration but we can discuss that later, there's a migration prep
> > > > process
> > > > >> where the new host can be added to the acls, and the old host can
> be
> > > > removed
> > > > >> post migration.
> > > > >>
> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > > mike.tutkowski@solidfire.com
> > > > >
> > > > >> wrote:
> > > > >>>
> > > > >>> Yeah, that would be ideal.
> > > > >>>
> > > > >>> So, I would still need to discover the iSCSI target, log in to
> it,
> > > then
> > > > >>> figure out what /dev/sdX was created as a result (and leave it as
> > is
> > > -
> > > > do
> > > > >>> not format it with any file system...clustered or not). I would
> > pass
> > > > that
> > > > >>> device into the VM.
> > > > >>>
> > > > >>> Kind of accurate?
> > > > >>>
> > > > >>>
> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > > shadowsor@gmail.com>
> > > > >>> wrote:
> > > > >>>>
> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
> > There
> > > > are
> > > > >>>> ones that work for block devices rather than files. You can
> piggy
> > > > back off
> > > > >>>> of the existing disk definitions and attach it to the vm as a
> > block
> > > > device.
> > > > >>>> The definition is an XML string per libvirt XML format. You may
> > want
> > > > to use
> > > > >>>> an alternate path to the disk rather than just /dev/sdx like I
> > > > mentioned,
> > > > >>>> there are by-id paths to the block devices, as well as other
> ones
> > > > that will
> > > > >>>> be consistent and easier for management, not sure how familiar
> you
> > > > are with
> > > > >>>> device naming on Linux.
> > > > >>>>
> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <shadowsor@gmail.com
> >
> > > > wrote:
> > > > >>>>>
> > > > >>>>> No, as that would rely on virtualized network/iscsi initiator
> > > inside
> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
> > > > hypervisor) as
> > > > >>>>> a disk to the VM, rather than attaching some image file that
> > > resides
> > > > on a
> > > > >>>>> filesystem, mounted on the host, living on a target.
> > > > >>>>>
> > > > >>>>> Actually, if you plan on the storage supporting live migration
> I
> > > > think
> > > > >>>>> this is the only way. You can't put a filesystem on it and
> mount
> > it
> > > > in two
> > > > >>>>> places to facilitate migration unless its a clustered
> filesystem,
> > > in
> > > > which
> > > > >>>>> case you're back to shared mount point.
> > > > >>>>>
> > > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
> > with a
> > > > xen
> > > > >>>>> specific cluster management, a custom CLVM. They don't use a
> > > > filesystem
> > > > >>>>> either.
> > > > >>>>>
> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > > > >>>>> <mi...@solidfire.com> wrote:
> > > > >>>>>>
> > > > >>>>>> When you say, "wire up the lun directly to the vm," do you
> mean
> > > > >>>>>> circumventing the hypervisor? I didn't think we could do that
> in
> > > CS.
> > > > >>>>>> OpenStack, on the other hand, always circumvents the
> hypervisor,
> > > as
> > > > far as I
> > > > >>>>>> know.
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > > > shadowsor@gmail.com>
> > > > >>>>>> wrote:
> > > > >>>>>>>
> > > > >>>>>>> Better to wire up the lun directly to the vm unless there is
> a
> > > good
> > > > >>>>>>> reason not to.
> > > > >>>>>>>
> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> > shadowsor@gmail.com>
> > > > >>>>>>> wrote:
> > > > >>>>>>>>
> > > > >>>>>>>> You could do that, but as mentioned I think its a mistake to
> > go
> > > to
> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
> > and
> > > > then putting
> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
> > > even
> > > > RAW disk
> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along
> the
> > > > way, and have
> > > > >>>>>>>> more overhead with the filesystem and its journaling, etc.
> > > > >>>>>>>>
> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > > > >>>>>>>> <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>
> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
> > CS.
> > > > >>>>>>>>>
> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is by
> > > > >>>>>>>>> selecting SharedMountPoint and specifying the location of
> the
> > > > share.
> > > > >>>>>>>>>
> > > > >>>>>>>>> They can set up their share using Open iSCSI by discovering
> > > their
> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere
> on
> > > > their file
> > > > >>>>>>>>> system.
> > > > >>>>>>>>>
> > > > >>>>>>>>> Would it make sense for me to just do that discovery,
> logging
> > > in,
> > > > >>>>>>>>> and mounting behind the scenes for them and letting the
> > current
> > > > code manage
> > > > >>>>>>>>> the rest as it currently does?
> > > > >>>>>>>>>
> > > > >>>>>>>>>
> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > > > >>>>>>>>> <sh...@gmail.com> wrote:
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
> > catch
> > > up
> > > > >>>>>>>>>> on the work done in KVM, but this is basically just disk
> > > > snapshots + memory
> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
> > handled
> > > > by the SAN,
> > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
> > something
> > > > else. This is
> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want to
> > see
> > > > how others are
> > > > >>>>>>>>>> planning theirs.
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > > shadowsor@gmail.com
> > > > >
> > > > >>>>>>>>>> wrote:
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
> style
> > on
> > > > an
> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> format.
> > > > Otherwise you're
> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
> > > > QCOW2 disk image,
> > > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
> > VM,
> > > > and
> > > > >>>>>>>>>>> handling snapshots on the San side via the storage plugin
> > is
> > > > best. My
> > > > >>>>>>>>>>> impression from the storage plugin refactor was that
> there
> > > was
> > > > a snapshot
> > > > >>>>>>>>>>> service that would allow the San to handle snapshots.
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > > > shadowsor@gmail.com>
> > > > >>>>>>>>>>> wrote:
> > > > >>>>>>>>>>>>
> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
> > end,
> > > > if
> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
> call
> > > > your plugin for
> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
> > far
> > > > as space, that
> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
> > carve
> > > > out luns from a
> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
> > > > independent of the
> > > > >>>>>>>>>>>> LUN size the host sees.
> > > > >>>>>>>>>>>>
> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> Hey Marcus,
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
> won't
> > > > work
> > > > >>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
> > VDI
> > > > for
> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage repository
> as
> > > the
> > > > volume is on.
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> XenServer
> > > and
> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> > snapshots
> > > > in 4.2) is I'd
> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the user
> > > > requested for the
> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
> > > > provisions volumes,
> > > > >>>>>>>>>>>>> so the space is not actually used unless it needs to
> be).
> > > > The CloudStack
> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
> > until a
> > > > hypervisor
> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on
> the
> > > > SAN volume.
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> creation
> > of
> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even
> if
> > > > there were support
> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
> iSCSI
> > > > target), then I
> > > > >>>>>>>>>>>>> don't see how using this model will work.
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
> > > works
> > > > >>>>>>>>>>>>> with DIR?
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> What do you think?
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> Thanks
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
> > > today.
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
> > well
> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > > >>>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
> > > just
> > > > >>>>>>>>>>>>>>> acts like a
> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
> > > > end-user
> > > > >>>>>>>>>>>>>>> is
> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
> > hosts
> > > > can
> > > > >>>>>>>>>>>>>>> access,
> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
> > > > storage.
> > > > >>>>>>>>>>>>>>> It could
> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
> > > > >>>>>>>>>>>>>>> cloudstack just
> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM images.
> > > > >>>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
> > same
> > > > >>>>>>>>>>>>>>> > time.
> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > > > >>>>>>>>>>>>>>> >
> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> > > > >>>>>>>>>>>>>>> >>
> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
> > > > >>>>>>>>>>>>>>> >> default              active     yes
> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> > > > >>>>>>>>>>>>>>> >>
> > > > >>>>>>>>>>>>>>> >>
> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
> > based
> > > on
> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
> > LUN,
> > > > so
> > > > >>>>>>>>>>>>>>> >>> there would only
> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> > > (libvirt)
> > > > >>>>>>>>>>>>>>> >>> storage pool.
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
> libvirt
> > > does
> > > > >>>>>>>>>>>>>>> >>> not support
> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
> > > > libvirt
> > > > >>>>>>>>>>>>>>> >>> supports
> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
> > since
> > > > >>>>>>>>>>>>>>> >>> each one of its
> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> > > > targets/LUNs).
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>         }
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>         @Override
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>         }
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>     }
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
> > > being
> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
> someone
> > > > >>>>>>>>>>>>>>> >>>> selects the
> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI,
> is
> > > > that
> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>> Thanks!
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> > > > >>>>>>>>>>>>>>> >>>> wrote:
> > > > >>>>>>>>>>>>>>> >>>>>
> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> > > > >>>>>>>>>>>>>>> >>>>>
> > > > >>>>>>>>>>>>>>> >>>>>
> > > http://libvirt.org/storage.html#StorageBackendISCSI
> > > > >>>>>>>>>>>>>>> >>>>>
> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
> > server,
> > > > and
> > > > >>>>>>>>>>>>>>> >>>>> cannot be
> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe
> > > your
> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging
> in
> > > and
> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in
> > the
> > > > Xen
> > > > >>>>>>>>>>>>>>> >>>>> stuff).
> > > > >>>>>>>>>>>>>>> >>>>>
> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
> provides
> > a
> > > > 1:1
> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device
> as
> > a
> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
> > about
> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your
> > own
> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
> LibvirtStorageAdaptor.java.
> > >  We
> > > > >>>>>>>>>>>>>>> >>>>> can cross that
> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> > > > >>>>>>>>>>>>>>> >>>>>
> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
> java
> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> > > > >>>>>>>>>>>>>>> >>>>>
> http://libvirt.org/sources/java/javadoc/ Normally,
> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
> > that
> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> > > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see
> how
> > > that
> > > > >>>>>>>>>>>>>>> >>>>> is done for
> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
> java
> > > code
> > > > >>>>>>>>>>>>>>> >>>>> to see if you
> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
> > > storage
> > > > >>>>>>>>>>>>>>> >>>>> pools before you
> > > > >>>>>>>>>>>>>>> >>>>> get started.
> > > > >>>>>>>>>>>>>>> >>>>>
> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
> more,
> > > but
> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
> > > > >>>>>>>>>>>>>>> >>>>> > supports
> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
> targets,
> > > > >>>>>>>>>>>>>>> >>>>> > right?
> > > > >>>>>>>>>>>>>>> >>>>> >
> > > > >>>>>>>>>>>>>>> >>>>> >
> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
> Tutkowski
> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
> > > classes
> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> > > > >>>>>>>>>>>>>>> >>>>> >> last
> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
> > Sorensen
> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> > > > >>>>>>>>>>>>>>> >>>>> >>>
> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
> iscsi
> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
> packages
> > > for
> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
> initiator
> > > > login.
> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java
> > and
> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> > > > >>>>>>>>>>>>>>> >>>>> >>>
> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
> > > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
> release
> > I
> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> > > > framework
> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
> > delete
> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
> > > > mapping
> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected
> the
> > > > admin
> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
> > would
> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
> > needed
> > > > to
> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
> could
> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
> KVM.
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
> > > work
> > > > on
> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
> > will
> > > > need
> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
> > > expect
> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
> > this
> > > to
> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
> Inc.
> > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
> cloud™
> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > >>>>>>>>>>>>>>> >>>>> >> --
> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
> > > > >>>>>>>>>>>>>>> >>>>> >
> > > > >>>>>>>>>>>>>>> >>>>> >
> > > > >>>>>>>>>>>>>>> >>>>> >
> > > > >>>>>>>>>>>>>>> >>>>> >
> > > > >>>>>>>>>>>>>>> >>>>> > --
> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
> > > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>>
> > > > >>>>>>>>>>>>>>> >>>> --
> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> > > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>>
> > > > >>>>>>>>>>>>>>> >>> --
> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> > > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> > > > >>>>>>>>>>>>>>> >>
> > > > >>>>>>>>>>>>>>> >>
> > > > >>>>>>>>>>>>>>> >>
> > > > >>>>>>>>>>>>>>> >>
> > > > >>>>>>>>>>>>>>> >> --
> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
> > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>> --
> > > > >>>>>>>>>>>>>> Mike Tutkowski
> > > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > > > >>>>>>>>>>>>>> o: 303.746.7302
> > > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> --
> > > > >>>>>>>>>>>>> Mike Tutkowski
> > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > > > >>>>>>>>>>>>> o: 303.746.7302
> > > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > > >>>>>>>>>
> > > > >>>>>>>>>
> > > > >>>>>>>>>
> > > > >>>>>>>>>
> > > > >>>>>>>>> --
> > > > >>>>>>>>> Mike Tutkowski
> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > > >>>>>>>>> e: mike.tutkowski@solidfire.com
> > > > >>>>>>>>> o: 303.746.7302
> > > > >>>>>>>>> Advancing the way the world uses the cloud™
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>> --
> > > > >>>>>> Mike Tutkowski
> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > > >>>>>> e: mike.tutkowski@solidfire.com
> > > > >>>>>> o: 303.746.7302
> > > > >>>>>> Advancing the way the world uses the cloud™
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>> --
> > > > >>> Mike Tutkowski
> > > > >>> Senior CloudStack Developer, SolidFire Inc.
> > > > >>> e: mike.tutkowski@solidfire.com
> > > > >>> o: 303.746.7302
> > > > >>> Advancing the way the world uses the cloud™
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Mike Tutkowski
> > > > > Senior CloudStack Developer, SolidFire Inc.
> > > > > e: mike.tutkowski@solidfire.com
> > > > > o: 303.746.7302
> > > > > Advancing the way the world uses the cloud™
> > > >
> > >
> > >
> > >
> > > --
> > > *Mike Tutkowski*
> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > e: mike.tutkowski@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the
> > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > *™*
> > >
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Woops...I named this incorrectly:

_mapUuidToAdaptor.put(name, storagePool);

should be

_mapUuidToPool.put(name, storagePool);
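
For reference, here's the method with that rename applied (just a sketch of
my work-in-progress adaptor; the getStoragePool lookup is my guess at how the
pool will get pulled back out of the map later):

// (field lives on the adaptor class; java.util.Map/HashMap imports assumed)
private Map<String, KVMStoragePool> _mapUuidToPool = new HashMap<String, KVMStoragePool>();

public KVMStoragePool createStoragePool(String name, String host, int port, String path, String userInfo, StoragePoolType type) {

    iScsiAdmStoragePool storagePool = new iScsiAdmStoragePool(name, host, port, this);

    // track the pool by its UUID (passed in as 'name') so later calls can find it
    _mapUuidToPool.put(name, storagePool);

    return storagePool;

}

public KVMStoragePool getStoragePool(String uuid) {

    return _mapUuidToPool.get(uuid);

}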


On Tue, Sep 17, 2013 at 8:01 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Plus, when you log in to a LUN, you need the CHAP info and this info is
> required for each LUN (as opposed to being for the SAN).
>
> This is how my createStoragePool currently looks, so I think we're on the
> same page.
>
>
> public KVMStoragePool createStoragePool(String name, String host, int port, String path, String userInfo, StoragePoolType type) {
>
>         iScsiAdmStoragePool storagePool = new iScsiAdmStoragePool(name,
> host, port, this);
>
>         _mapUuidToAdaptor.put(name, storagePool);
>
>         return storagePool;
>
>     }
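
As far as the CHAP info goes, this is the rough shape I have in mind for the
login side (class and method names here are placeholders, and it assumes the
Open iSCSI utilities, i.e. iscsiadm, are installed on the KVM host):

import java.io.IOException;
import java.util.Arrays;

public class iScsiAdmHelper {

    // Log in to one iSCSI target (one LUN per target on our SAN), applying the
    // per-volume CHAP credentials first, then return where the block device lands.
    public static String login(String host, int port, String iqn, String chapUser, String chapPassword) throws IOException, InterruptedException {

        String portal = host + ":" + port;

        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--op", "update", "-n", "node.session.auth.authmethod", "-v", "CHAP");
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--op", "update", "-n", "node.session.auth.username", "-v", chapUser);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--op", "update", "-n", "node.session.auth.password", "-v", chapPassword);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

        // udev should surface the LUN at a stable path like this, which I'd rather
        // hand to the disk def than a raw /dev/sdX name
        return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
    }

    private static void run(String... cmd) throws IOException, InterruptedException {

        Process p = new ProcessBuilder(cmd).inheritIO().start();

        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + Arrays.asList(cmd));
        }
    }
}

Whether that login happens in createStoragePool or lazily when a disk is first
attached is still an open question for me, per the discussion below.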
>
>
> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> What do you do with Xen? I imagine the user enters the SAN details when
>> registering the pool? And the pool details are basically just instructions
>> on
>> how to log into a target, correct?
>>
>> You can choose to log in a KVM host to the target during createStoragePool
>> and save the pool in a map, or just save the pool info in a map for future
>> reference by uuid, for when you do need to log in. The createStoragePool
>> then just becomes a way to save the pool info to the agent. Personally,
>> I'd
>> log in on the pool create and look/scan for specific luns when they're
>> needed, but I haven't thought it through thoroughly. I just say that
>> mainly
>> because login only happens once, the first time the pool is used, and
>> every
>> other storage command is about discovering new luns or maybe
>> deleting/disconnecting luns no longer needed. On the other hand, you could
>> do all of the above: log in on pool create, then also check if you're
>> logged in on other commands and log in if you've lost connection.
>>
>> With Xen, what does your registered pool show in the UI for avail/used
>> capacity, and how does it get that info? I assume there is some sort of
>> disk pool that the luns are carved from, and that your plugin is called to
>> talk to the SAN and expose to the user how much of that pool has been
>> allocated. Knowing how you already solve these problems with Xen will
>> help
>> figure out what to do with KVM.
>>
>> If this is the case, I think the plugin can continue to handle it rather
>> than getting details from the agent. I'm not sure if that means nulls are
>> OK for these on the agent side or what, I need to look at the storage
>> plugin arch more closely.
>> On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>
>> > Hey Marcus,
>> >
>> > I'm reviewing your e-mails as I implement the necessary methods in new
>> > classes.
>> >
>> > "So, referencing StorageAdaptor.java, createStoragePool accepts all of
>> > the pool data (host, port, name, path) which would be used to log the
>> > host into the initiator."
>> >
>> > Can you tell me, in my case, since a storage pool (primary storage) is
>> > actually the SAN, I wouldn't really be logging into anything at this
>> point,
>> > correct?
>> >
>> > Also, what kind of capacity, available, and used bytes make sense to
>> report
>> > for KVMStoragePool (since KVMStoragePool represents the SAN in my case
>> and
>> > not an individual LUN)?
>> >
>> > Thanks!
>> >
>> >
>> > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com
>> > >wrote:
>> >
>> > > Ok, KVM will be close to that, of course, because only the hypervisor
>> > > classes differ, the rest is all mgmt server. Creating a volume is just
>> > > a db entry until it's deployed for the first time. AttachVolumeCommand
>> > > on the agent side (LibvirtStorageAdaptor.java is analogous to
>> > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
>> > > StorageAdaptor) to log in the host to the target and then you have a
>> > > block device.  Maybe libvirt will do that for you, but my quick read
>> > > made it sound like the iscsi libvirt pool type is actually a pool, not
>> > > a lun or volume, so you'll need to figure out if that works or if
>> > > you'll have to use iscsiadm commands.
>> > >
>> > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
>> > > doesn't really manage your pool the way you want), you're going to
>> > > have to create a version of KVMStoragePool class and a StorageAdaptor
>> > > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
>> > > implementing all of the methods, then in KVMStorageManager.java
>> > > there's a "_storageMapper" map. This is used to select the correct
>> > > adaptor, you can see in this file that every call first pulls the
>> > > correct adaptor out of this map via getStorageAdaptor. So you can see
>> > > a comment in this file that says "add other storage adaptors here",
>> > > where it puts to this map, this is where you'd register your adaptor.
>> > >
>> > > So, referencing StorageAdaptor.java, createStoragePool accepts all of
>> > > the pool data (host, port, name, path) which would be used to log the
>> > > host into the initiator. I *believe* the method getPhysicalDisk will
>> > > need to do the work of attaching the lun.  AttachVolumeCommand calls
>> > > this and then creates the XML diskdef and attaches it to the VM. Now,
>> > > one thing you need to know is that createStoragePool is called often,
>> > > sometimes just to make sure the pool is there. You may want to create
>> > > a map in your adaptor class and keep track of pools that have been
>> > > created, LibvirtStorageAdaptor doesn't have to do this because it asks
>> > > libvirt about which storage pools exist. There are also calls to
>> > > refresh the pool stats, and all of the other calls can be seen in the
>> > > StorageAdaptor as well. There's a createPhysicalDisk, clone, etc, but
>> > > it's probably a hold-over from 4.1, as I have the vague idea that
>> > > volumes are created on the mgmt server via the plugin now, so whatever
>> > > doesn't apply can just be stubbed out (or optionally
>> > > extended/reimplemented here, if you don't mind the hosts talking to
>> > > the san api).
>> > >
>> > > There is a difference between attaching new volumes and launching a VM
>> > > with existing volumes.  In the latter case, the VM definition that was
>> > > passed to the KVM agent includes the disks, (StartCommand).
>> > >
>> > > I'd be interested in how your pool is defined for Xen, I imagine it
>> > > would need to be kept the same. Is it just a definition to the SAN
>> > > (ip address or some such, port number) and perhaps a volume pool name?
>> > >
>> > > > If there is a way for me to update the ACL list on the SAN to have
>> > only a
>> > > > single KVM host have access to the volume, that would be ideal.
>> > >
>> > > That depends on your SAN API.  I was under the impression that the
>> > > storage plugin framework allowed for acls, or for you to do whatever
>> > > you want for create/attach/delete/snapshot, etc. You'd just call your
>> > > SAN API with the host info for the ACLs prior to when the disk is
>> > > attached (or the VM is started).  I'd have to look more at the
>> > > framework to know the details, in 4.1 I would do this in
>> > > getPhysicalDisk just prior to connecting up the LUN.
>> > >
>> > >
>> > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> > > <mi...@solidfire.com> wrote:
>> > > > OK, yeah, the ACL part will be interesting. That is a bit different
>> > from
>> > > how
>> > > > it works with XenServer and VMware.
>> > > >
>> > > > Just to give you an idea how it works in 4.2 with XenServer:
>> > > >
>> > > > * The user creates a CS volume (this is just recorded in the
>> > > cloud.volumes
>> > > > table).
>> > > >
>> > > > * The user attaches the volume as a disk to a VM for the first time
>> (if
>> > > the
>> > > > storage allocator picks the SolidFire plug-in, the storage framework
>> > > invokes
>> > > > a method on the plug-in that creates a volume on the SAN...info like
>> > the
>> > > IQN
>> > > > of the SAN volume is recorded in the DB).
>> > > >
>> > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
>> > > > determines based on a flag passed in that the storage in question is
>> > > > "CloudStack-managed" storage (as opposed to "traditional"
>> preallocated
>> > > > storage). This tells it to discover the iSCSI target. Once
>> discovered
>> > it
>> > > > determines if the iSCSI target already contains a storage repository
>> > (it
>> > > > would if this were a re-attach situation). If it does contain an SR
>> > > already,
>> > > > then there should already be one VDI, as well. If there is no SR,
>> an SR
>> > > is
>> > > > created and a single VDI is created within it (that takes up about
>> as
>> > > much
>> > > > space as was requested for the CloudStack volume).
>> > > >
>> > > > * The normal attach-volume logic continues (it depends on the
>> existence
>> > > of
>> > > > an SR and a VDI).
>> > > >
>> > > > The VMware case is essentially the same (mainly just substitute
>> > datastore
>> > > > for SR and VMDK for VDI).
>> > > >
>> > > > In both cases, all hosts in the cluster have discovered the iSCSI
>> > target,
>> > > > but only the host that is currently running the VM that is using the
>> > VDI
>> > > (or
>> > > > VMKD) is actually using the disk.
>> > > >
>> > > > Live Migration should be OK because the hypervisors communicate with
>> > > > whatever metadata they have on the SR (or datastore).
>> > > >
>> > > > I see what you're saying with KVM, though.
>> > > >
>> > > > In that case, the hosts are clustered only in CloudStack's eyes. CS
>> > > controls
>> > > > Live Migration. You don't really need a clustered filesystem on the
>> > LUN.
>> > > The
>> > > > LUN could be handed over raw to the VM using it.
>> > > >
>> > > > If there is a way for me to update the ACL list on the SAN to have
>> > only a
>> > > > single KVM host have access to the volume, that would be ideal.
>> > > >
>> > > > Also, I agree I'll need to use iscsiadm to discover and log in to
>> the
>> > > iSCSI
>> > > > target. I'll also need to take the resultant new device and pass it
>> > into
>> > > the
>> > > > VM.
>> > > >
>> > > > Does this sound reasonable? Please call me out on anything I seem
>> > > incorrect
>> > > > about. :)
>> > > >
>> > > > Thanks for all the thought on this, Marcus!
>> > > >
>> > > >
>> > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
>> shadowsor@gmail.com>
>> > > > wrote:
>> > > >>
>> > > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
>> > attach
>> > > >> the disk def to the vm. You may need to do your own StorageAdaptor
>> and
>> > > run
>> > > >> iscsiadm commands to accomplish that, depending on how the libvirt
>> > iscsi
>> > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
>> > works
>> > > on
>> > > >> xen at the moment, nor is it ideal.
>> > > >>
>> > > >> Your plugin will handle acls as far as which host can see which
>> luns
>> > as
>> > > >> well, I remember discussing that months ago, so that a disk won't
>> be
>> > > >> connected until the hypervisor has exclusive access, so it will be
>> > safe
>> > > and
>> > > >> fence the disk from rogue nodes that cloudstack loses connectivity
>> > > with. It
>> > > >> should revoke access to everything but the target host... Except
>> for
>> > > during
>> > > >> migration but we can discuss that later, there's a migration prep
>> > > process
>> > > >> where the new host can be added to the acls, and the old host can
>> be
>> > > removed
>> > > >> post migration.
>> > > >>
>> > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>> > mike.tutkowski@solidfire.com
>> > > >
>> > > >> wrote:
>> > > >>>
>> > > >>> Yeah, that would be ideal.
>> > > >>>
>> > > >>> So, I would still need to discover the iSCSI target, log in to it,
>> > then
>> > > >>> figure out what /dev/sdX was created as a result (and leave it as
>> is
>> > -
>> > > do
>> > > >>> not format it with any file system...clustered or not). I would
>> pass
>> > > that
>> > > >>> device into the VM.
>> > > >>>
>> > > >>> Kind of accurate?
>> > > >>>
>> > > >>>
>> > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
>> > shadowsor@gmail.com>
>> > > >>> wrote:
>> > > >>>>
>> > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
>> There
>> > > are
>> > > >>>> ones that work for block devices rather than files. You can piggy
>> > > back off
>> > > >>>> of the existing disk definitions and attach it to the vm as a
>> block
>> > > device.
>> > > >>>> The definition is an XML string per libvirt XML format. You may
>> want
>> > > to use
>> > > >>>> an alternate path to the disk rather than just /dev/sdx like I
>> > > mentioned,
>> > > >>>> there are by-id paths to the block devices, as well as other ones
>> > > that will
>> > > >>>> be consistent and easier for management, not sure how familiar
>> you
>> > > are with
>> > > >>>> device naming on Linux.
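
A block-device disk def like Marcus describes here would presumably boil down
to libvirt XML along these lines (the by-path address, target dev, and bus are
just made-up examples):

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2010-01.com.solidfire:volume-1-lun-0'/>
  <target dev='vdb' bus='virtio'/>
</disk>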
>> > > >>>>
>> > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com>
>> > > wrote:
>> > > >>>>>
>> > > >>>>> No, as that would rely on virtualized network/iscsi initiator
>> > inside
>> > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
>> > > hypervisor) as
>> > > >>>>> a disk to the VM, rather than attaching some image file that
>> > resides
>> > > on a
>> > > >>>>> filesystem, mounted on the host, living on a target.
>> > > >>>>>
>> > > >>>>> Actually, if you plan on the storage supporting live migration I
>> > > think
>> > > >>>>> this is the only way. You can't put a filesystem on it and
>> mount it
>> > > in two
>> > > >>>>> places to facilitate migration unless its a clustered
>> filesystem,
>> > in
>> > > which
>> > > >>>>> case you're back to shared mount point.
>> > > >>>>>
>> > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
>> with a
>> > > xen
>> > > >>>>> specific cluster management, a custom CLVM. They don't use a
>> > > filesystem
>> > > >>>>> either.
>> > > >>>>>
>> > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>> > > >>>>> <mi...@solidfire.com> wrote:
>> > > >>>>>>
>> > > >>>>>> When you say, "wire up the lun directly to the vm," do you mean
>> > > >>>>>> circumventing the hypervisor? I didn't think we could do that
>> in
>> > CS.
>> > > >>>>>> OpenStack, on the other hand, always circumvents the
>> hypervisor,
>> > as
>> > > far as I
>> > > >>>>>> know.
>> > > >>>>>>
>> > > >>>>>>
>> > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
>> > > shadowsor@gmail.com>
>> > > >>>>>> wrote:
>> > > >>>>>>>
>> > > >>>>>>> Better to wire up the lun directly to the vm unless there is a
>> > good
>> > > >>>>>>> reason not to.
>> > > >>>>>>>
>> > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
>> shadowsor@gmail.com>
>> > > >>>>>>> wrote:
>> > > >>>>>>>>
>> > > >>>>>>>> You could do that, but as mentioned I think its a mistake to
>> go
>> > to
>> > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
>> and
>> > > then putting
>> > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
>> > even
>> > > RAW disk
>> > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along the
>> > > way, and have
>> > > >>>>>>>> more overhead with the filesystem and its journaling, etc.
>> > > >>>>>>>>
>> > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>> > > >>>>>>>> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>
>> > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
>> CS.
>> > > >>>>>>>>>
>> > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is by
>> > > >>>>>>>>> selecting SharedMountPoint and specifying the location of
>> the
>> > > share.
>> > > >>>>>>>>>
>> > > >>>>>>>>> They can set up their share using Open iSCSI by discovering
>> > their
>> > > >>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere
>> on
>> > > their file
>> > > >>>>>>>>> system.
>> > > >>>>>>>>>
>> > > >>>>>>>>> Would it make sense for me to just do that discovery,
>> logging
>> > in,
>> > > >>>>>>>>> and mounting behind the scenes for them and letting the
>> current
>> > > code manage
>> > > >>>>>>>>> the rest as it currently does?
>> > > >>>>>>>>>
>> > > >>>>>>>>>
>> > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>> > > >>>>>>>>> <sh...@gmail.com> wrote:
>> > > >>>>>>>>>>
>> > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
>> catch
>> > up
>> > > >>>>>>>>>> on the work done in KVM, but this is basically just disk
>> > > snapshots + memory
>> > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
>> handled
>> > > by the SAN,
>> > > >>>>>>>>>> and then memory dumps can go to secondary storage or
>> something
>> > > else. This is
>> > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want to
>> see
>> > > how others are
>> > > >>>>>>>>>> planning theirs.
>> > > >>>>>>>>>>
>> > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
>> > shadowsor@gmail.com
>> > > >
>> > > >>>>>>>>>> wrote:
>> > > >>>>>>>>>>>
>> > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
>> style on
>> > > an
>> > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format.
>> > > Otherwise you're
>> > > >>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
>> > > QCOW2 disk image,
>> > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
>> > > >>>>>>>>>>>
>> > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
>> VM,
>> > > and
>> > > >>>>>>>>>>> handling snapshots on the San side via the storage plugin
>> is
>> > > best. My
>> > > >>>>>>>>>>> impression from the storage plugin refactor was that there
>> > was
>> > > a snapshot
>> > > >>>>>>>>>>> service that would allow the San to handle snapshots.
>> > > >>>>>>>>>>>
>> > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
>> > > shadowsor@gmail.com>
>> > > >>>>>>>>>>> wrote:
>> > > >>>>>>>>>>>>
>> > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
>> end,
>> > > if
>> > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
>> call
>> > > your plugin for
>> > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
>> far
>> > > as space, that
>> > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
>> carve
>> > > out luns from a
>> > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
>> > > independent of the
>> > > >>>>>>>>>>>> LUN size the host sees.
>> > > >>>>>>>>>>>>
>> > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>> > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> Hey Marcus,
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
>> won't
>> > > work
>> > > >>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
>> VDI
>> > > for
>> > > >>>>>>>>>>>>> the snapshot is placed on the same storage repository as
>> > the
>> > > volume is on.
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> Same idea for VMware, I believe.
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
>> XenServer
>> > and
>> > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>> snapshots
>> > > in 4.2) is I'd
>> > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the user
>> > > requested for the
>> > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
>> > > provisions volumes,
>> > > >>>>>>>>>>>>> so the space is not actually used unless it needs to
>> be).
>> > > The CloudStack
>> > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
>> until a
>> > > hypervisor
>> > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on
>> the
>> > > SAN volume.
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
>> creation of
>> > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if
>> > > there were support
>> > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
>> iSCSI
>> > > target), then I
>> > > >>>>>>>>>>>>> don't see how using this model will work.
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
>> > works
>> > > >>>>>>>>>>>>> with DIR?
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> What do you think?
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> Thanks
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>> > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
>> > today.
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
>> well
>> > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>> > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> > > >>>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
>> > just
>> > > >>>>>>>>>>>>>>> acts like a
>> > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
>> > > end-user
>> > > >>>>>>>>>>>>>>> is
>> > > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
>> hosts
>> > > can
>> > > >>>>>>>>>>>>>>> access,
>> > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
>> > > storage.
>> > > >>>>>>>>>>>>>>> It could
>> > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
>> > > >>>>>>>>>>>>>>> cloudstack just
>> > > >>>>>>>>>>>>>>> knows that the provided directory path has VM images.
>> > > >>>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>> > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
>> same
>> > > >>>>>>>>>>>>>>> > time.
>> > > >>>>>>>>>>>>>>> > Multiples, in fact.
>> > > >>>>>>>>>>>>>>> >
>> > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>> > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
>> > > >>>>>>>>>>>>>>> >> -----------------------------------------
>> > > >>>>>>>>>>>>>>> >> default              active     yes
>> > > >>>>>>>>>>>>>>> >> iSCSI                active     no
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
>> based
>> > on
>> > > >>>>>>>>>>>>>>> >>> an iSCSI target.
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
>> LUN,
>> > > so
>> > > >>>>>>>>>>>>>>> >>> there would only
>> > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>> > (libvirt)
>> > > >>>>>>>>>>>>>>> >>> storage pool.
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>> > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
>> > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt
>> > does
>> > > >>>>>>>>>>>>>>> >>> not support
>> > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
>> > > libvirt
>> > > >>>>>>>>>>>>>>> >>> supports
>> > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
>> since
>> > > >>>>>>>>>>>>>>> >>> each one of its
>> > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>> > > targets/LUNs).
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>> > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>> > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         String _poolType;
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         }
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         @Override
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         public String toString() {
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>             return _poolType;
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         }
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>     }
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
>> > being
>> > > >>>>>>>>>>>>>>> >>>> used, but I'm
>> > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
>> someone
>> > > >>>>>>>>>>>>>>> >>>> selects the
>> > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI,
>> is
>> > > that
>> > > >>>>>>>>>>>>>>> >>>> the "netfs" option
>> > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> Thanks!
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
>> > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>> > > >>>>>>>>>>>>>>> >>>> wrote:
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>>
>> > http://libvirt.org/storage.html#StorageBackendISCSI
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
>> server,
>> > > and
>> > > >>>>>>>>>>>>>>> >>>>> cannot be
>> > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe
>> > your
>> > > >>>>>>>>>>>>>>> >>>>> plugin will take
>> > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging
>> in
>> > and
>> > > >>>>>>>>>>>>>>> >>>>> hooking it up to
>> > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in
>> the
>> > > Xen
>> > > >>>>>>>>>>>>>>> >>>>> stuff).
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>> provides a
>> > > 1:1
>> > > >>>>>>>>>>>>>>> >>>>> mapping, or if
>> > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device
>> as a
>> > > >>>>>>>>>>>>>>> >>>>> pool. You may need
>> > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
>> about
>> > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
>> > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your
>> own
>> > > >>>>>>>>>>>>>>> >>>>> storage adaptor
>> > > >>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.
>> >  We
>> > > >>>>>>>>>>>>>>> >>>>> can cross that
>> > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
>> > > >>>>>>>>>>>>>>> >>>>> bindings doc.
>> > > >>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/ Normally,
>> > > >>>>>>>>>>>>>>> >>>>> you'll see a
>> > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
>> that
>> > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
>> > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how
>> > that
>> > > >>>>>>>>>>>>>>> >>>>> is done for
>> > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java
>> > code
>> > > >>>>>>>>>>>>>>> >>>>> to see if you
>> > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
>> > storage
>> > > >>>>>>>>>>>>>>> >>>>> pools before you
>> > > >>>>>>>>>>>>>>> >>>>> get started.
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
>> more,
>> > but
>> > > >>>>>>>>>>>>>>> >>>>> > you figure it
>> > > >>>>>>>>>>>>>>> >>>>> > supports
>> > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>> targets,
>> > > >>>>>>>>>>>>>>> >>>>> > right?
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>> Tutkowski
>> > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
>> > classes
>> > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
>> > > >>>>>>>>>>>>>>> >>>>> >> last
>> > > >>>>>>>>>>>>>>> >>>>> >> week or so.
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
>> Sorensen
>> > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>> > > >>>>>>>>>>>>>>> >>>>> >> wrote:
>> > > >>>>>>>>>>>>>>> >>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
>> iscsi
>> > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>> > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages
>> > for
>> > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>> > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
>> > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator
>> > > login.
>> > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
>> > > >>>>>>>>>>>>>>> >>>>> >>> sent
>> > > >>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java
>> and
>> > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>> > > >>>>>>>>>>>>>>> >>>>> >>> storage type
>> > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>> > > >>>>>>>>>>>>>>> >>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>> > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>> > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>> release I
>> > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>> > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
>> > > framework
>> > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>> > > >>>>>>>>>>>>>>> >>>>> >>>> times
>> > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
>> delete
>> > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>> > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>> > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
>> > > mapping
>> > > >>>>>>>>>>>>>>> >>>>> >>>> between a
>> > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>> > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the
>> > > admin
>> > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
>> > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
>> would
>> > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
>> > > >>>>>>>>>>>>>>> >>>>> >>>> root and
>> > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
>> needed
>> > > to
>> > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>> > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>> > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>> > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
>> KVM.
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
>> > work
>> > > on
>> > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>> > > >>>>>>>>>>>>>>> >>>>> >>>> still
>> > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
>> will
>> > > need
>> > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
>> > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
>> > expect
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>> > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
>> this
>> > to
>> > > >>>>>>>>>>>>>>> >>>>> >>>> work?
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> --
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >> --
>> > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> > --
>> > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> --
>> > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> --
>> > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >> --
>> > > >>>>>>>>>>>>>>> >> Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >> o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>> --
>> > > >>>>>>>>>>>>>> Mike Tutkowski
>> > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>> o: 303.746.7302
>> > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> --
>> > > >>>>>>>>>>>>> Mike Tutkowski
>> > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>> o: 303.746.7302
>> > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>
>> > > >>>>>>>>>
>> > > >>>>>>>>>
>> > > >>>>>>>>>
>> > > >>>>>>>>> --
>> > > >>>>>>>>> Mike Tutkowski
>> > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>> o: 303.746.7302
>> > > >>>>>>>>> Advancing the way the world uses the cloud™
>> > > >>>>>>
>> > > >>>>>>
>> > > >>>>>>
>> > > >>>>>>
>> > > >>>>>> --
>> > > >>>>>> Mike Tutkowski
>> > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>> o: 303.746.7302
>> > > >>>>>> Advancing the way the world uses the cloud™
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>> --
>> > > >>> Mike Tutkowski
>> > > >>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>> e: mike.tutkowski@solidfire.com
>> > > >>> o: 303.746.7302
>> > > >>> Advancing the way the world uses the cloud™
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > Mike Tutkowski
>> > > > Senior CloudStack Developer, SolidFire Inc.
>> > > > e: mike.tutkowski@solidfire.com
>> > > > o: 303.746.7302
>> > > > Advancing the way the world uses the cloud™
>> > >
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>> >
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Plus, when you log in to a LUN, you need the CHAP info, and that info is
required per LUN (as opposed to once for the whole SAN).

This is how my createStoragePool currently looks, so I think we're on the
same page.


public KVMStoragePool createStoragePool(String name, String host, int port,
        String path, String userInfo, StoragePoolType type) {
    iScsiAdmStoragePool storagePool = new iScsiAdmStoragePool(name, host, port, this);

    _mapUuidToAdaptor.put(name, storagePool);

    return storagePool;
}
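
Just to make this concrete, here is a rough sketch of the iscsiadm sequence I
have in mind for when a LUN actually needs to be attached (say, from
getPhysicalDisk). This is not real code from my branch - the class name, the
ProcessBuilder wrapper, and the error handling are placeholders - it just shows
the discovery, per-LUN CHAP setup, and login steps we've been talking about:

import java.io.IOException;
import java.util.Arrays;

// Illustrative only: log a KVM host in to a single-LUN iSCSI target with iscsiadm.
// Assumes the Open-iSCSI initiator utilities are installed on the host.
public class IscsiAdmSketch {

    public static void connect(String host, int port, String iqn,
            String chapUser, String chapPassword) throws IOException, InterruptedException {
        String portal = host + ":" + port;

        // Discover the target on the portal.
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);

        // CHAP credentials are per target (and each of our targets holds exactly one LUN).
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--op", "update",
                "-n", "node.session.auth.authmethod", "-v", "CHAP");
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--op", "update",
                "-n", "node.session.auth.username", "-v", chapUser);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--op", "update",
                "-n", "node.session.auth.password", "-v", chapPassword);

        // Log in; after this the LUN shows up on the host as a block device.
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
    }

    private static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();

        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + Arrays.toString(cmd));
        }
    }
}

The real adaptor would then resolve the resulting block device (e.g. via its
/dev/disk/by-path or by-id entry) and hand that path to the libvirt disk
definition, rather than stopping at the login.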


On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> What do you do with Xen? I imagine the user enters the SAN details when
> registering the pool? And the pool details are basically just instructions on
> how to log into a target, correct?
>
> You can choose to log in a KVM host to the target during createStoragePool
> and save the pool in a map, or just save the pool info in a map for future
> reference by uuid, for when you do need to log in. The createStoragePool
> then just becomes a way to save the pool info to the agent. Personally, I'd
> log in on the pool create and look/scan for specific luns when they're
> needed, but I haven't thought it through thoroughly. I just say that mainly
> because login only happens once, the first time the pool is used, and every
> other storage command is about discovering new luns or maybe
> deleting/disconnecting luns no longer needed. On the other hand, you could
> do all of the above: log in on pool create, then also check if you're
> logged in on other commands and log in if you've lost connection.
>
> With Xen, what does your registered pool   show in the UI for avail/used
> capacity, and how does it get that info? I assume there is some sort of
> disk pool that the luns are carved from, and that your plugin is called to
> talk to the SAN and expose to the user how much of that pool has been
> allocated. Knowing how you already solve these problems with Xen will help
> figure out what to do with KVM.
>
> If this is the case, I think the plugin can continue to handle it rather
> than getting details from the agent. I'm not sure if that means nulls are
> OK for these on the agent side or what, I need to look at the storage
> plugin arch more closely.
> On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > Hey Marcus,
> >
> > I'm reviewing your e-mails as I implement the necessary methods in new
> > classes.
> >
> > "So, referencing StorageAdaptor.java, createStoragePool accepts all of
> > the pool data (host, port, name, path) which would be used to log the
> > host into the initiator."
> >
> > Can you tell me, in my case, since a storage pool (primary storage) is
> > actually the SAN, I wouldn't really be logging into anything at this
> point,
> > correct?
> >
> > Also, what kind of capacity, available, and used bytes make sense to
> report
> > for KVMStoragePool (since KVMStoragePool represents the SAN in my case
> and
> > not an individual LUN)?
> >
> > Thanks!
> >
> >
> > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> >
> > > Ok, KVM will be close to that, of course, because only the hypervisor
> > > classes differ, the rest is all mgmt server. Creating a volume is just
> > > a db entry until it's deployed for the first time. AttachVolumeCommand
> > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> > > StorageAdaptor) to log in the host to the target and then you have a
> > > block device.  Maybe libvirt will do that for you, but my quick read
> > > made it sound like the iscsi libvirt pool type is actually a pool, not
> > > a lun or volume, so you'll need to figure out if that works or if
> > > you'll have to use iscsiadm commands.
> > >
> > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> > > doesn't really manage your pool the way you want), you're going to
> > > have to create a version of KVMStoragePool class and a StorageAdaptor
> > > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> > > implementing all of the methods, then in KVMStorageManager.java
> > > there's a "_storageMapper" map. This is used to select the correct
> > > adaptor, you can see in this file that every call first pulls the
> > > correct adaptor out of this map via getStorageAdaptor. So you can see
> > > a comment in this file that says "add other storage adaptors here",
> > > where it puts to this map, this is where you'd register your adaptor.
> > >
> > > So, referencing StorageAdaptor.java, createStoragePool accepts all of
> > > the pool data (host, port, name, path) which would be used to log the
> > > host into the initiator. I *believe* the method getPhysicalDisk will
> > > need to do the work of attaching the lun.  AttachVolumeCommand calls
> > > this and then creates the XML diskdef and attaches it to the VM. Now,
> > > one thing you need to know is that createStoragePool is called often,
> > > sometimes just to make sure the pool is there. You may want to create
> > > a map in your adaptor class and keep track of pools that have been
> > > created, LibvirtStorageAdaptor doesn't have to do this because it asks
> > > libvirt about which storage pools exist. There are also calls to
> > > refresh the pool stats, and all of the other calls can be seen in the
> > > StorageAdaptor as well. There's a createPhysical disk, clone, etc, but
> > > it's probably a hold-over from 4.1, as I have the vague idea that
> > > volumes are created on the mgmt server via the plugin now, so whatever
> > > doesn't apply can just be stubbed out (or optionally
> > > extended/reimplemented here, if you don't mind the hosts talking to
> > > the san api).
> > >
> > > There is a difference between attaching new volumes and launching a VM
> > > with existing volumes.  In the latter case, the VM definition that was
> > > passed to the KVM agent includes the disks, (StartCommand).
> > >
> > > I'd be interested in how your pool is defined for Xen, I imagine it
> > > would need to be kept the same. Is it just a definition to the SAN
> > > (ip address or some such, port number) and perhaps a volume pool name?
> > >
> > > > If there is a way for me to update the ACL list on the SAN to have
> > only a
> > > > single KVM host have access to the volume, that would be ideal.
> > >
> > > That depends on your SAN API.  I was under the impression that the
> > > storage plugin framework allowed for acls, or for you to do whatever
> > > you want for create/attach/delete/snapshot, etc. You'd just call your
> > > SAN API with the host info for the ACLs prior to when the disk is
> > > attached (or the VM is started).  I'd have to look more at the
> > > framework to know the details, in 4.1 I would do this in
> > > getPhysicalDisk just prior to connecting up the LUN.
> > >
> > >
> > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > > <mi...@solidfire.com> wrote:
> > > > OK, yeah, the ACL part will be interesting. That is a bit different
> > from
> > > how
> > > > it works with XenServer and VMware.
> > > >
> > > > Just to give you an idea how it works in 4.2 with XenServer:
> > > >
> > > > * The user creates a CS volume (this is just recorded in the
> > > cloud.volumes
> > > > table).
> > > >
> > > > * The user attaches the volume as a disk to a VM for the first time
> (if
> > > the
> > > > storage allocator picks the SolidFire plug-in, the storage framework
> > > invokes
> > > > a method on the plug-in that creates a volume on the SAN...info like
> > the
> > > IQN
> > > > of the SAN volume is recorded in the DB).
> > > >
> > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
> > > > determines based on a flag passed in that the storage in question is
> > > > "CloudStack-managed" storage (as opposed to "traditional"
> preallocated
> > > > storage). This tells it to discover the iSCSI target. Once discovered
> > it
> > > > determines if the iSCSI target already contains a storage repository
> > (it
> > > > would if this were a re-attach situation). If it does contain an SR
> > > already,
> > > > then there should already be one VDI, as well. If there is no SR, an
> SR
> > > is
> > > > created and a single VDI is created within it (that takes up about as
> > > much
> > > > space as was requested for the CloudStack volume).
> > > >
> > > > * The normal attach-volume logic continues (it depends on the
> existence
> > > of
> > > > an SR and a VDI).
> > > >
> > > > The VMware case is essentially the same (mainly just substitute
> > datastore
> > > > for SR and VMDK for VDI).
> > > >
> > > > In both cases, all hosts in the cluster have discovered the iSCSI
> > target,
> > > > but only the host that is currently running the VM that is using the
> > VDI
> > > (or
> > > > VMKD) is actually using the disk.
> > > >
> > > > Live Migration should be OK because the hypervisors communicate with
> > > > whatever metadata they have on the SR (or datastore).
> > > >
> > > > I see what you're saying with KVM, though.
> > > >
> > > > In that case, the hosts are clustered only in CloudStack's eyes. CS
> > > controls
> > > > Live Migration. You don't really need a clustered filesystem on the
> > LUN.
> > > The
> > > > LUN could be handed over raw to the VM using it.
> > > >
> > > > If there is a way for me to update the ACL list on the SAN to have
> > only a
> > > > single KVM host have access to the volume, that would be ideal.
> > > >
> > > > Also, I agree I'll need to use iscsiadm to discover and log in to the
> > > iSCSI
> > > > target. I'll also need to take the resultant new device and pass it
> > into
> > > the
> > > > VM.
> > > >
> > > > Does this sound reasonable? Please call me out on anything I seem
> > > incorrect
> > > > about. :)
> > > >
> > > > Thanks for all the thought on this, Marcus!
> > > >
> > > >
> > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> shadowsor@gmail.com>
> > > > wrote:
> > > >>
> > > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
> > attach
> > > >> the disk def to the vm. You may need to do your own StorageAdaptor
> and
> > > run
> > > >> iscsiadm commands to accomplish that, depending on how the libvirt
> > iscsi
> > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
> > works
> > > on
> > > >> xen at the moment, nor is it ideal.
> > > >>
> > > >> Your plugin will handle acls as far as which host can see which luns
> > as
> > > >> well, I remember discussing that months ago, so that a disk won't be
> > > >> connected until the hypervisor has exclusive access, so it will be
> > safe
> > > and
> > > >> fence the disk from rogue nodes that cloudstack loses connectivity
> > > with. It
> > > >> should revoke access to everything but the target host... Except for
> > > during
> > > >> migration but we can discuss that later, there's a migration prep
> > > process
> > > >> where the new host can be added to the acls, and the old host can be
> > > removed
> > > >> post migration.
> > > >>
> > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > mike.tutkowski@solidfire.com
> > > >
> > > >> wrote:
> > > >>>
> > > >>> Yeah, that would be ideal.
> > > >>>
> > > >>> So, I would still need to discover the iSCSI target, log in to it,
> > then
> > > >>> figure out what /dev/sdX was created as a result (and leave it as
> is
> > -
> > > do
> > > >>> not format it with any file system...clustered or not). I would
> pass
> > > that
> > > >>> device into the VM.
> > > >>>
> > > >>> Kind of accurate?
> > > >>>
> > > >>>
> > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > shadowsor@gmail.com>
> > > >>> wrote:
> > > >>>>
> > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
> There
> > > are
> > > >>>> ones that work for block devices rather than files. You can piggy
> > > back off
> > > >>>> of the existing disk definitions and attach it to the vm as a
> block
> > > device.
> > > >>>> The definition is an XML string per libvirt XML format. You may
> want
> > > to use
> > > >>>> an alternate path to the disk rather than just /dev/sdx like I
> > > mentioned,
> > > >>>> there are by-id paths to the block devices, as well as other ones
> > > that will
> > > >>>> be consistent and easier for management, not sure how familiar you
> > > are with
> > > >>>> device naming on Linux.
> > > >>>>
> > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com>
> > > wrote:
> > > >>>>>
> > > >>>>> No, as that would rely on virtualized network/iscsi initiator
> > inside
> > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
> > > hypervisor) as
> > > >>>>> a disk to the VM, rather than attaching some image file that
> > resides
> > > on a
> > > >>>>> filesystem, mounted on the host, living on a target.
> > > >>>>>
> > > >>>>> Actually, if you plan on the storage supporting live migration I
> > > think
> > > >>>>> this is the only way. You can't put a filesystem on it and mount
> it
> > > in two
> > > >>>>> places to facilitate migration unless its a clustered filesystem,
> > in
> > > which
> > > >>>>> case you're back to shared mount point.
> > > >>>>>
> > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
> with a
> > > xen
> > > >>>>> specific cluster management, a custom CLVM. They don't use a
> > > filesystem
> > > >>>>> either.
> > > >>>>>
> > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > > >>>>> <mi...@solidfire.com> wrote:
> > > >>>>>>
> > > >>>>>> When you say, "wire up the lun directly to the vm," do you mean
> > > >>>>>> circumventing the hypervisor? I didn't think we could do that in
> > CS.
> > > >>>>>> OpenStack, on the other hand, always circumvents the hypervisor,
> > as
> > > far as I
> > > >>>>>> know.
> > > >>>>>>
> > > >>>>>>
> > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > > shadowsor@gmail.com>
> > > >>>>>> wrote:
> > > >>>>>>>
> > > >>>>>>> Better to wire up the lun directly to the vm unless there is a
> > good
> > > >>>>>>> reason not to.
> > > >>>>>>>
> > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> shadowsor@gmail.com>
> > > >>>>>>> wrote:
> > > >>>>>>>>
> > > >>>>>>>> You could do that, but as mentioned I think its a mistake to
> go
> > to
> > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
> and
> > > then putting
> > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
> > even
> > > RAW disk
> > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along the
> > > way, and have
> > > >>>>>>>> more overhead with the filesystem and its journaling, etc.
> > > >>>>>>>>
> > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > > >>>>>>>> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>
> > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
> CS.
> > > >>>>>>>>>
> > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is by
> > > >>>>>>>>> selecting SharedMountPoint and specifying the location of the
> > > share.
> > > >>>>>>>>>
> > > >>>>>>>>> They can set up their share using Open iSCSI by discovering
> > their
> > > >>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on
> > > their file
> > > >>>>>>>>> system.
> > > >>>>>>>>>
> > > >>>>>>>>> Would it make sense for me to just do that discovery, logging
> > in,
> > > >>>>>>>>> and mounting behind the scenes for them and letting the
> current
> > > code manage
> > > >>>>>>>>> the rest as it currently does?
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > > >>>>>>>>> <sh...@gmail.com> wrote:
> > > >>>>>>>>>>
> > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
> catch
> > up
> > > >>>>>>>>>> on the work done in KVM, but this is basically just disk
> > > snapshots + memory
> > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
> handled
> > > by the SAN,
> > > >>>>>>>>>> and then memory dumps can go to secondary storage or
> something
> > > else. This is
> > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want to
> see
> > > how others are
> > > >>>>>>>>>> planning theirs.
> > > >>>>>>>>>>
> > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > shadowsor@gmail.com
> > > >
> > > >>>>>>>>>> wrote:
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi style
> on
> > > an
> > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format.
> > > Otherwise you're
> > > >>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
> > > QCOW2 disk image,
> > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
> VM,
> > > and
> > > >>>>>>>>>>> handling snapshots on the San side via the storage plugin
> is
> > > best. My
> > > >>>>>>>>>>> impression from the storage plugin refactor was that there
> > was
> > > a snapshot
> > > >>>>>>>>>>> service that would allow the San to handle snapshots.
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > > shadowsor@gmail.com>
> > > >>>>>>>>>>> wrote:
> > > >>>>>>>>>>>>
> > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
> end,
> > > if
> > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call
> > > your plugin for
> > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
> far
> > > as space, that
> > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
> carve
> > > out luns from a
> > > >>>>>>>>>>>> pool, and the snapshot spave comes from the pool and is
> > > independent of the
> > > >>>>>>>>>>>> LUN size the host sees.
> > > >>>>>>>>>>>>
> > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> Hey Marcus,
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't
> > > work
> > > >>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
> VDI
> > > for
> > > >>>>>>>>>>>>> the snapshot is placed on the same storage repository as
> > the
> > > volume is on.
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> So, what would happen in my case (let's say for XenServer
> > and
> > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> snapshots
> > > in 4.2) is I'd
> > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the user
> > > requested for the
> > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
> > > provisions volumes,
> > > >>>>>>>>>>>>> so the space is not actually used unless it needs to be).
> > > The CloudStack
> > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
> until a
> > > hypervisor
> > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the
> > > SAN volume.
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no creation
> of
> > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if
> > > there were support
> > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per iSCSI
> > > target), then I
> > > >>>>>>>>>>>>> don't see how using this model will work.
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
> > works
> > > >>>>>>>>>>>>> with DIR?
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> What do you think?
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> Thanks
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
> > today.
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
> well
> > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > >>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
> > just
> > > >>>>>>>>>>>>>>> acts like a
> > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
> > > end-user
> > > >>>>>>>>>>>>>>> is
> > > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
> hosts
> > > can
> > > >>>>>>>>>>>>>>> access,
> > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
> > > storage.
> > > >>>>>>>>>>>>>>> It could
> > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
> > > >>>>>>>>>>>>>>> cloudstack just
> > > >>>>>>>>>>>>>>> knows that the provided directory path has VM images.
> > > >>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
> same
> > > >>>>>>>>>>>>>>> > time.
> > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > > >>>>>>>>>>>>>>> >
> > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> > > >>>>>>>>>>>>>>> >> -----------------------------------------
> > > >>>>>>>>>>>>>>> >> default              active     yes
> > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
> based
> > on
> > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
> LUN,
> > > so
> > > >>>>>>>>>>>>>>> >>> there would only
> > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> > (libvirt)
> > > >>>>>>>>>>>>>>> >>> storage pool.
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
> > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt
> > does
> > > >>>>>>>>>>>>>>> >>> not support
> > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
> > > libvirt
> > > >>>>>>>>>>>>>>> >>> supports
> > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
> since
> > > >>>>>>>>>>>>>>> >>> each one of its
> > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> > > targets/LUNs).
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         }
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         @Override
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         }
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>     }
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
> > being
> > > >>>>>>>>>>>>>>> >>>> used, but I'm
> > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
> > > >>>>>>>>>>>>>>> >>>> selects the
> > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is
> > > that
> > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> Thanks!
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
> > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> > > >>>>>>>>>>>>>>> >>>> wrote:
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>>
> > http://libvirt.org/storage.html#StorageBackendISCSI
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
> server,
> > > and
> > > >>>>>>>>>>>>>>> >>>>> cannot be
> > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe
> > your
> > > >>>>>>>>>>>>>>> >>>>> plugin will take
> > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in
> > and
> > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in
> the
> > > Xen
> > > >>>>>>>>>>>>>>> >>>>> stuff).
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides
> a
> > > 1:1
> > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as
> a
> > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
> about
> > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your
> own
> > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> > > >>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.
> >  We
> > > >>>>>>>>>>>>>>> >>>>> can cross that
> > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
> > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> > > >>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/ Normally,
> > > >>>>>>>>>>>>>>> >>>>> you'll see a
> > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
> that
> > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how
> > that
> > > >>>>>>>>>>>>>>> >>>>> is done for
> > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java
> > code
> > > >>>>>>>>>>>>>>> >>>>> to see if you
> > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
> > storage
> > > >>>>>>>>>>>>>>> >>>>> pools before you
> > > >>>>>>>>>>>>>>> >>>>> get started.
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more,
> > but
> > > >>>>>>>>>>>>>>> >>>>> > you figure it
> > > >>>>>>>>>>>>>>> >>>>> > supports
> > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets,
> > > >>>>>>>>>>>>>>> >>>>> > right?
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
> > classes
> > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> > > >>>>>>>>>>>>>>> >>>>> >> last
> > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
> Sorensen
> > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> > > >>>>>>>>>>>>>>> >>>>> >>>
> > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
> > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages
> > for
> > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator
> > > login.
> > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> > > >>>>>>>>>>>>>>> >>>>> >>> sent
> > > >>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java
> and
> > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> > > >>>>>>>>>>>>>>> >>>>> >>>
> > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
> > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release
> I
> > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> > > framework
> > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> > > >>>>>>>>>>>>>>> >>>>> >>>> times
> > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
> delete
> > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
> > > mapping
> > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the
> > > admin
> > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
> would
> > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> > > >>>>>>>>>>>>>>> >>>>> >>>> root and
> > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
> needed
> > > to
> > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
> > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
> > work
> > > on
> > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> > > >>>>>>>>>>>>>>> >>>>> >>>> still
> > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
> will
> > > need
> > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
> > expect
> > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
> this
> > to
> > > >>>>>>>>>>>>>>> >>>>> >>>> work?
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> --
> > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >> --
> > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> > --
> > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> --
> > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
> > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> --
> > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
> > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >> --
> > > >>>>>>>>>>>>>>> >> Mike Tutkowski
> > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >> o: 303.746.7302
> > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> --
> > > >>>>>>>>>>>>>> Mike Tutkowski
> > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>> o: 303.746.7302
> > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> --
> > > >>>>>>>>>>>>> Mike Tutkowski
> > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>> o: 303.746.7302
> > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> --
> > > >>>>>>>>> Mike Tutkowski
> > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>> o: 303.746.7302
> > > >>>>>>>>> Advancing the way the world uses the cloud™
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>> --
> > > >>>>>> Mike Tutkowski
> > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>> e: mike.tutkowski@solidfire.com
> > > >>>>>> o: 303.746.7302
> > > >>>>>> Advancing the way the world uses the cloud™
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>> --
> > > >>> Mike Tutkowski
> > > >>> Senior CloudStack Developer, SolidFire Inc.
> > > >>> e: mike.tutkowski@solidfire.com
> > > >>> o: 303.746.7302
> > > >>> Advancing the way the world uses the cloud™
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Mike Tutkowski
> > > > Senior CloudStack Developer, SolidFire Inc.
> > > > e: mike.tutkowski@solidfire.com
> > > > o: 303.746.7302
> > > > Advancing the way the world uses the cloud™
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
I see where you're coming from.

John Burwell and I took a different approach for this kind of storage.

If you want to add capacity and/or IOPS to primary storage that's based on
my plug-in, you invoke the updateStoragePool API command and pass in the
new capacity and/or IOPS.
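
For example, via CloudMonkey it looks something like this (parameter names are
from memory, so double-check the updateStoragePool entry in the API reference):

update storagepool id=<primary storage id> capacitybytes=<new capacity in bytes>
capacityiops=<new number of IOPS>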

Your existing volumes - the ones in use by the hypervisor - are another
story, though.

I have a JIRA ticket to look into how to modify via CS the capacity and
IOPS of a volume that's based on the SolidFire SAN.

Changing IOPS is no problem. Changing the size of the volume is a bit more
involved.


On Tue, Sep 17, 2013 at 11:03 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> OK.  Most other storage types interrogate the storage for the
> capacity, whether directly or through the hypervisor. This makes it dynamic
> (user could add capacity and cloudstack notices), and provides accurate
> accounting for things like thin provisioning. I would be surprised if edison
> didn't allow for this in the new storage framework.
> On Sep 17, 2013 10:34 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > This should answer your question, I believe:
> >
> > * When you add primary storage that is based on the SolidFire plug-in,
> you
> > specify info like host, port, number of bytes from the SAN that CS can
> use,
> > number of IOPS from the SAN that CS can use, among other info.
> >
> > * When a volume is attached for the first time and the storage framework
> > asks my plug-in to create a volume (LUN) on the SAN, my plug-in
> increments
> > the used_bytes field of the cloud.storage_pool table. If the used_bytes
> > would go above the capacity_bytes, then the allocator would not have
> > selected my plug-in to back the storage. Additionally, if the required
> IOPS
> > would bring the SolidFire SAN above the number of IOPS that were
> dedicated
> > to CS, the allocator would not have selected my plug-in to back the
> > storage.
> >
> > * When a CS volume is deleted that uses my plug-in, the storage framework
> > asks my plug-in to delete the volume (LUN) on the SAN. My plug-in
> > decrements the used_bytes field of the cloud.storage_pool table.
> >
> > So, it just boils down to we don't require the accounting of space and
> IOPS
> > to take place on the hypervisor side.
> >
> >
> > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> >
> > > Ok, on most storage pools it shows how many GB free/used when listing
> > > the pool both via API and in the UI. I'm guessing those are empty then
> > > for the solid fire storage, but it seems like the user should have to
> > > define some sort of pool that the luns get carved out of, and you
> > > should be able to get the stats for that, right? Or is a solid fire
> > > appliance limited to one pool per appliance? This isn't about billing, but
> > > just so cloudstack itself knows whether or not there is space left on
> > > the storage device, so cloudstack can go on allocating from a
> > > different primary storage as this one fills up. There are also
> > > notifications and things. It seems like there should be a call you can
> > > handle for this, maybe Edison knows.
> > >
> > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <sh...@gmail.com>
> > > wrote:
> > > > You respond to more than attach and detach, right? Don't you create
> > luns
> > > as
> > > > well? Or are you just referring to the hypervisor stuff?
> > > >
> > > > On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
> > mike.tutkowski@solidfire.com>
> > > > wrote:
> > > >>
> > > >> Hi Marcus,
> > > >>
> > > >> I never need to respond to a CreateStoragePool call for either
> > XenServer
> > > >> or
> > > >> VMware.
> > > >>
> > > >> What happens is I respond only to the Attach- and Detach-volume
> > > commands.
> > > >>
> > > >> Let's say an attach comes in:
> > > >>
> > > >> In this case, I check to see if the storage is "managed." Talking
> > > >> XenServer
> > > >> here, if it is, I log in to the LUN that is the disk we want to
> > attach.
> > > >> After, if this is the first time attaching this disk, I create an SR
> > > and a
> > > >> VDI within the SR. If it is not the first time attaching this disk,
> > the
> > > >> LUN
> > > >> already has the SR and VDI on it.
> > > >>
> > > >> Once this is done, I let the normal "attach" logic run because this
> > > logic
> > > >> expected an SR and a VDI and now it has it.
> > > >>
> > > >> It's the same thing for VMware: Just substitute datastore for SR and
> > > VMDK
> > > >> for VDI.
> > > >>
> > > >> Does that make sense?
> > > >>
> > > >> Thanks!
> > > >>
> > > >>
> > > >> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
> > > >> <sh...@gmail.com>wrote:
> > > >>
> > > >> > What do you do with Xen? I imagine the user enter the SAN details
> > when
> > > >> > registering the pool? A the pool details are basically just
> > > instructions
> > > >> > on
> > > >> > how to log into a target, correct?
> > > >> >
> > > >> > You can choose to log in a KVM host to the target during
> > > >> > createStoragePool
> > > >> > and save the pool in a map, or just save the pool info in a map
> for
> > > >> > future
> > > >> > reference by uuid, for when you do need to log in. The
> > > createStoragePool
> > > >> > then just becomes a way to save the pool info to the agent.
> > > Personally,
> > > >> > I'd
> > > >> > log in on the pool create and look/scan for specific luns when
> > they're
> > > >> > needed, but I haven't thought it through thoroughly. I just say
> that
> > > >> > mainly
> > > >> > because login only happens once, the first time the pool is used,
> > and
> > > >> > every
> > > >> > other storage command is about discovering new luns or maybe
> > > >> > deleting/disconnecting luns no longer needed. On the other hand,
> you
> > > >> > could
> > > >> > do all of the above: log in on pool create, then also check if
> > you're
> > > >> > logged in on other commands and log in if you've lost connection.
> > > >> >
> > > >> > With Xen, what does your registered pool   show in the UI for
> > > avail/used
> > > >> > capacity, and how does it get that info? I assume there is some
> sort
> > > of
> > > >> > disk pool that the luns are carved from, and that your plugin is
> > > called
> > > >> > to
> > > >> > talk to the SAN and expose to the user how much of that pool has
> > been
> > > >> > allocated. Knowing how you already solves these problems with Xen
> > will
> > > >> > help
> > > >> > figure out what to do with KVM.
> > > >> >
> > > >> > If this is the case, I think the plugin can continue to handle it
> > > rather
> > > >> > than getting details from the agent. I'm not sure if that means
> > nulls
> > > >> > are
> > > >> > OK for these on the agent side or what, I need to look at the
> > storage
> > > >> > plugin arch more closely.
> > > >> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
> > > mike.tutkowski@solidfire.com>
> > > >> > wrote:
> > > >> >
> > > >> > > Hey Marcus,
> > > >> > >
> > > >> > > I'm reviewing your e-mails as I implement the necessary methods
> in
> > > new
> > > >> > > classes.
> > > >> > >
> > > >> > > "So, referencing StorageAdaptor.java, createStoragePool accepts
> > all
> > > of
> > > >> > > the pool data (host, port, name, path) which would be used to
> log
> > > the
> > > >> > > host into the initiator."
> > > >> > >
> > > >> > > Can you tell me, in my case, since a storage pool (primary
> > storage)
> > > is
> > > >> > > actually the SAN, I wouldn't really be logging into anything at
> > this
> > > >> > point,
> > > >> > > correct?
> > > >> > >
> > > >> > > Also, what kind of capacity, available, and used bytes make
> sense
> > to
> > > >> > report
> > > >> > > for KVMStoragePool (since KVMStoragePool represents the SAN in
> my
> > > case
> > > >> > and
> > > >> > > not an individual LUN)?
> > > >> > >
> > > >> > > Thanks!
> > > >> > >
> > > >> > >
> > > >> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> > > shadowsor@gmail.com
> > > >> > > >wrote:
> > > >> > >
> > > >> > > > Ok, KVM will be close to that, of course, because only the
> > > >> > > > hypervisor
> > > >> > > > classes differ, the rest is all mgmt server. Creating a volume
> > is
> > > >> > > > just
> > > >> > > > a db entry until it's deployed for the first time.
> > > >> > > > AttachVolumeCommand
> > > >> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > > >> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a
> > KVM
> > > >> > > > StorageAdaptor) to log in the host to the target and then you
> > > have a
> > > >> > > > block device.  Maybe libvirt will do that for you, but my
> quick
> > > read
> > > >> > > > made it sound like the iscsi libvirt pool type is actually a
> > pool,
> > > >> > > > not
> > > >> > > > a lun or volume, so you'll need to figure out if that works or
> > if
> > > >> > > > you'll have to use iscsiadm commands.
> > > >> > > >
> > > >> > > > If you're NOT going to use LibvirtStorageAdaptor (because
> > Libvirt
> > > >> > > > doesn't really manage your pool the way you want), you're
> going
> > to
> > > >> > > > have to create a version of KVMStoragePool class and a
> > > >> > > > StorageAdaptor
> > > >> > > > class (see LibvirtStoragePool.java and
> > > LibvirtStorageAdaptor.java),
> > > >> > > > implementing all of the methods, then in
> KVMStorageManager.java
> > > >> > > > there's a "_storageMapper" map. This is used to select the
> > correct
> > > >> > > > adaptor, you can see in this file that every call first pulls
> > the
> > > >> > > > correct adaptor out of this map via getStorageAdaptor. So you
> > can
> > > >> > > > see
> > > >> > > > a comment in this file that says "add other storage adaptors
> > > here",
> > > >> > > > where it puts to this map, this is where you'd register your
> > > >> > > > adaptor.
> > > >> > > >
> > > >> > > > So, referencing StorageAdaptor.java, createStoragePool accepts
> > all
> > > >> > > > of
> > > >> > > > the pool data (host, port, name, path) which would be used to
> > log
> > > >> > > > the
> > > >> > > > host into the initiator. I *believe* the method
> getPhysicalDisk
> > > will
> > > >> > > > need to do the work of attaching the lun.  AttachVolumeCommand
> > > calls
> > > >> > > > this and then creates the XML diskdef and attaches it to the
> VM.
> > > >> > > > Now,
> > > >> > > > one thing you need to know is that createStoragePool is called
> > > >> > > > often,
> > > >> > > > sometimes just to make sure the pool is there. You may want to
> > > >> > > > create
> > > >> > > > a map in your adaptor class and keep track of pools that have
> > been
> > > >> > > > created, LibvirtStorageAdaptor doesn't have to do this because
> > it
> > > >> > > > asks
> > > >> > > > libvirt about which storage pools exist. There are also calls
> to
> > > >> > > > refresh the pool stats, and all of the other calls can be seen
> > in
> > > >> > > > the
> > > >> > > > StorageAdaptor as well. There's a createPhysical disk, clone,
> > etc,
> > > >> > > > but
> > > >> > > > it's probably a hold-over from 4.1, as I have the vague idea
> > that
> > > >> > > > volumes are created on the mgmt server via the plugin now, so
> > > >> > > > whatever
> > > >> > > > doesn't apply can just be stubbed out (or optionally
> > > >> > > > extended/reimplemented here, if you don't mind the hosts
> talking
> > > to
> > > >> > > > the san api).
> > > >> > > >
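A minimal skeleton of such an adaptor, including the pool map idea, might
look like this; the class, method signatures, and device path are simplified
assumptions and not the real StorageAdaptor interface.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a plugin-specific adaptor that tracks its own pools, since
    // libvirt is not managing them. Signatures are simplified stand-ins.
    public class SolidFireStorageAdaptorSketch {

        static class PoolInfo {
            final String uuid, host, path;
            final int port;
            PoolInfo(String uuid, String host, int port, String path) {
                this.uuid = uuid; this.host = host; this.port = port; this.path = path;
            }
        }

        private final Map<String, PoolInfo> pools = new HashMap<String, PoolInfo>();

        // Called often by the mgmt server, sometimes just to verify the pool
        // exists, so keep it idempotent: remember pools instead of redoing work.
        public PoolInfo createStoragePool(String uuid, String host, int port, String path) {
            PoolInfo existing = pools.get(uuid);
            if (existing != null) {
                return existing;
            }
            PoolInfo pool = new PoolInfo(uuid, host, port, path);
            pools.put(uuid, pool);
            return pool;
        }

        // Where the per-volume work would go: log in to the target for this
        // volume (e.g. via iscsiadm) and return the resulting block device path.
        public String getPhysicalDisk(String poolUuid, String volumeIqn) {
            PoolInfo pool = pools.get(poolUuid);
            if (pool == null) {
                throw new IllegalStateException("unknown pool " + poolUuid);
            }
            // ... iscsiadm discovery/login against pool.host:pool.port here ...
            return "/dev/disk/by-path/ip-" + pool.host + ":" + pool.port
                    + "-iscsi-" + volumeIqn + "-lun-0";
        }
    }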
> > > >> > > > There is a difference between attaching new volumes and
> > launching
> > > a
> > > >> > > > VM
> > > >> > > > with existing volumes.  In the latter case, the VM definition
> > that
> > > >> > > > was
> > > >> > > > passed to the KVM agent includes the disks, (StartCommand).
> > > >> > > >
> > > >> > > > I'd be interested in how your pool is defined for Xen, I
> imagine
> > > it
> > > >> > > > would need to be kept the same. Is it just a definition to the
> > SAN
> > > >> > > > (ip address or some such, port number) and perhaps a volume
> pool
> > > >> > > > name?
> > > >> > > >
> > > >> > > > > If there is a way for me to update the ACL list on the SAN
> to
> > > have
> > > >> > > only a
> > > >> > > > > single KVM host have access to the volume, that would be
> > ideal.
> > > >> > > >
> > > >> > > > That depends on your SAN API.  I was under the impression that
> > the
> > > >> > > > storage plugin framework allowed for acls, or for you to do
> > > whatever
> > > >> > > > you want for create/attach/delete/snapshot, etc. You'd just
> call
> > > >> > > > your
> > > >> > > > SAN API with the host info for the ACLs prior to when the disk
> > is
> > > >> > > > attached (or the VM is started).  I'd have to look more at the
> > > >> > > > framework to know the details, in 4.1 I would do this in
> > > >> > > > getPhysicalDisk just prior to connecting up the LUN.
> > > >> > > >
> > > >> > > >
> > > >> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > > >> > > > <mi...@solidfire.com> wrote:
> > > >> > > > > OK, yeah, the ACL part will be interesting. That is a bit
> > > >> > > > > different
> > > >> > > from
> > > >> > > > how
> > > >> > > > > it works with XenServer and VMware.
> > > >> > > > >
> > > >> > > > > Just to give you an idea how it works in 4.2 with XenServer:
> > > >> > > > >
> > > >> > > > > * The user creates a CS volume (this is just recorded in the
> > > >> > > > cloud.volumes
> > > >> > > > > table).
> > > >> > > > >
> > > >> > > > > * The user attaches the volume as a disk to a VM for the
> first
> > > >> > > > > time
> > > >> > (if
> > > >> > > > the
> > > >> > > > > storage allocator picks the SolidFire plug-in, the storage
> > > >> > > > > framework
> > > >> > > > invokes
> > > >> > > > > a method on the plug-in that creates a volume on the
> > SAN...info
> > > >> > > > > like
> > > >> > > the
> > > >> > > > IQN
> > > >> > > > > of the SAN volume is recorded in the DB).
> > > >> > > > >
> > > >> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
> > executed.
> > > >> > > > > It
> > > >> > > > > determines based on a flag passed in that the storage in
> > > question
> > > >> > > > > is
> > > >> > > > > "CloudStack-managed" storage (as opposed to "traditional"
> > > >> > preallocated
> > > >> > > > > storage). This tells it to discover the iSCSI target. Once
> > > >> > > > > discovered
> > > >> > > it
> > > >> > > > > determines if the iSCSI target already contains a storage
> > > >> > > > > repository
> > > >> > > (it
> > > >> > > > > would if this were a re-attach situation). If it does
> contain
> > an
> > > >> > > > > SR
> > > >> > > > already,
> > > >> > > > > then there should already be one VDI, as well. If there is
> no
> > > SR,
> > > >> > > > > an
> > > >> > SR
> > > >> > > > is
> > > >> > > > > created and a single VDI is created within it (that takes up
> > > about
> > > >> > > > > as
> > > >> > > > much
> > > >> > > > > space as was requested for the CloudStack volume).
> > > >> > > > >
> > > >> > > > > * The normal attach-volume logic continues (it depends on
> the
> > > >> > existence
> > > >> > > > of
> > > >> > > > > an SR and a VDI).
> > > >> > > > >
> > > >> > > > > The VMware case is essentially the same (mainly just
> > substitute
> > > >> > > datastore
> > > >> > > > > for SR and VMDK for VDI).
> > > >> > > > >
> > > >> > > > > In both cases, all hosts in the cluster have discovered the
> > > iSCSI
> > > >> > > target,
> > > >> > > > > but only the host that is currently running the VM that is
> > using
> > > >> > > > > the
> > > >> > > VDI
> > > >> > > > (or
> > > >> > > > > VMKD) is actually using the disk.
> > > >> > > > >
> > > >> > > > > Live Migration should be OK because the hypervisors
> > communicate
> > > >> > > > > with
> > > >> > > > > whatever metadata they have on the SR (or datastore).
> > > >> > > > >
> > > >> > > > > I see what you're saying with KVM, though.
> > > >> > > > >
> > > >> > > > > In that case, the hosts are clustered only in CloudStack's
> > eyes.
> > > >> > > > > CS
> > > >> > > > controls
> > > >> > > > > Live Migration. You don't really need a clustered filesystem
> > on
> > > >> > > > > the
> > > >> > > LUN.
> > > >> > > > The
> > > >> > > > > LUN could be handed over raw to the VM using it.
> > > >> > > > >
> > > >> > > > > If there is a way for me to update the ACL list on the SAN
> to
> > > have
> > > >> > > only a
> > > >> > > > > single KVM host have access to the volume, that would be
> > ideal.
> > > >> > > > >
> > > >> > > > > Also, I agree I'll need to use iscsiadm to discover and log
> in
> > > to
> > > >> > > > > the
> > > >> > > > iSCSI
> > > >> > > > > target. I'll also need to take the resultant new device and
> > pass
> > > >> > > > > it
> > > >> > > into
> > > >> > > > the
> > > >> > > > > VM.
> > > >> > > > >
> > > >> > > > > Does this sound reasonable? Please call me out on anything I
> > > seem
> > > >> > > > incorrect
> > > >> > > > > about. :)
> > > >> > > > >
> > > >> > > > > Thanks for all the thought on this, Marcus!
> > > >> > > > >
> > > >> > > > >
> > > >> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> > > >> > shadowsor@gmail.com>
> > > >> > > > > wrote:
> > > >> > > > >>
> > > >> > > > >> Perfect. You'll have a domain def ( the VM), a disk def,
> and
> > > the
> > > >> > > attach
> > > >> > > > >> the disk def to the vm. You may need to do your own
> > > >> > > > >> StorageAdaptor
> > > >> > and
> > > >> > > > run
> > > >> > > > >> iscsiadm commands to accomplish that, depending on how the
> > > >> > > > >> libvirt
> > > >> > > iscsi
> > > >> > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't
> > how
> > > it
> > > >> > > works
> > > >> > > > on
> > > >> > > > >> Xen at the moment, nor is it ideal.
> > > >> > > > >>
> > > >> > > > >> Your plugin will handle acls as far as which host can see
> > which
> > > >> > > > >> luns
> > > >> > > as
> > > >> > > > >> well, I remember discussing that months ago, so that a disk
> > > won't
> > > >> > > > >> be
> > > >> > > > >> connected until the hypervisor has exclusive access, so it
> > will
> > > >> > > > >> be
> > > >> > > safe
> > > >> > > > and
> > > >> > > > >> fence the disk from rogue nodes that cloudstack loses
> > > >> > > > >> connectivity
> > > >> > > > with. It
> > > >> > > > >> should revoke access to everything but the target host...
> > > Except
> > > >> > > > >> for
> > > >> > > > during
> > > >> > > > >> migration but we can discuss that later, there's a
> migration
> > > prep
> > > >> > > > process
> > > >> > > > >> where the new host can be added to the acls, and the old
> host
> > > can
> > > >> > > > >> be
> > > >> > > > removed
> > > >> > > > >> post migration.
> > > >> > > > >>
> > > >> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > > >> > > mike.tutkowski@solidfire.com
> > > >> > > > >
> > > >> > > > >> wrote:
> > > >> > > > >>>
> > > >> > > > >>> Yeah, that would be ideal.
> > > >> > > > >>>
> > > >> > > > >>> So, I would still need to discover the iSCSI target, log
> in
> > to
> > > >> > > > >>> it,
> > > >> > > then
> > > >> > > > >>> figure out what /dev/sdX was created as a result (and
> leave
> > it
> > > >> > > > >>> as
> > > >> > is
> > > >> > > -
> > > >> > > > do
> > > >> > > > >>> not format it with any file system...clustered or not). I
> > > would
> > > >> > pass
> > > >> > > > that
> > > >> > > > >>> device into the VM.
> > > >> > > > >>>
> > > >> > > > >>> Kind of accurate?
> > > >> > > > >>>
> > > >> > > > >>>
> > > >> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > > >> > > shadowsor@gmail.com>
> > > >> > > > >>> wrote:
> > > >> > > > >>>>
> > > >> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
> > definitions.
> > > >> > There
> > > >> > > > are
> > > >> > > > >>>> ones that work for block devices rather than files. You
> can
> > > >> > > > >>>> piggy
> > > >> > > > back off
> > > >> > > > >>>> of the existing disk definitions and attach it to the vm
> > as a
> > > >> > block
> > > >> > > > device.
> > > >> > > > >>>> The definition is an XML string per libvirt XML format.
> You
> > > may
> > > >> > want
> > > >> > > > to use
> > > >> > > > >>>> an alternate path to the disk rather than just /dev/sdx
> > like
> > > I
> > > >> > > > mentioned,
> > > >> > > > >>>> there are by-id paths to the block devices, as well as
> > other
> > > >> > > > >>>> ones
> > > >> > > > that will
> > > >> > > > >>>> be consistent and easier for management, not sure how
> > > familiar
> > > >> > > > >>>> you
> > > >> > > > are with
> > > >> > > > >>>> device naming on Linux.
> > > >> > > > >>>>
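For reference, the XML that a raw block-device disk definition boils down to
is roughly the following; the device path and target name are example values,
and in practice LibvirtVMDef would build the string.

    public class BlockDiskDefSketch {
        public static void main(String[] args) {
            // A stable by-path (or by-id) name is preferable to /dev/sdx.
            String dev = "/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-"
                    + "iqn.2013-09.com.example:vol-1-lun-0";
            String diskXml =
                "<disk type='block' device='disk'>\n" +
                "  <driver name='qemu' type='raw' cache='none'/>\n" +
                "  <source dev='" + dev + "'/>\n" +
                "  <target dev='vdb' bus='virtio'/>\n" +
                "</disk>";
            // This string is what gets attached to the domain via libvirt.
            System.out.println(diskXml);
        }
    }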
> > > >> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> > > >> > > > >>>> <sh...@gmail.com>
> > > >> > > > wrote:
> > > >> > > > >>>>>
> > > >> > > > >>>>> No, as that would rely on virtualized network/iscsi
> > > initiator
> > > >> > > inside
> > > >> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your
> lun
> > > on
> > > >> > > > hypervisor) as
> > > >> > > > >>>>> a disk to the VM, rather than attaching some image file
> > that
> > > >> > > resides
> > > >> > > > on a
> > > >> > > > >>>>> filesystem, mounted on the host, living on a target.
> > > >> > > > >>>>>
> > > >> > > > >>>>> Actually, if you plan on the storage supporting live
> > > migration
> > > >> > > > >>>>> I
> > > >> > > > think
> > > >> > > > >>>>> this is the only way. You can't put a filesystem on it
> and
> > > >> > > > >>>>> mount
> > > >> > it
> > > >> > > > in two
> > > >> > > > >>>>> places to facilitate migration unless its a clustered
> > > >> > > > >>>>> filesystem,
> > > >> > > in
> > > >> > > > which
> > > >> > > > >>>>> case you're back to shared mount point.
> > > >> > > > >>>>>
> > > >> > > > >>>>> As far as I'm aware, the xenserver SR style is basically
> > LVM
> > > >> > with a
> > > >> > > > xen
> > > >> > > > >>>>> specific cluster management, a custom CLVM. They don't
> > use a
> > > >> > > > filesystem
> > > >> > > > >>>>> either.
> > > >> > > > >>>>>
> > > >> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > > >> > > > >>>>> <mi...@solidfire.com> wrote:
> > > >> > > > >>>>>>
> > > >> > > > >>>>>> When you say, "wire up the lun directly to the vm," do
> > you
> > > >> > > > >>>>>> mean
> > > >> > > > >>>>>> circumventing the hypervisor? I didn't think we could
> do
> > > that
> > > >> > > > >>>>>> in
> > > >> > > CS.
> > > >> > > > >>>>>> OpenStack, on the other hand, always circumvents the
> > > >> > > > >>>>>> hypervisor,
> > > >> > > as
> > > >> > > > far as I
> > > >> > > > >>>>>> know.
> > > >> > > > >>>>>>
> > > >> > > > >>>>>>
> > > >> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > > >> > > > shadowsor@gmail.com>
> > > >> > > > >>>>>> wrote:
> > > >> > > > >>>>>>>
> > > >> > > > >>>>>>> Better to wire up the lun directly to the vm unless
> > there
> > > is
> > > >> > > > >>>>>>> a
> > > >> > > good
> > > >> > > > >>>>>>> reason not to.
> > > >> > > > >>>>>>>
> > > >> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> > > >> > shadowsor@gmail.com>
> > > >> > > > >>>>>>> wrote:
> > > >> > > > >>>>>>>>
> > > >> > > > >>>>>>>> You could do that, but as mentioned I think its a
> > mistake
> > > >> > > > >>>>>>>> to
> > > >> > go
> > > >> > > to
> > > >> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes
> to
> > > luns
> > > >> > and
> > > >> > > > then putting
> > > >> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a
> > QCOW2
> > > >> > > > >>>>>>>> or
> > > >> > > even
> > > >> > > > RAW disk
> > > >> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops
> > along
> > > >> > > > >>>>>>>> the
> > > >> > > > way, and have
> > > >> > > > >>>>>>>> more overhead with the filesystem and its journaling,
> > > etc.
> > > >> > > > >>>>>>>>
> > > >> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > > >> > > > >>>>>>>> <mi...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in
> KVM
> > > with
> > > >> > CS.
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today
> > is
> > > by
> > > >> > > > >>>>>>>>> selecting SharedMountPoint and specifying the
> location
> > > of
> > > >> > > > >>>>>>>>> the
> > > >> > > > share.
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> They can set up their share using Open iSCSI by
> > > >> > > > >>>>>>>>> discovering
> > > >> > > their
> > > >> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
> > > somewhere
> > > >> > > > >>>>>>>>> on
> > > >> > > > their file
> > > >> > > > >>>>>>>>> system.
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> Would it make sense for me to just do that
> discovery,
> > > >> > > > >>>>>>>>> logging
> > > >> > > in,
> > > >> > > > >>>>>>>>> and mounting behind the scenes for them and letting
> > the
> > > >> > current
> > > >> > > > code manage
> > > >> > > > >>>>>>>>> the rest as it currently does?
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > > >> > > > >>>>>>>>> <sh...@gmail.com> wrote:
> > > >> > > > >>>>>>>>>>
> > > >> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I
> need
> > to
> > > >> > catch
> > > >> > > up
> > > >> > > > >>>>>>>>>> on the work done in KVM, but this is basically just
> > > disk
> > > >> > > > snapshots + memory
> > > >> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably
> > be
> > > >> > handled
> > > >> > > > by the SAN,
> > > >> > > > >>>>>>>>>> and then memory dumps can go to secondary storage
> or
> > > >> > something
> > > >> > > > else. This is
> > > >> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will
> > want
> > > to
> > > >> > see
> > > >> > > > how others are
> > > >> > > > >>>>>>>>>> planning theirs.
> > > >> > > > >>>>>>>>>>
> > > >> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > > >> > > shadowsor@gmail.com
> > > >> > > > >
> > > >> > > > >>>>>>>>>> wrote:
> > > >> > > > >>>>>>>>>>>
> > > >> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a
> vdi
> > > >> > > > >>>>>>>>>>> style
> > > >> > on
> > > >> > > > an
> > > >> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> > > >> > > > >>>>>>>>>>> format.
> > > >> > > > Otherwise you're
> > > >> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
> > > creating
> > > >> > > > >>>>>>>>>>> a
> > > >> > > > QCOW2 disk image,
> > > >> > > > >>>>>>>>>>> and that seems unnecessary and a performance
> killer.
> > > >> > > > >>>>>>>>>>>
> > > >> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk
> to
> > > the
> > > >> > VM,
> > > >> > > > and
> > > >> > > > >>>>>>>>>>> handling snapshots on the San side via the storage
> > > >> > > > >>>>>>>>>>> plugin
> > > >> > is
> > > >> > > > best. My
> > > >> > > > >>>>>>>>>>> impression from the storage plugin refactor was
> that
> > > >> > > > >>>>>>>>>>> there
> > > >> > > was
> > > >> > > > a snapshot
> > > >> > > > >>>>>>>>>>> service that would allow the San to handle
> > snapshots.
> > > >> > > > >>>>>>>>>>>
> > > >> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > > >> > > > shadowsor@gmail.com>
> > > >> > > > >>>>>>>>>>> wrote:
> > > >> > > > >>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the
> SAN
> > > back
> > > >> > end,
> > > >> > > > if
> > > >> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
> > could
> > > >> > > > >>>>>>>>>>>> call
> > > >> > > > your plugin for
> > > >> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor
> > agnostic.
> > > As
> > > >> > far
> > > >> > > > as space, that
> > > >> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With
> ours,
> > > we
> > > >> > carve
> > > >> > > > out luns from a
> > > >> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool
> > and
> > > is
> > > >> > > > independent of the
> > > >> > > > >>>>>>>>>>>> LUN size the host sees.
> > > >> > > > >>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > > >> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> Hey Marcus,
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
> > libvirt
> > > >> > > > >>>>>>>>>>>>> won't
> > > >> > > > work
> > > >> > > > >>>>>>>>>>>>> when you take into consideration hypervisor
> > > snapshots?
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor
> snapshot,
> > > the
> > > >> > VDI
> > > >> > > > for
> > > >> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
> > > repository
> > > >> > > > >>>>>>>>>>>>> as
> > > >> > > the
> > > >> > > > volume is on.
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> > > >> > > > >>>>>>>>>>>>> XenServer
> > > >> > > and
> > > >> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support
> hypervisor
> > > >> > snapshots
> > > >> > > > in 4.2) is I'd
> > > >> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what
> the
> > > user
> > > >> > > > requested for the
> > > >> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
> > > >> > > > >>>>>>>>>>>>> thinly
> > > >> > > > provisions volumes,
> > > >> > > > >>>>>>>>>>>>> so the space is not actually used unless it
> needs
> > to
> > > >> > > > >>>>>>>>>>>>> be).
> > > >> > > > The CloudStack
> > > >> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN
> > volume
> > > >> > until a
> > > >> > > > hypervisor
> > > >> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also
> reside
> > > on
> > > >> > > > >>>>>>>>>>>>> the
> > > >> > > > SAN volume.
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> > > >> > > > >>>>>>>>>>>>> creation
> > > >> > of
> > > >> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
> > > even
> > > >> > > > >>>>>>>>>>>>> if
> > > >> > > > there were support
> > > >> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN
> > per
> > > >> > > > >>>>>>>>>>>>> iSCSI
> > > >> > > > target), then I
> > > >> > > > >>>>>>>>>>>>> don't see how using this model will work.
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current
> way
> > > this
> > > >> > > works
> > > >> > > > >>>>>>>>>>>>> with DIR?
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> What do you think?
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> Thanks
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
> > > access
> > > >> > > today.
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I
> might
> > > as
> > > >> > well
> > > >> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus
> Sorensen
> > > >> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I
> > believe
> > > >> > > > >>>>>>>>>>>>>>> it
> > > >> > > just
> > > >> > > > >>>>>>>>>>>>>>> acts like a
> > > >> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to
> that.
> > > The
> > > >> > > > end-user
> > > >> > > > >>>>>>>>>>>>>>> is
> > > >> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that
> all
> > > KVM
> > > >> > hosts
> > > >> > > > can
> > > >> > > > >>>>>>>>>>>>>>> access,
> > > >> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is
> providing
> > > the
> > > >> > > > storage.
> > > >> > > > >>>>>>>>>>>>>>> It could
> > > >> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
> > > >> > > > >>>>>>>>>>>>>>> filesystem,
> > > >> > > > >>>>>>>>>>>>>>> cloudstack just
> > > >> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
> > > >> > > > >>>>>>>>>>>>>>> images.
> > > >> > > > >>>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
> Sorensen
> > > >> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all
> at
> > > the
> > > >> > same
> > > >> > > > >>>>>>>>>>>>>>> > time.
> > > >> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > > >> > > > >>>>>>>>>>>>>>> >
> > > >> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
> > Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage
> > pools:
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> > > >> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> > > >> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
> > > >> > > > >>>>>>>>>>>>>>> >> default              active     yes
> > > >> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
> > Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
> > pool
> > > >> > based
> > > >> > > on
> > > >> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only
> have
> > > one
> > > >> > LUN,
> > > >> > > > so
> > > >> > > > >>>>>>>>>>>>>>> >>> there would only
> > > >> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in
> the
> > > >> > > (libvirt)
> > > >> > > > >>>>>>>>>>>>>>> >>> storage pool.
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and
> destroys
> > > >> > > > >>>>>>>>>>>>>>> >>> iSCSI
> > > >> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> > > >> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
> > > >> > > > >>>>>>>>>>>>>>> >>> libvirt
> > > >> > > does
> > > >> > > > >>>>>>>>>>>>>>> >>> not support
> > > >> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to
> > see
> > > >> > > > >>>>>>>>>>>>>>> >>> if
> > > >> > > > libvirt
> > > >> > > > >>>>>>>>>>>>>>> >>> supports
> > > >> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
> > > mentioned,
> > > >> > since
> > > >> > > > >>>>>>>>>>>>>>> >>> each one of its
> > > >> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> > > >> > > > targets/LUNs).
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
> > > Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> > > >> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> > > >> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>         }
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>         @Override
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>         }
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>     }
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
> > > >> > > > >>>>>>>>>>>>>>> >>>> currently
> > > >> > > being
> > > >> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
> > > >> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting
> > at.
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2),
> when
> > > >> > > > >>>>>>>>>>>>>>> >>>> someone
> > > >> > > > >>>>>>>>>>>>>>> >>>> selects the
> > > >> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
> > > iSCSI,
> > > >> > > > >>>>>>>>>>>>>>> >>>> is
> > > >> > > > that
> > > >> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> > > >> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> Thanks!
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
> > > >> > > > >>>>>>>>>>>>>>> >>>> Sorensen
> > > >> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> > > >> > > > >>>>>>>>>>>>>>> >>>> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > http://libvirt.org/storage.html#StorageBackendISCSI
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the
> > iSCSI
> > > >> > server,
> > > >> > > > and
> > > >> > > > >>>>>>>>>>>>>>> >>>>> cannot be
> > > >> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
> > > >> > > > >>>>>>>>>>>>>>> >>>>> believe
> > > >> > > your
> > > >> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
> > > >> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
> > > logging
> > > >> > > > >>>>>>>>>>>>>>> >>>>> in
> > > >> > > and
> > > >> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> > > >> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
> > work
> > > >> > > > >>>>>>>>>>>>>>> >>>>> in
> > > >> > the
> > > >> > > > Xen
> > > >> > > > >>>>>>>>>>>>>>> >>>>> stuff).
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
> > > >> > > > >>>>>>>>>>>>>>> >>>>> provides
> > > >> > a
> > > >> > > > 1:1
> > > >> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> > > >> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
> > > device
> > > >> > > > >>>>>>>>>>>>>>> >>>>> as
> > > >> > a
> > > >> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> > > >> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
> > > more
> > > >> > about
> > > >> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to
> write
> > > your
> > > >> > own
> > > >> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> > > >> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
> > > >> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
> > > >> > >  We
> > > >> > > > >>>>>>>>>>>>>>> >>>>> can cross that
> > > >> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see
> > the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> java
> > > >> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally,
> > > >> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
> > > >> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls
> made
> > > to
> > > >> > that
> > > >> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> > > >> > > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to
> > see
> > > >> > > > >>>>>>>>>>>>>>> >>>>> how
> > > >> > > that
> > > >> > > > >>>>>>>>>>>>>>> >>>>> is done for
> > > >> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some
> > test
> > > >> > > > >>>>>>>>>>>>>>> >>>>> java
> > > >> > > code
> > > >> > > > >>>>>>>>>>>>>>> >>>>> to see if you
> > > >> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
> > > iscsi
> > > >> > > storage
> > > >> > > > >>>>>>>>>>>>>>> >>>>> pools before you
> > > >> > > > >>>>>>>>>>>>>>> >>>>> get started.
> > > >> > > > >>>>>>>>>>>>>>> >>>>>
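A few lines of throwaway test code along those lines might look like the
sketch below, using the org.libvirt bindings; the portal and IQN are example
values and the pool XML is just a plausible iscsi pool definition.

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;
    import org.libvirt.StorageVol;

    // Throwaway test: define and start a libvirt iscsi pool, then list the
    // volumes (LUNs) libvirt sees on that target.
    public class LibvirtIscsiPoolTest {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");
            String poolXml =
                "<pool type='iscsi'>" +
                "  <name>sf-test</name>" +
                "  <source>" +
                "    <host name='192.168.1.10' port='3260'/>" +
                "    <device path='iqn.2013-09.com.example:vol-1'/>" +
                "  </source>" +
                "  <target><path>/dev/disk/by-path</path></target>" +
                "</pool>";
            StoragePool pool = conn.storagePoolDefineXML(poolXml, 0);
            pool.create(0);   // logs the host into the target
            pool.refresh(0);
            for (String volName : pool.listVolumes()) {
                StorageVol vol = pool.storageVolLookupByName(volName);
                System.out.println(volName + " -> " + vol.getPath()); // block device path
            }
        }
    }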
> > > >> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
> > > >> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate
> > libvirt
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > more,
> > > >> > > but
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > supports
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > targets,
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > right?
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some
> of
> > > the
> > > >> > > classes
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> last
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM,
> Marcus
> > > >> > Sorensen
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need
> > the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
> > > >> > > for
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
> > > >> > > > login.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
> > > >> > and
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
> > > Tutkowski"
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
> > > >> > I
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
> > storage
> > > >> > > > framework
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create
> > and
> > > >> > delete
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can
> establish
> > a
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
> > > >> > > > mapping
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for
> > QoS.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
> > expected
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > >> > > > admin
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
> > volumes
> > > >> > would
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
> > friendly).
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme
> > work, I
> > > >> > needed
> > > >> > > > to
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so
> they
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
> > with
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how
> this
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
> > > >> > > work
> > > >> > > > on
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
> > how
> > > I
> > > >> > will
> > > >> > > > need
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
> > have
> > > to
> > > >> > > expect
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use
> it
> > > for
> > > >> > this
> > > >> > > to
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer,
> > SolidFire
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses
> the
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> --
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer,
> SolidFire
> > > Inc.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the
> > > cloud™
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > --
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire
> > > Inc.
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the
> > > cloud™
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>>
> > > >> > > > >>>>>>>>>>>>>>> >>>> --
> > > >> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire
> Inc.
> > > >> > > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the
> cloud™
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>>
> > > >> > > > >>>>>>>>>>>>>>> >>> --
> > > >> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire
> Inc.
> > > >> > > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the
> cloud™
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >>
> > > >> > > > >>>>>>>>>>>>>>> >> --
> > > >> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> > > >> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>> --
> > > >> > > > >>>>>>>>>>>>>> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > > >> > > > >>>>>>>>>>>>>> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>>
> > > >> > > > >>>>>>>>>>>>> --
> > > >> > > > >>>>>>>>>>>>> Mike Tutkowski
> > > >> > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > > >> > > > >>>>>>>>>>>>> o: 303.746.7302
> > > >> > > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>>
> > > >> > > > >>>>>>>>> --
> > > >> > > > >>>>>>>>> Mike Tutkowski
> > > >> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>>>>>>>> e: mike.tutkowski@solidfire.com
> > > >> > > > >>>>>>>>> o: 303.746.7302
> > > >> > > > >>>>>>>>> Advancing the way the world uses the cloud™
> > > >> > > > >>>>>>
> > > >> > > > >>>>>>
> > > >> > > > >>>>>>
> > > >> > > > >>>>>>
> > > >> > > > >>>>>> --
> > > >> > > > >>>>>> Mike Tutkowski
> > > >> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>>>>> e: mike.tutkowski@solidfire.com
> > > >> > > > >>>>>> o: 303.746.7302
> > > >> > > > >>>>>> Advancing the way the world uses the cloud™
> > > >> > > > >>>
> > > >> > > > >>>
> > > >> > > > >>>
> > > >> > > > >>>
> > > >> > > > >>> --
> > > >> > > > >>> Mike Tutkowski
> > > >> > > > >>> Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > >>> e: mike.tutkowski@solidfire.com
> > > >> > > > >>> o: 303.746.7302
> > > >> > > > >>> Advancing the way the world uses the cloud™
> > > >> > > > >
> > > >> > > > >
> > > >> > > > >
> > > >> > > > >
> > > >> > > > > --
> > > >> > > > > Mike Tutkowski
> > > >> > > > > Senior CloudStack Developer, SolidFire Inc.
> > > >> > > > > e: mike.tutkowski@solidfire.com
> > > >> > > > > o: 303.746.7302
> > > >> > > > > Advancing the way the world uses the cloud™
> > > >> > > >
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > --
> > > >> > > *Mike Tutkowski*
> > > >> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > >> > > e: mike.tutkowski@solidfire.com
> > > >> > > o: 303.746.7302
> > > >> > > Advancing the way the world uses the
> > > >> > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > >> > > *™*
> > > >> > >
> > > >> >
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> *Mike Tutkowski*
> > > >> *Senior CloudStack Developer, SolidFire Inc.*
> > > >> e: mike.tutkowski@solidfire.com
> > > >> o: 303.746.7302
> > > >> Advancing the way the world uses the
> > > >> cloud<http://solidfire.com/solution/overview/?video=play>
> > > >> *™*
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
OK.  Most other storage types interrogate the storage for the
capacity, whether directly or through the hypervisor. This makes it dynamic
(the user could add capacity and CloudStack notices), and provides accurate
accounting for things like thin provisioning. I would be surprised if Edison
didn't allow for this in the new storage framework.
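A hedged sketch of what a plugin-side stats hook could look like, if the
framework exposes one; the SAN client interface below is hypothetical and not
the actual SolidFire API.

    // If the storage framework lets the plugin report stats, capacity could be
    // read from the SAN rather than fixed at pool-creation time.
    public class PluginPoolStatsSketch {

        interface SanClient {          // placeholder for a vendor API client
            long totalBytes();
            long usedBytes();
        }

        static long[] getPoolStats(SanClient san) {
            long capacity = san.totalBytes(); // what the SAN says it has
            long used = san.usedBytes();      // accurate under thin provisioning
            return new long[] { capacity, used };
        }
    }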
On Sep 17, 2013 10:34 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> This should answer your question, I believe:
>
> * When you add primary storage that is based on the SolidFire plug-in, you
> specify info like host, port, number of bytes from the SAN that CS can use,
> number of IOPS from the SAN that CS can use, among other info.
>
> * When a volume is attached for the first time and the storage framework
> asks my plug-in to create a volume (LUN) on the SAN, my plug-in increments
> the used_bytes field of the cloud.storage_pool table. If the used_bytes
> would go above the capacity_bytes, then the allocator would not have
> selected my plug-in to back the storage. Additionally, if the required IOPS
> would bring the SolidFire SAN above the number of IOPS that were dedicated
> to CS, the allocator would not have selected my plug-in to back the
> storage.
>
> * When a CS volume is deleted that uses my plug-in, the storage framework
> asks my plug-in to delete the volume (LUN) on the SAN. My plug-in
> decrements the used_bytes field of the cloud.storage_pool table.
>
> So, it just boils down to we don't require the accounting of space and IOPS
> to take place on the hypervisor side.
>
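In other words, the check happens on the management server against the
storage_pool columns, something like this simplified sketch (the method
itself is illustrative):

    // Field names mirror the cloud.storage_pool columns mentioned above.
    static boolean hasEnoughSpace(long usedBytes, long capacityBytes, long requestedBytes) {
        return usedBytes + requestedBytes <= capacityBytes;
    }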
>
> On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
>
> > Ok, on most storage pools it shows how many GB free/used when listing
> > the pool both via API and in the UI. I'm guessing those are empty then
> > for the solid fire storage, but it seems like the user should have to
> > define some sort of pool that the luns get carved out of, and you
> > should be able to get the stats for that, right? Or is a solid fire
> > appliance only one pool per appliance? This isn't about billing, but
> > just so cloudstack itself knows whether or not there is space left on
> > the storage device, so cloudstack can go on allocating from a
> > different primary storage as this one fills up. There are also
> > notifications and things. It seems like there should be a call you can
> > handle for this, maybe Edison knows.
> >
> > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <sh...@gmail.com>
> > wrote:
> > > You respond to more than attach and detach, right? Don't you create
> luns
> > as
> > > well? Or are you just referring to the hypervisor stuff?
> > >
> > > On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com>
> > > wrote:
> > >>
> > >> Hi Marcus,
> > >>
> > >> I never need to respond to a CreateStoragePool call for either
> XenServer
> > >> or
> > >> VMware.
> > >>
> > >> What happens is I respond only to the Attach- and Detach-volume
> > commands.
> > >>
> > >> Let's say an attach comes in:
> > >>
> > >> In this case, I check to see if the storage is "managed." Talking
> > >> XenServer
> > >> here, if it is, I log in to the LUN that is the disk we want to
> attach.
> > >> After, if this is the first time attaching this disk, I create an SR
> > and a
> > >> VDI within the SR. If it is not the first time attaching this disk,
> the
> > >> LUN
> > >> already has the SR and VDI on it.
> > >>
> > >> Once this is done, I let the normal "attach" logic run because this
> > logic
> > >> expected an SR and a VDI and now it has it.
> > >>
> > >> It's the same thing for VMware: Just substitute datastore for SR and
> > VMDK
> > >> for VDI.
> > >>
> > >> Does that make sense?
> > >>
> > >> Thanks!
> > >>
> > >>
> > >> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
> > >> <sh...@gmail.com>wrote:
> > >>
> > >> > What do you do with Xen? I imagine the user enters the SAN details
> when
> > >> > registering the pool? And the pool details are basically just
> > instructions
> > >> > on
> > >> > how to log into a target, correct?
> > >> >
> > >> > You can choose to log in a KVM host to the target during
> > >> > createStoragePool
> > >> > and save the pool in a map, or just save the pool info in a map for
> > >> > future
> > >> > reference by uuid, for when you do need to log in. The
> > createStoragePool
> > >> > then just becomes a way to save the pool info to the agent.
> > Personally,
> > >> > I'd
> > >> > log in on the pool create and look/scan for specific luns when
> they're
> > >> > needed, but I haven't thought it through thoroughly. I just say that
> > >> > mainly
> > >> > because login only happens once, the first time the pool is used,
> and
> > >> > every
> > >> > other storage command is about discovering new luns or maybe
> > >> > deleting/disconnecting luns no longer needed. On the other hand, you
> > >> > could
> > >> > do all of the above: log in on pool create, then also check if
> you're
> > >> > logged in on other commands and log in if you've lost connection.
> > >> >
> > >> > With Xen, what does your registered pool show in the UI for
> > avail/used
> > >> > capacity, and how does it get that info? I assume there is some sort
> > of
> > >> > disk pool that the luns are carved from, and that your plugin is
> > called
> > >> > to
> > >> > talk to the SAN and expose to the user how much of that pool has
> been
> > >> > > allocated. Knowing how you already solve these problems with Xen
> will
> > >> > help
> > >> > figure out what to do with KVM.
> > >> >
> > >> > If this is the case, I think the plugin can continue to handle it
> > rather
> > >> > than getting details from the agent. I'm not sure if that means
> nulls
> > >> > are
> > >> > OK for these on the agent side or what, I need to look at the
> storage
> > >> > plugin arch more closely.
> > >> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
> > mike.tutkowski@solidfire.com>
> > >> > wrote:
> > >> >
> > >> > > Hey Marcus,
> > >> > >
> > >> > > I'm reviewing your e-mails as I implement the necessary methods in
> > new
> > >> > > classes.
> > >> > >
> > >> > > "So, referencing StorageAdaptor.java, createStoragePool accepts
> all
> > of
> > >> > > the pool data (host, port, name, path) which would be used to log
> > the
> > >> > > host into the initiator."
> > >> > >
> > >> > > Can you tell me, in my case, since a storage pool (primary
> storage)
> > is
> > >> > > actually the SAN, I wouldn't really be logging into anything at
> this
> > >> > point,
> > >> > > correct?
> > >> > >
> > >> > > Also, what kind of capacity, available, and used bytes make sense
> to
> > >> > report
> > >> > > for KVMStoragePool (since KVMStoragePool represents the SAN in my
> > case
> > >> > and
> > >> > > not an individual LUN)?
> > >> > >
> > >> > > Thanks!
> > >> > >
> > >> > >
> > >> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> > shadowsor@gmail.com
> > >> > > >wrote:
> > >> > >
> > >> > > > Ok, KVM will be close to that, of course, because only the
> > >> > > > hypervisor
> > >> > > > classes differ, the rest is all mgmt server. Creating a volume
> is
> > >> > > > just
> > >> > > > a db entry until it's deployed for the first time.
> > >> > > > AttachVolumeCommand
> > >> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > >> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a
> KVM
> > >> > > > StorageAdaptor) to log in the host to the target and then you
> > have a
> > >> > > > block device.  Maybe libvirt will do that for you, but my quick
> > read
> > >> > > > made it sound like the iscsi libvirt pool type is actually a
> pool,
> > >> > > > not
> > >> > > > a lun or volume, so you'll need to figure out if that works or
> if
> > >> > > > you'll have to use iscsiadm commands.
> > >> > > >
> > >> > > > If you're NOT going to use LibvirtStorageAdaptor (because
> Libvirt
> > >> > > > doesn't really manage your pool the way you want), you're going
> to
> > >> > > > have to create a version of KVMStoragePool class and a
> > >> > > > StorageAdaptor
> > >> > > > class (see LibvirtStoragePool.java and
> > LibvirtStorageAdaptor.java),
> > >> > > > implementing all of the methods, then in KVMStorageManager.java
> > >> > > > there's a "_storageMapper" map. This is used to select the
> correct
> > >> > > > adaptor, you can see in this file that every call first pulls
> the
> > >> > > > correct adaptor out of this map via getStorageAdaptor. So you
> can
> > >> > > > see
> > >> > > > a comment in this file that says "add other storage adaptors
> > here",
> > >> > > > where it puts to this map, this is where you'd register your
> > >> > > > adaptor.
> > >> > > >
> > >> > > > So, referencing StorageAdaptor.java, createStoragePool accepts
> all
> > >> > > > of
> > >> > > > the pool data (host, port, name, path) which would be used to
> log
> > >> > > > the
> > >> > > > host into the initiator. I *believe* the method getPhysicalDisk
> > will
> > >> > > > need to do the work of attaching the lun.  AttachVolumeCommand
> > calls
> > >> > > > this and then creates the XML diskdef and attaches it to the VM.
> > >> > > > Now,
> > >> > > > one thing you need to know is that createStoragePool is called
> > >> > > > often,
> > >> > > > sometimes just to make sure the pool is there. You may want to
> > >> > > > create
> > >> > > > a map in your adaptor class and keep track of pools that have
> been
> > >> > > > created, LibvirtStorageAdaptor doesn't have to do this because
> it
> > >> > > > asks
> > >> > > > libvirt about which storage pools exist. There are also calls to
> > >> > > > refresh the pool stats, and all of the other calls can be seen
> in
> > >> > > > the
> > >> > > > StorageAdaptor as well. There's a createPhysical disk, clone,
> etc,
> > >> > > > but
> > >> > > > it's probably a hold-over from 4.1, as I have the vague idea
> that
> > >> > > > volumes are created on the mgmt server via the plugin now, so
> > >> > > > whatever
> > >> > > > doesn't apply can just be stubbed out (or optionally
> > >> > > > extended/reimplemented here, if you don't mind the hosts talking
> > to
> > >> > > > the san api).
> > >> > > >
> > >> > > > There is a difference between attaching new volumes and
> launching
> > a
> > >> > > > VM
> > >> > > > with existing volumes.  In the latter case, the VM definition
> that
> > >> > > > was
> > >> > > > passed to the KVM agent includes the disks, (StartCommand).
> > >> > > >
> > >> > > > I'd be interested in how your pool is defined for Xen, I imagine
> > it
> > >> > > > would need to be kept the same. Is it just a definition to the
> SAN
> > >> > > > (ip address or some such, port number) and perhaps a volume pool
> > >> > > > name?
> > >> > > >
> > >> > > > > If there is a way for me to update the ACL list on the SAN to
> > have
> > >> > > only a
> > >> > > > > single KVM host have access to the volume, that would be
> ideal.
> > >> > > >
> > >> > > > That depends on your SAN API.  I was under the impression that
> the
> > >> > > > storage plugin framework allowed for acls, or for you to do
> > whatever
> > >> > > > you want for create/attach/delete/snapshot, etc. You'd just call
> > >> > > > your
> > >> > > > SAN API with the host info for the ACLs prior to when the disk
> is
> > >> > > > attached (or the VM is started).  I'd have to look more at the
> > >> > > > framework to know the details, in 4.1 I would do this in
> > >> > > > getPhysicalDisk just prior to connecting up the LUN.
> > >> > > >
> > >> > > >
> > >> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > >> > > > <mi...@solidfire.com> wrote:
> > >> > > > > OK, yeah, the ACL part will be interesting. That is a bit
> > >> > > > > different
> > >> > > from
> > >> > > > how
> > >> > > > > it works with XenServer and VMware.
> > >> > > > >
> > >> > > > > Just to give you an idea how it works in 4.2 with XenServer:
> > >> > > > >
> > >> > > > > * The user creates a CS volume (this is just recorded in the
> > >> > > > cloud.volumes
> > >> > > > > table).
> > >> > > > >
> > >> > > > > * The user attaches the volume as a disk to a VM for the first
> > >> > > > > time
> > >> > (if
> > >> > > > the
> > >> > > > > storage allocator picks the SolidFire plug-in, the storage
> > >> > > > > framework
> > >> > > > invokes
> > >> > > > > a method on the plug-in that creates a volume on the
> SAN...info
> > >> > > > > like
> > >> > > the
> > >> > > > IQN
> > >> > > > > of the SAN volume is recorded in the DB).
> > >> > > > >
> > >> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
> executed.
> > >> > > > > It
> > >> > > > > determines based on a flag passed in that the storage in
> > question
> > >> > > > > is
> > >> > > > > "CloudStack-managed" storage (as opposed to "traditional"
> > >> > preallocated
> > >> > > > > storage). This tells it to discover the iSCSI target. Once
> > >> > > > > discovered
> > >> > > it
> > >> > > > > determines if the iSCSI target already contains a storage
> > >> > > > > repository
> > >> > > (it
> > >> > > > > would if this were a re-attach situation). If it does contain
> an
> > >> > > > > SR
> > >> > > > already,
> > >> > > > > then there should already be one VDI, as well. If there is no
> > SR,
> > >> > > > > an
> > >> > SR
> > >> > > > is
> > >> > > > > created and a single VDI is created within it (that takes up
> > about
> > >> > > > > as
> > >> > > > much
> > >> > > > > space as was requested for the CloudStack volume).
> > >> > > > >
> > >> > > > > * The normal attach-volume logic continues (it depends on the
> > >> > existence
> > >> > > > of
> > >> > > > > an SR and a VDI).
> > >> > > > >
> > >> > > > > The VMware case is essentially the same (mainly just
> substitute
> > >> > > datastore
> > >> > > > > for SR and VMDK for VDI).
> > >> > > > >
> > >> > > > > In both cases, all hosts in the cluster have discovered the
> > iSCSI
> > >> > > target,
> > >> > > > > but only the host that is currently running the VM that is
> using
> > >> > > > > the
> > >> > > VDI
> > >> > > > (or
> > >> > > > > VMKD) is actually using the disk.
> > >> > > > >
> > >> > > > > Live Migration should be OK because the hypervisors
> communicate
> > >> > > > > with
> > >> > > > > whatever metadata they have on the SR (or datastore).
> > >> > > > >
> > >> > > > > I see what you're saying with KVM, though.
> > >> > > > >
> > >> > > > > In that case, the hosts are clustered only in CloudStack's
> eyes.
> > >> > > > > CS
> > >> > > > controls
> > >> > > > > Live Migration. You don't really need a clustered filesystem
> on
> > >> > > > > the
> > >> > > LUN.
> > >> > > > The
> > >> > > > > LUN could be handed over raw to the VM using it.
> > >> > > > >
> > >> > > > > If there is a way for me to update the ACL list on the SAN to
> > have
> > >> > > only a
> > >> > > > > single KVM host have access to the volume, that would be
> ideal.
> > >> > > > >
> > >> > > > > Also, I agree I'll need to use iscsiadm to discover and log in
> > to
> > >> > > > > the
> > >> > > > iSCSI
> > >> > > > > target. I'll also need to take the resultant new device and
> pass
> > >> > > > > it
> > >> > > into
> > >> > > > the
> > >> > > > > VM.
> > >> > > > >
> > >> > > > > Does this sound reasonable? Please call me out on anything I
> > seem
> > >> > > > incorrect
> > >> > > > > about. :)
> > >> > > > >
> > >> > > > > Thanks for all the thought on this, Marcus!
> > >> > > > >
> > >> > > > >
> > >> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> > >> > shadowsor@gmail.com>
> > >> > > > > wrote:
> > >> > > > >>
> > >> > > > >> Perfect. You'll have a domain def ( the VM), a disk def, and
> > the
> > >> > > attach
> > >> > > > >> the disk def to the vm. You may need to do your own
> > >> > > > >> StorageAdaptor
> > >> > and
> > >> > > > run
> > >> > > > >> iscsiadm commands to accomplish that, depending on how the
> > >> > > > >> libvirt
> > >> > > iscsi
> > >> > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't
> how
> > it
> > >> > > works
> > >> > > > on
> > >> > > > >> xen at the moment, nor is it ideal.
> > >> > > > >>
> > >> > > > >> Your plugin will handle acls as far as which host can see
> which
> > >> > > > >> luns
> > >> > > as
> > >> > > > >> well, I remember discussing that months ago, so that a disk
> > won't
> > >> > > > >> be
> > >> > > > >> connected until the hypervisor has exclusive access, so it
> will
> > >> > > > >> be
> > >> > > safe
> > >> > > > and
> > >> > > > >> fence the disk from rogue nodes that cloudstack loses
> > >> > > > >> connectivity
> > >> > > > with. It
> > >> > > > >> should revoke access to everything but the target host...
> > Except
> > >> > > > >> for
> > >> > > > during
> > >> > > > >> migration but we can discuss that later, there's a migration
> > prep
> > >> > > > process
> > >> > > > >> where the new host can be added to the acls, and the old host
> > can
> > >> > > > >> be
> > >> > > > removed
> > >> > > > >> post migration.
> > >> > > > >>
> > >> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > >> > > mike.tutkowski@solidfire.com
> > >> > > > >
> > >> > > > >> wrote:
> > >> > > > >>>
> > >> > > > >>> Yeah, that would be ideal.
> > >> > > > >>>
> > >> > > > >>> So, I would still need to discover the iSCSI target, log in
> to
> > >> > > > >>> it,
> > >> > > then
> > >> > > > >>> figure out what /dev/sdX was created as a result (and leave
> it
> > >> > > > >>> as
> > >> > is
> > >> > > -
> > >> > > > do
> > >> > > > >>> not format it with any file system...clustered or not). I
> > would
> > >> > pass
> > >> > > > that
> > >> > > > >>> device into the VM.
> > >> > > > >>>
> > >> > > > >>> Kind of accurate?
> > >> > > > >>>
> > >> > > > >>>
> > >> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > >> > > shadowsor@gmail.com>
> > >> > > > >>> wrote:
> > >> > > > >>>>
> > >> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
> definitions.
> > >> > There
> > >> > > > are
> > >> > > > >>>> ones that work for block devices rather than files. You can
> > >> > > > >>>> piggy
> > >> > > > back off
> > >> > > > >>>> of the existing disk definitions and attach it to the vm
> as a
> > >> > block
> > >> > > > device.
> > >> > > > >>>> The definition is an XML string per libvirt XML format. You
> > may
> > >> > want
> > >> > > > to use
> > >> > > > >>>> an alternate path to the disk rather than just /dev/sdx
> like
> > I
> > >> > > > mentioned,
> > >> > > > >>>> there are by-id paths to the block devices, as well as
> other
> > >> > > > >>>> ones
> > >> > > > that will
> > >> > > > >>>> be consistent and easier for management, not sure how
> > familiar
> > >> > > > >>>> you
> > >> > > > are with
> > >> > > > >>>> device naming on Linux.
> > >> > > > >>>>
> > >> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> > >> > > > >>>> <sh...@gmail.com>
> > >> > > > wrote:
> > >> > > > >>>>>
> > >> > > > >>>>> No, as that would rely on virtualized network/iscsi
> > initiator
> > >> > > inside
> > >> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun
> > on
> > >> > > > hypervisor) as
> > >> > > > >>>>> a disk to the VM, rather than attaching some image file
> that
> > >> > > resides
> > >> > > > on a
> > >> > > > >>>>> filesystem, mounted on the host, living on a target.
> > >> > > > >>>>>
> > >> > > > >>>>> Actually, if you plan on the storage supporting live
> > migration
> > >> > > > >>>>> I
> > >> > > > think
> > >> > > > >>>>> this is the only way. You can't put a filesystem on it and
> > >> > > > >>>>> mount
> > >> > it
> > >> > > > in two
> > >> > > > >>>>> places to facilitate migration unless its a clustered
> > >> > > > >>>>> filesystem,
> > >> > > in
> > >> > > > which
> > >> > > > >>>>> case you're back to shared mount point.
> > >> > > > >>>>>
> > >> > > > >>>>> As far as I'm aware, the xenserver SR style is basically
> LVM
> > >> > with a
> > >> > > > xen
> > >> > > > >>>>> specific cluster management, a custom CLVM. They don't
> use a
> > >> > > > filesystem
> > >> > > > >>>>> either.
> > >> > > > >>>>>
> > >> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > >> > > > >>>>> <mi...@solidfire.com> wrote:
> > >> > > > >>>>>>
> > >> > > > >>>>>> When you say, "wire up the lun directly to the vm," do
> you
> > >> > > > >>>>>> mean
> > >> > > > >>>>>> circumventing the hypervisor? I didn't think we could do
> > that
> > >> > > > >>>>>> in
> > >> > > CS.
> > >> > > > >>>>>> OpenStack, on the other hand, always circumvents the
> > >> > > > >>>>>> hypervisor,
> > >> > > as
> > >> > > > far as I
> > >> > > > >>>>>> know.
> > >> > > > >>>>>>
> > >> > > > >>>>>>
> > >> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > >> > > > shadowsor@gmail.com>
> > >> > > > >>>>>> wrote:
> > >> > > > >>>>>>>
> > >> > > > >>>>>>> Better to wire up the lun directly to the vm unless
> there
> > is
> > >> > > > >>>>>>> a
> > >> > > good
> > >> > > > >>>>>>> reason not to.
> > >> > > > >>>>>>>
> > >> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> > >> > shadowsor@gmail.com>
> > >> > > > >>>>>>> wrote:
> > >> > > > >>>>>>>>
> > >> > > > >>>>>>>> You could do that, but as mentioned I think its a
> mistake
> > >> > > > >>>>>>>> to
> > >> > go
> > >> > > to
> > >> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to
> > luns
> > >> > and
> > >> > > > then putting
> > >> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a
> QCOW2
> > >> > > > >>>>>>>> or
> > >> > > even
> > >> > > > RAW disk
> > >> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops
> along
> > >> > > > >>>>>>>> the
> > >> > > > way, and have
> > >> > > > >>>>>>>> more overhead with the filesystem and its journaling,
> > etc.
> > >> > > > >>>>>>>>
> > >> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > >> > > > >>>>>>>> <mi...@solidfire.com> wrote:
> > >> > > > >>>>>>>>>
> > >> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM
> > with
> > >> > CS.
> > >> > > > >>>>>>>>>
> > >> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today
> is
> > by
> > >> > > > >>>>>>>>> selecting SharedMountPoint and specifying the location
> > of
> > >> > > > >>>>>>>>> the
> > >> > > > share.
> > >> > > > >>>>>>>>>
> > >> > > > >>>>>>>>> They can set up their share using Open iSCSI by
> > >> > > > >>>>>>>>> discovering
> > >> > > their
> > >> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
> > somewhere
> > >> > > > >>>>>>>>> on
> > >> > > > their file
> > >> > > > >>>>>>>>> system.
> > >> > > > >>>>>>>>>
> > >> > > > >>>>>>>>> Would it make sense for me to just do that discovery,
> > >> > > > >>>>>>>>> logging
> > >> > > in,
> > >> > > > >>>>>>>>> and mounting behind the scenes for them and letting
> the
> > >> > current
> > >> > > > code manage
> > >> > > > >>>>>>>>> the rest as it currently does?
> > >> > > > >>>>>>>>>
> > >> > > > >>>>>>>>>
> > >> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > >> > > > >>>>>>>>> <sh...@gmail.com> wrote:
> > >> > > > >>>>>>>>>>
> > >> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need
> to
> > >> > catch
> > >> > > up
> > >> > > > >>>>>>>>>> on the work done in KVM, but this is basically just
> > disk
> > >> > > > snapshots + memory
> > >> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably
> be
> > >> > handled
> > >> > > > by the SAN,
> > >> > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
> > >> > something
> > >> > > > else. This is
> > >> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will
> want
> > to
> > >> > see
> > >> > > > how others are
> > >> > > > >>>>>>>>>> planning theirs.
> > >> > > > >>>>>>>>>>
> > >> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > >> > > shadowsor@gmail.com
> > >> > > > >
> > >> > > > >>>>>>>>>> wrote:
> > >> > > > >>>>>>>>>>>
> > >> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
> > >> > > > >>>>>>>>>>> style
> > >> > on
> > >> > > > an
> > >> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> > >> > > > >>>>>>>>>>> format.
> > >> > > > Otherwise you're
> > >> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
> > creating
> > >> > > > >>>>>>>>>>> a
> > >> > > > QCOW2 disk image,
> > >> > > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
> > >> > > > >>>>>>>>>>>
> > >> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to
> > the
> > >> > VM,
> > >> > > > and
> > >> > > > >>>>>>>>>>> handling snapshots on the San side via the storage
> > >> > > > >>>>>>>>>>> plugin
> > >> > is
> > >> > > > best. My
> > >> > > > >>>>>>>>>>> impression from the storage plugin refactor was that
> > >> > > > >>>>>>>>>>> there
> > >> > > was
> > >> > > > a snapshot
> > >> > > > >>>>>>>>>>> service that would allow the San to handle
> snapshots.
> > >> > > > >>>>>>>>>>>
> > >> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > >> > > > shadowsor@gmail.com>
> > >> > > > >>>>>>>>>>> wrote:
> > >> > > > >>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
> > back
> > >> > end,
> > >> > > > if
> > >> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
> could
> > >> > > > >>>>>>>>>>>> call
> > >> > > > your plugin for
> > >> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor
> agnostic.
> > As
> > >> > far
> > >> > > > as space, that
> > >> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours,
> > we
> > >> > carve
> > >> > > > out luns from a
> > >> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool
> and
> > is
> > >> > > > independent of the
> > >> > > > >>>>>>>>>>>> LUN size the host sees.
> > >> > > > >>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > >> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>> Hey Marcus,
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
> libvirt
> > >> > > > >>>>>>>>>>>>> won't
> > >> > > > work
> > >> > > > >>>>>>>>>>>>> when you take into consideration hypervisor
> > snapshots?
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot,
> > the
> > >> > VDI
> > >> > > > for
> > >> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
> > repository
> > >> > > > >>>>>>>>>>>>> as
> > >> > > the
> > >> > > > volume is on.
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> > >> > > > >>>>>>>>>>>>> XenServer
> > >> > > and
> > >> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> > >> > snapshots
> > >> > > > in 4.2) is I'd
> > >> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the
> > user
> > >> > > > requested for the
> > >> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
> > >> > > > >>>>>>>>>>>>> thinly
> > >> > > > provisions volumes,
> > >> > > > >>>>>>>>>>>>> so the space is not actually used unless it needs
> to
> > >> > > > >>>>>>>>>>>>> be).
> > >> > > > The CloudStack
> > >> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN
> volume
> > >> > until a
> > >> > > > hypervisor
> > >> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside
> > on
> > >> > > > >>>>>>>>>>>>> the
> > >> > > > SAN volume.
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> > >> > > > >>>>>>>>>>>>> creation
> > >> > of
> > >> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
> > even
> > >> > > > >>>>>>>>>>>>> if
> > >> > > > there were support
> > >> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN
> per
> > >> > > > >>>>>>>>>>>>> iSCSI
> > >> > > > target), then I
> > >> > > > >>>>>>>>>>>>> don't see how using this model will work.
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way
> > this
> > >> > > works
> > >> > > > >>>>>>>>>>>>> with DIR?
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>> What do you think?
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>> Thanks
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > >> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > >> > > > >>>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
> > access
> > >> > > today.
> > >> > > > >>>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might
> > as
> > >> > well
> > >> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > >> > > > >>>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> > >> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > >> > > > >>>>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I
> believe
> > >> > > > >>>>>>>>>>>>>>> it
> > >> > > just
> > >> > > > >>>>>>>>>>>>>>> acts like a
> > >> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that.
> > The
> > >> > > > end-user
> > >> > > > >>>>>>>>>>>>>>> is
> > >> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all
> > KVM
> > >> > hosts
> > >> > > > can
> > >> > > > >>>>>>>>>>>>>>> access,
> > >> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing
> > the
> > >> > > > storage.
> > >> > > > >>>>>>>>>>>>>>> It could
> > >> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
> > >> > > > >>>>>>>>>>>>>>> filesystem,
> > >> > > > >>>>>>>>>>>>>>> cloudstack just
> > >> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
> > >> > > > >>>>>>>>>>>>>>> images.
> > >> > > > >>>>>>>>>>>>>>>
> > >> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> > >> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > >> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at
> > the
> > >> > same
> > >> > > > >>>>>>>>>>>>>>> > time.
> > >> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > >> > > > >>>>>>>>>>>>>>> >
> > >> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
> Tutkowski
> > >> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> > >> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage
> pools:
> > >> > > > >>>>>>>>>>>>>>> >>
> > >> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> > >> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> > >> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
> > >> > > > >>>>>>>>>>>>>>> >> default              active     yes
> > >> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> > >> > > > >>>>>>>>>>>>>>> >>
> > >> > > > >>>>>>>>>>>>>>> >>
> > >> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
> Tutkowski
> > >> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> > >> > > > >>>>>>>>>>>>>>> >>>
> > >> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> > >> > > > >>>>>>>>>>>>>>> >>>
> > >> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> > >> > > > >>>>>>>>>>>>>>> >>>
> > >> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
> pool
> > >> > based
> > >> > > on
> > >> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> > >> > > > >>>>>>>>>>>>>>> >>>
> > >> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have
> > one
> > >> > LUN,
> > >> > > > so
> > >> > > > >>>>>>>>>>>>>>> >>> there would only
> > >> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> > >> > > (libvirt)
> > >> > > > >>>>>>>>>>>>>>> >>> storage pool.
> > >> > > > >>>>>>>>>>>>>>> >>>
> > >> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
> > >> > > > >>>>>>>>>>>>>>> >>> iSCSI
> > >> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> > >> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
> > >> > > > >>>>>>>>>>>>>>> >>> libvirt
> > >> > > does
> > >> > > > >>>>>>>>>>>>>>> >>> not support
> > >> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> > >> > > > >>>>>>>>>>>>>>> >>>
> > >> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to
> see
> > >> > > > >>>>>>>>>>>>>>> >>> if
> > >> > > > libvirt
> > >> > > > >>>>>>>>>>>>>>> >>> supports
> > >> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
> > mentioned,
> > >> > since
> > >> > > > >>>>>>>>>>>>>>> >>> each one of its
> > >> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> > >> > > > targets/LUNs).
> > >> > > > >>>>>>>>>>>>>>> >>>
> > >> > > > >>>>>>>>>>>>>>> >>>
> > >> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
> > Tutkowski
> > >> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> > >> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> > >> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>         }
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>         @Override
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>         }
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>     }
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
> > >> > > > >>>>>>>>>>>>>>> >>>> currently
> > >> > > being
> > >> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
> > >> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting
> at.
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
> > >> > > > >>>>>>>>>>>>>>> >>>> someone
> > >> > > > >>>>>>>>>>>>>>> >>>> selects the
> > >> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
> > iSCSI,
> > >> > > > >>>>>>>>>>>>>>> >>>> is
> > >> > > > that
> > >> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> > >> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>> Thanks!
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
> > >> > > > >>>>>>>>>>>>>>> >>>> Sorensen
> > >> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> > >> > > > >>>>>>>>>>>>>>> >>>> wrote:
> > >> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> > >> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > http://libvirt.org/storage.html#StorageBackendISCSI
> > >> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the
> iSCSI
> > >> > server,
> > >> > > > and
> > >> > > > >>>>>>>>>>>>>>> >>>>> cannot be
> > >> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
> > >> > > > >>>>>>>>>>>>>>> >>>>> believe
> > >> > > your
> > >> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
> > >> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
> > logging
> > >> > > > >>>>>>>>>>>>>>> >>>>> in
> > >> > > and
> > >> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> > >> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
> work
> > >> > > > >>>>>>>>>>>>>>> >>>>> in
> > >> > the
> > >> > > > Xen
> > >> > > > >>>>>>>>>>>>>>> >>>>> stuff).
> > >> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
> > >> > > > >>>>>>>>>>>>>>> >>>>> provides
> > >> > a
> > >> > > > 1:1
> > >> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> > >> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
> > device
> > >> > > > >>>>>>>>>>>>>>> >>>>> as
> > >> > a
> > >> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> > >> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
> > more
> > >> > about
> > >> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> > >> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write
> > your
> > >> > own
> > >> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> > >> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
> > >> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
> > >> > >  We
> > >> > > > >>>>>>>>>>>>>>> >>>>> can cross that
> > >> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> > >> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see
> the
> > >> > > > >>>>>>>>>>>>>>> >>>>> java
> > >> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> > >> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>>
> > http://libvirt.org/sources/java/javadoc/ Normally,
> > >> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
> > >> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made
> > to
> > >> > that
> > >> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> > >> > > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to
> see
> > >> > > > >>>>>>>>>>>>>>> >>>>> how
> > >> > > that
> > >> > > > >>>>>>>>>>>>>>> >>>>> is done for
> > >> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some
> test
> > >> > > > >>>>>>>>>>>>>>> >>>>> java
> > >> > > code
> > >> > > > >>>>>>>>>>>>>>> >>>>> to see if you
> > >> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
> > iscsi
> > >> > > storage
> > >> > > > >>>>>>>>>>>>>>> >>>>> pools before you
> > >> > > > >>>>>>>>>>>>>>> >>>>> get started.
> > >> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
> > >> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
> > >> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> > >> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate
> libvirt
> > >> > > > >>>>>>>>>>>>>>> >>>>> > more,
> > >> > > but
> > >> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
> > >> > > > >>>>>>>>>>>>>>> >>>>> > supports
> > >> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
> > >> > > > >>>>>>>>>>>>>>> >>>>> > targets,
> > >> > > > >>>>>>>>>>>>>>> >>>>> > right?
> > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > >> > > > >>>>>>>>>>>>>>> >>>>> >
> > >> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
> > >> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
> > >> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of
> > the
> > >> > > classes
> > >> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> > >> > > > >>>>>>>>>>>>>>> >>>>> >> last
> > >> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
> > >> > Sorensen
> > >> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need
> the
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
> > >> > > for
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
> > >> > > > login.
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
> > >> > and
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
> > Tutkowski"
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
> > >> > I
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
> storage
> > >> > > > framework
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create
> and
> > >> > delete
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish
> a
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
> > >> > > > mapping
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for
> QoS.
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
> expected
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > >> > > > admin
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
> volumes
> > >> > would
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
> friendly).
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme
> work, I
> > >> > needed
> > >> > > > to
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
> with
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
> > >> > > work
> > >> > > > on
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
> how
> > I
> > >> > will
> > >> > > > need
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
> have
> > to
> > >> > > expect
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it
> > for
> > >> > this
> > >> > > to
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer,
> SolidFire
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
> > >> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
This should answer your question, I believe:

* When you add primary storage that is based on the SolidFire plug-in, you
specify info like host, port, number of bytes from the SAN that CS can use,
number of IOPS from the SAN that CS can use, among other info.

* When a volume is attached for the first time and the storage framework
asks my plug-in to create a volume (LUN) on the SAN, my plug-in increments
the used_bytes field of the cloud.storage_pool table. If the used_bytes
would go above the capacity_bytes, then the allocator would not have
selected my plug-in to back the storage. Additionally, if the required IOPS
would bring the SolidFire SAN above the number of IOPS that were dedicated
to CS, the allocator would not have selected my plug-in to back the storage.

* When a CS volume is deleted that uses my plug-in, the storage framework
asks my plug-in to delete the volume (LUN) on the SAN. My plug-in
decrements the used_bytes field of the cloud.storage_pool table.

So, it boils down to this: we don't require the accounting of space and IOPS
to take place on the hypervisor side.
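
Roughly, the bookkeeping on the plug-in side looks like the sketch below. It is
only an illustration of the flow described above: every class and method name
here is a placeholder I'm using for the example, not the actual CloudStack or
SolidFire code.

    // Illustrative sketch only -- the real plug-in goes through the storage
    // framework and CloudStack DAOs; every type below is a stand-in.
    public class SolidFireAccountingSketch {

        interface PoolDao {                      // stands in for the cloud.storage_pool DAO
            PoolRecord findById(long id);
            void update(PoolRecord pool);
        }

        static class PoolRecord {
            long capacityBytes;                  // capacity_bytes column
            long usedBytes;                      // used_bytes column
        }

        interface SanClient {                    // stands in for the SolidFire API wrapper
            String createLun(long sizeInBytes);  // returns the IQN of the new LUN
            void deleteLun(String iqn);
        }

        private final PoolDao poolDao;
        private final SanClient san;

        SolidFireAccountingSketch(PoolDao poolDao, SanClient san) {
            this.poolDao = poolDao;
            this.san = san;
        }

        // Invoked (via the storage framework) when a CS volume is first attached.
        public String createVolume(long poolId, long sizeInBytes) {
            PoolRecord pool = poolDao.findById(poolId);
            if (pool.usedBytes + sizeInBytes > pool.capacityBytes) {
                // The allocator should already have skipped this pool; IOPS are
                // checked the same way against the number dedicated to CS.
                throw new IllegalStateException("SAN capacity dedicated to CloudStack is exhausted");
            }
            String iqn = san.createLun(sizeInBytes);   // 1:1 CS volume : SAN LUN
            pool.usedBytes += sizeInBytes;
            poolDao.update(pool);
            return iqn;
        }

        // Invoked when the CS volume is deleted.
        public void deleteVolume(long poolId, long sizeInBytes, String iqn) {
            PoolRecord pool = poolDao.findById(poolId);
            san.deleteLun(iqn);
            pool.usedBytes = Math.max(0, pool.usedBytes - sizeInBytes);
            poolDao.update(pool);
        }
    }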


On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Ok, on most storage pools it shows how many GB free/used when listing
> the pool both via API and in the UI. I'm guessing those are empty then
> for the solid fire storage, but it seems like the user should have to
> define some sort of pool that the luns get carved out of, and you
> should be able to get the stats for that, right? Or is a solid fire
> appliance only one pool per appliance? This isn't about billing, but
> just so cloudstack itself knows whether or not there is space left on
> the storage device, so cloudstack can go on allocating from a
> different primary storage as this one fills up. There are also
> notifications and things. It seems like there should be a call you can
> handle for this, maybe Edison knows.
>
> On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <sh...@gmail.com>
> wrote:
> > You respond to more than attach and detach, right? Don't you create luns
> as
> > well? Or are you just referring to the hypervisor stuff?
> >
> > On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <mi...@solidfire.com>
> > wrote:
> >>
> >> Hi Marcus,
> >>
> >> I never need to respond to a CreateStoragePool call for either XenServer
> >> or
> >> VMware.
> >>
> >> What happens is I respond only to the Attach- and Detach-volume
> commands.
> >>
> >> Let's say an attach comes in:
> >>
> >> In this case, I check to see if the storage is "managed." Talking
> >> XenServer
> >> here, if it is, I log in to the LUN that is the disk we want to attach.
> >> After, if this is the first time attaching this disk, I create an SR
> and a
> >> VDI within the SR. If it is not the first time attaching this disk, the
> >> LUN
> >> already has the SR and VDI on it.
> >>
> >> Once this is done, I let the normal "attach" logic run because this
> logic
> >> expected an SR and a VDI and now it has it.
> >>
> >> It's the same thing for VMware: Just substitute datastore for SR and
> VMDK
> >> for VDI.
> >>
> >> Does that make sense?
> >>
> >> Thanks!
> >>
> >>
> >> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
> >> <sh...@gmail.com>wrote:
> >>
> >> > What do you do with Xen? I imagine the user enters the SAN details when
> >> > registering the pool? And the pool details are basically just
> >> > instructions on how to log into a target, correct?
> >> >
> >> > You can choose to log in a KVM host to the target during
> >> > createStoragePool
> >> > and save the pool in a map, or just save the pool info in a map for
> >> > future
> >> > reference by uuid, for when you do need to log in. The
> createStoragePool
> >> > then just becomes a way to save the pool info to the agent.
> Personally,
> >> > I'd
> >> > log in on the pool create and look/scan for specific luns when they're
> >> > needed, but I haven't thought it through thoroughly. I just say that
> >> > mainly
> >> > because login only happens once, the first time the pool is used, and
> >> > every
> >> > other storage command is about discovering new luns or maybe
> >> > deleting/disconnecting luns no longer needed. On the other hand, you
> >> > could
> >> > do all of the above: log in on pool create, then also check if you're
> >> > logged in on other commands and log in if you've lost connection.
> >> >
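
That is the direction I was leaning toward as well. A minimal sketch of the
"remember the pool, log in when needed" idea might look like the following;
the names only loosely mirror StorageAdaptor, and none of this is the real
interface:

    // Sketch of "record the pool on createStoragePool, reuse it later".
    // Not the real StorageAdaptor interface, just the shape of the idea.
    public class ManagedIscsiPoolTrackerSketch {

        static class PoolInfo {
            String uuid;
            String host;
            int port;
            boolean loggedIn;
        }

        private final java.util.Map<String, PoolInfo> pools =
                new java.util.concurrent.ConcurrentHashMap<String, PoolInfo>();

        // createStoragePool can be called repeatedly just to verify the pool
        // exists, so it has to be idempotent: record the pool once, return it.
        public PoolInfo createStoragePool(String uuid, String host, int port) {
            PoolInfo existing = pools.get(uuid);
            if (existing != null) {
                return existing;
            }
            PoolInfo p = new PoolInfo();
            p.uuid = uuid;
            p.host = host;
            p.port = port;
            p.loggedIn = false;   // could also log in to the target eagerly here
            pools.put(uuid, p);
            return p;
        }

        // Later calls look the pool up by uuid and re-login if the connection
        // was lost.
        public PoolInfo getStoragePool(String uuid) {
            PoolInfo p = pools.get(uuid);
            if (p != null && !p.loggedIn) {
                // iscsiadm login would go here (see the login sketch further down)
                p.loggedIn = true;
            }
            return p;
        }
    }
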
> >> > With Xen, what does your registered pool show in the UI for
> avail/used
> >> > capacity, and how does it get that info? I assume there is some sort
> of
> >> > disk pool that the luns are carved from, and that your plugin is
> called
> >> > to
> >> > talk to the SAN and expose to the user how much of that pool has been
> >> > allocated. Knowing how you already solve these problems with Xen will
> >> > help
> >> > figure out what to do with KVM.
> >> >
> >> > If this is the case, I think the plugin can continue to handle it
> rather
> >> > than getting details from the agent. I'm not sure if that means nulls
> >> > are
> >> > OK for these on the agent side or what, I need to look at the storage
> >> > plugin arch more closely.
> >> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com>
> >> > wrote:
> >> >
> >> > > Hey Marcus,
> >> > >
> >> > > I'm reviewing your e-mails as I implement the necessary methods in
> new
> >> > > classes.
> >> > >
> >> > > "So, referencing StorageAdaptor.java, createStoragePool accepts all
> of
> >> > > the pool data (host, port, name, path) which would be used to log
> the
> >> > > host into the initiator."
> >> > >
> >> > > Can you tell me, in my case, since a storage pool (primary storage)
> is
> >> > > actually the SAN, I wouldn't really be logging into anything at this
> >> > point,
> >> > > correct?
> >> > >
> >> > > Also, what kind of capacity, available, and used bytes make sense to
> >> > report
> >> > > for KVMStoragePool (since KVMStoragePool represents the SAN in my
> case
> >> > and
> >> > > not an individual LUN)?
> >> > >
> >> > > Thanks!
> >> > >
> >> > >
> >> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> shadowsor@gmail.com
> >> > > >wrote:
> >> > >
> >> > > > Ok, KVM will be close to that, of course, because only the
> >> > > > hypervisor
> >> > > > classes differ, the rest is all mgmt server. Creating a volume is
> >> > > > just
> >> > > > a db entry until it's deployed for the first time.
> >> > > > AttachVolumeCommand
> >> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> >> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> >> > > > StorageAdaptor) to log in the host to the target and then you
> have a
> >> > > > block device.  Maybe libvirt will do that for you, but my quick
> read
> >> > > > made it sound like the iscsi libvirt pool type is actually a pool,
> >> > > > not
> >> > > > a lun or volume, so you'll need to figure out if that works or if
> >> > > > you'll have to use iscsiadm commands.
> >> > > >
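
For reference, the iscsiadm sequence the adaptor would need to drive is
roughly the one below, wrapped here in plain ProcessBuilder calls. The portal
and IQN are invented examples, and a real agent would more likely go through
its existing script helpers rather than ProcessBuilder:

    // Rough sketch of discovering/logging in to a target and locating the
    // resulting block device.
    import java.io.IOException;

    public class IscsiLoginSketch {

        static void run(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + java.util.Arrays.toString(cmd));
            }
        }

        public static void main(String[] args) throws Exception {
            String portal = "192.168.1.100:3260";
            String iqn = "iqn.2010-01.com.solidfire:example-volume";

            // discover the target and log in to it
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

            // udev exposes the LUN under a stable by-path name rather than a
            // bare /dev/sdX, which is easier to hand to the disk definition
            String device = "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
            System.out.println("expect the block device at: " + device);
        }
    }
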
> >> > > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> >> > > > doesn't really manage your pool the way you want), you're going to
> >> > > > have to create a version of KVMStoragePool class and a
> >> > > > StorageAdaptor
> >> > > > class (see LibvirtStoragePool.java and
> LibvirtStorageAdaptor.java),
> >> > > > implementing all of the methods, then in KVMStorageManager.java
> >> > > > there's a "_storageMapper" map. This is used to select the correct
> >> > > > adaptor, you can see in this file that every call first pulls the
> >> > > > correct adaptor out of this map via getStorageAdaptor. So you can
> >> > > > see
> >> > > > a comment in this file that says "add other storage adaptors
> here",
> >> > > > where it puts to this map, this is where you'd register your
> >> > > > adaptor.
> >> > > >
> >> > > > So, referencing StorageAdaptor.java, createStoragePool accepts all
> >> > > > of
> >> > > > the pool data (host, port, name, path) which would be used to log
> >> > > > the
> >> > > > host into the initiator. I *believe* the method getPhysicalDisk
> will
> >> > > > need to do the work of attaching the lun.  AttachVolumeCommand
> calls
> >> > > > this and then creates the XML diskdef and attaches it to the VM.
> >> > > > Now,
> >> > > > one thing you need to know is that createStoragePool is called
> >> > > > often,
> >> > > > sometimes just to make sure the pool is there. You may want to
> >> > > > create
> >> > > > a map in your adaptor class and keep track of pools that have been
> >> > > > created, LibvirtStorageAdaptor doesn't have to do this because it
> >> > > > asks
> >> > > > libvirt about which storage pools exist. There are also calls to
> >> > > > refresh the pool stats, and all of the other calls can be seen in
> >> > > > the
> >> > > > StorageAdaptor as well. There's a createPhysical disk, clone, etc,
> >> > > > but
> >> > > > it's probably a hold-over from 4.1, as I have the vague idea that
> >> > > > volumes are created on the mgmt server via the plugin now, so
> >> > > > whatever
> >> > > > doesn't apply can just be stubbed out (or optionally
> >> > > > extended/reimplemented here, if you don't mind the hosts talking
> to
> >> > > > the san api).
> >> > > >
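
To make the diskdef piece above concrete, handing the raw LUN to the guest
comes down to XML along these lines. The device path and target name are
invented, and this only sketches the sort of definition the disk classes in
LibvirtVMDef would emit:

    // Sketch of the libvirt XML for attaching a raw LUN as a block device.
    public class BlockDiskDefSketch {

        static String blockDiskXml(String devicePath, String targetDev) {
            return "<disk type='block' device='disk'>\n"
                 + "  <driver name='qemu' type='raw' cache='none'/>\n"
                 + "  <source dev='" + devicePath + "'/>\n"
                 + "  <target dev='" + targetDev + "' bus='virtio'/>\n"
                 + "</disk>";
        }

        public static void main(String[] args) {
            String dev = "/dev/disk/by-path/ip-192.168.1.100:3260-iscsi-"
                    + "iqn.2010-01.com.solidfire:example-volume-lun-0";
            // This string could be passed to Domain.attachDevice() through the
            // libvirt Java bindings, or folded into the domain XML at start time.
            System.out.println(blockDiskXml(dev, "vdb"));
        }
    }
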
> >> > > > There is a difference between attaching new volumes and launching
> a
> >> > > > VM
> >> > > > with existing volumes.  In the latter case, the VM definition that
> >> > > > was
> >> > > > passed to the KVM agent includes the disks, (StartCommand).
> >> > > >
> >> > > > I'd be interested in how your pool is defined for Xen, I imagine
> it
> >> > > > would need to be kept the same. Is it just a definition to the SAN
> >> > > > (ip address or some such, port number) and perhaps a volume pool
> >> > > > name?
> >> > > >
> >> > > > > If there is a way for me to update the ACL list on the SAN to
> have
> >> > > only a
> >> > > > > single KVM host have access to the volume, that would be ideal.
> >> > > >
> >> > > > That depends on your SAN API.  I was under the impression that the
> >> > > > storage plugin framework allowed for acls, or for you to do
> whatever
> >> > > > you want for create/attach/delete/snapshot, etc. You'd just call
> >> > > > your
> >> > > > SAN API with the host info for the ACLs prior to when the disk is
> >> > > > attached (or the VM is started).  I'd have to look more at the
> >> > > > framework to know the details, in 4.1 I would do this in
> >> > > > getPhysicalDisk just prior to connecting up the LUN.
> >> > > >
> >> > > >
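
To make the ACL hook concrete, it would be a small fragment like this, called
right before the login/attach; both SAN-side calls are placeholders for
whatever the SolidFire API actually exposes:

    // Placeholder SAN-side ACL calls -- real names depend on the SolidFire API.
    interface SanAclClient {
        void removeAllInitiators(String volumeIqn);
        void allowInitiator(String volumeIqn, String hostInitiatorIqn);
    }

    class AclFencingSketch {
        private final SanAclClient san;

        AclFencingSketch(SanAclClient san) {
            this.san = san;
        }

        // Run just before the host logs in to the LUN (attach / VM start time):
        // leave the target host as the only initiator allowed to see the volume.
        void grantExclusiveAccess(String volumeIqn, String hostInitiatorIqn) {
            san.removeAllInitiators(volumeIqn);
            san.allowInitiator(volumeIqn, hostInitiatorIqn);
        }
    }
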
> >> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> >> > > > <mi...@solidfire.com> wrote:
> >> > > > > OK, yeah, the ACL part will be interesting. That is a bit
> >> > > > > different
> >> > > from
> >> > > > how
> >> > > > > it works with XenServer and VMware.
> >> > > > >
> >> > > > > Just to give you an idea how it works in 4.2 with XenServer:
> >> > > > >
> >> > > > > * The user creates a CS volume (this is just recorded in the
> >> > > > cloud.volumes
> >> > > > > table).
> >> > > > >
> >> > > > > * The user attaches the volume as a disk to a VM for the first
> >> > > > > time
> >> > (if
> >> > > > the
> >> > > > > storage allocator picks the SolidFire plug-in, the storage
> >> > > > > framework
> >> > > > invokes
> >> > > > > a method on the plug-in that creates a volume on the SAN...info
> >> > > > > like
> >> > > the
> >> > > > IQN
> >> > > > > of the SAN volume is recorded in the DB).
> >> > > > >
> >> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed.
> >> > > > > It
> >> > > > > determines based on a flag passed in that the storage in
> question
> >> > > > > is
> >> > > > > "CloudStack-managed" storage (as opposed to "traditional"
> >> > preallocated
> >> > > > > storage). This tells it to discover the iSCSI target. Once
> >> > > > > discovered
> >> > > it
> >> > > > > determines if the iSCSI target already contains a storage
> >> > > > > repository
> >> > > (it
> >> > > > > would if this were a re-attach situation). If it does contain an
> >> > > > > SR
> >> > > > already,
> >> > > > > then there should already be one VDI, as well. If there is no
> SR,
> >> > > > > an
> >> > SR
> >> > > > is
> >> > > > > created and a single VDI is created within it (that takes up
> about
> >> > > > > as
> >> > > > much
> >> > > > > space as was requested for the CloudStack volume).
> >> > > > >
> >> > > > > * The normal attach-volume logic continues (it depends on the
> >> > existence
> >> > > > of
> >> > > > > an SR and a VDI).
> >> > > > >
> >> > > > > The VMware case is essentially the same (mainly just substitute
> >> > > datastore
> >> > > > > for SR and VMDK for VDI).
> >> > > > >
> >> > > > > In both cases, all hosts in the cluster have discovered the
> iSCSI
> >> > > target,
> >> > > > > but only the host that is currently running the VM that is using
> >> > > > > the
> >> > > VDI
> >> > > > (or
> >> > > > > VMKD) is actually using the disk.
> >> > > > >
> >> > > > > Live Migration should be OK because the hypervisors communicate
> >> > > > > with
> >> > > > > whatever metadata they have on the SR (or datastore).
> >> > > > >
> >> > > > > I see what you're saying with KVM, though.
> >> > > > >
> >> > > > > In that case, the hosts are clustered only in CloudStack's eyes.
> >> > > > > CS
> >> > > > controls
> >> > > > > Live Migration. You don't really need a clustered filesystem on
> >> > > > > the
> >> > > LUN.
> >> > > > The
> >> > > > > LUN could be handed over raw to the VM using it.
> >> > > > >
> >> > > > > If there is a way for me to update the ACL list on the SAN to
> have
> >> > > only a
> >> > > > > single KVM host have access to the volume, that would be ideal.
> >> > > > >
> >> > > > > Also, I agree I'll need to use iscsiadm to discover and log in
> to
> >> > > > > the
> >> > > > iSCSI
> >> > > > > target. I'll also need to take the resultant new device and pass
> >> > > > > it
> >> > > into
> >> > > > the
> >> > > > > VM.
> >> > > > >
> >> > > > > Does this sound reasonable? Please call me out on anything I
> seem
> >> > > > incorrect
> >> > > > > about. :)
> >> > > > >
> >> > > > > Thanks for all the thought on this, Marcus!
> >> > > > >
> >> > > > >
> >> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> >> > shadowsor@gmail.com>
> >> > > > > wrote:
> >> > > > >>
> >> > > > >> Perfect. You'll have a domain def ( the VM), a disk def, and
> the
> >> > > attach
> >> > > > >> the disk def to the vm. You may need to do your own
> >> > > > >> StorageAdaptor
> >> > and
> >> > > > run
> >> > > > >> iscsiadm commands to accomplish that, depending on how the
> >> > > > >> libvirt
> >> > > iscsi
> >> > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how
> it
> >> > > works
> >> > > > on
> >> > > > >> xen at the moment, nor is it ideal.
> >> > > > >>
> >> > > > >> Your plugin will handle acls as far as which host can see which
> >> > > > >> luns
> >> > > as
> >> > > > >> well, I remember discussing that months ago, so that a disk
> won't
> >> > > > >> be
> >> > > > >> connected until the hypervisor has exclusive access, so it will
> >> > > > >> be
> >> > > safe
> >> > > > and
> >> > > > >> fence the disk from rogue nodes that cloudstack loses
> >> > > > >> connectivity
> >> > > > with. It
> >> > > > >> should revoke access to everything but the target host...
> Except
> >> > > > >> for
> >> > > > during
> >> > > > >> migration but we can discuss that later, there's a migration
> prep
> >> > > > process
> >> > > > >> where the new host can be added to the acls, and the old host
> can
> >> > > > >> be
> >> > > > removed
> >> > > > >> post migration.
> >> > > > >>
> >> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> >> > > mike.tutkowski@solidfire.com
> >> > > > >
> >> > > > >> wrote:
> >> > > > >>>
> >> > > > >>> Yeah, that would be ideal.
> >> > > > >>>
> >> > > > >>> So, I would still need to discover the iSCSI target, log in to
> >> > > > >>> it,
> >> > > then
> >> > > > >>> figure out what /dev/sdX was created as a result (and leave it
> >> > > > >>> as
> >> > is
> >> > > -
> >> > > > do
> >> > > > >>> not format it with any file system...clustered or not). I
> would
> >> > pass
> >> > > > that
> >> > > > >>> device into the VM.
> >> > > > >>>
> >> > > > >>> Kind of accurate?
> >> > > > >>>
> >> > > > >>>
> >> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> >> > > shadowsor@gmail.com>
> >> > > > >>> wrote:
> >> > > > >>>>
> >> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
> >> > There
> >> > > > are
> >> > > > >>>> ones that work for block devices rather than files. You can
> >> > > > >>>> piggy
> >> > > > back off
> >> > > > >>>> of the existing disk definitions and attach it to the vm as a
> >> > block
> >> > > > device.
> >> > > > >>>> The definition is an XML string per libvirt XML format. You
> may
> >> > want
> >> > > > to use
> >> > > > >>>> an alternate path to the disk rather than just /dev/sdx like
> I
> >> > > > mentioned,
> >> > > > >>>> there are by-id paths to the block devices, as well as other
> >> > > > >>>> ones
> >> > > > that will
> >> > > > >>>> be consistent and easier for management, not sure how
> familiar
> >> > > > >>>> you
> >> > > > are with
> >> > > > >>>> device naming on Linux.
> >> > > > >>>>
> >> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> >> > > > >>>> <sh...@gmail.com>
> >> > > > wrote:
> >> > > > >>>>>
> >> > > > >>>>> No, as that would rely on virtualized network/iscsi
> initiator
> >> > > inside
> >> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun
> on
> >> > > > hypervisor) as
> >> > > > >>>>> a disk to the VM, rather than attaching some image file that
> >> > > resides
> >> > > > on a
> >> > > > >>>>> filesystem, mounted on the host, living on a target.
> >> > > > >>>>>
> >> > > > >>>>> Actually, if you plan on the storage supporting live
> migration
> >> > > > >>>>> I
> >> > > > think
> >> > > > >>>>> this is the only way. You can't put a filesystem on it and
> >> > > > >>>>> mount
> >> > it
> >> > > > in two
> >> > > > >>>>> places to facilitate migration unless its a clustered
> >> > > > >>>>> filesystem,
> >> > > in
> >> > > > which
> >> > > > >>>>> case you're back to shared mount point.
> >> > > > >>>>>
> >> > > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
> >> > with a
> >> > > > xen
> >> > > > >>>>> specific cluster management, a custom CLVM. They don't use a
> >> > > > filesystem
> >> > > > >>>>> either.
> >> > > > >>>>>
> >> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> >> > > > >>>>> <mi...@solidfire.com> wrote:
> >> > > > >>>>>>
> >> > > > >>>>>> When you say, "wire up the lun directly to the vm," do you
> >> > > > >>>>>> mean
> >> > > > >>>>>> circumventing the hypervisor? I didn't think we could do
> that
> >> > > > >>>>>> in
> >> > > CS.
> >> > > > >>>>>> OpenStack, on the other hand, always circumvents the
> >> > > > >>>>>> hypervisor,
> >> > > as
> >> > > > far as I
> >> > > > >>>>>> know.
> >> > > > >>>>>>
> >> > > > >>>>>>
> >> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> >> > > > shadowsor@gmail.com>
> >> > > > >>>>>> wrote:
> >> > > > >>>>>>>
> >> > > > >>>>>>> Better to wire up the lun directly to the vm unless there
> is
> >> > > > >>>>>>> a
> >> > > good
> >> > > > >>>>>>> reason not to.
> >> > > > >>>>>>>
> >> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> >> > shadowsor@gmail.com>
> >> > > > >>>>>>> wrote:
> >> > > > >>>>>>>>
> >> > > > >>>>>>>> You could do that, but as mentioned I think its a mistake
> >> > > > >>>>>>>> to
> >> > go
> >> > > to
> >> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to
> luns
> >> > and
> >> > > > then putting
> >> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2
> >> > > > >>>>>>>> or
> >> > > even
> >> > > > RAW disk
> >> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along
> >> > > > >>>>>>>> the
> >> > > > way, and have
> >> > > > >>>>>>>> more overhead with the filesystem and its journaling,
> etc.
> >> > > > >>>>>>>>
> >> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> >> > > > >>>>>>>> <mi...@solidfire.com> wrote:
> >> > > > >>>>>>>>>
> >> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM
> with
> >> > CS.
> >> > > > >>>>>>>>>
> >> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is
> by
> >> > > > >>>>>>>>> selecting SharedMountPoint and specifying the location
> of
> >> > > > >>>>>>>>> the
> >> > > > share.
> >> > > > >>>>>>>>>
> >> > > > >>>>>>>>> They can set up their share using Open iSCSI by
> >> > > > >>>>>>>>> discovering
> >> > > their
> >> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
> somewhere
> >> > > > >>>>>>>>> on
> >> > > > their file
> >> > > > >>>>>>>>> system.
> >> > > > >>>>>>>>>
> >> > > > >>>>>>>>> Would it make sense for me to just do that discovery,
> >> > > > >>>>>>>>> logging
> >> > > in,
> >> > > > >>>>>>>>> and mounting behind the scenes for them and letting the
> >> > current
> >> > > > code manage
> >> > > > >>>>>>>>> the rest as it currently does?
> >> > > > >>>>>>>>>
> >> > > > >>>>>>>>>
> >> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> >> > > > >>>>>>>>> <sh...@gmail.com> wrote:
> >> > > > >>>>>>>>>>
> >> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
> >> > catch
> >> > > up
> >> > > > >>>>>>>>>> on the work done in KVM, but this is basically just
> disk
> >> > > > snapshots + memory
> >> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
> >> > handled
> >> > > > by the SAN,
> >> > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
> >> > something
> >> > > > else. This is
> >> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want
> to
> >> > see
> >> > > > how others are
> >> > > > >>>>>>>>>> planning theirs.
> >> > > > >>>>>>>>>>
> >> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> >> > > shadowsor@gmail.com
> >> > > > >
> >> > > > >>>>>>>>>> wrote:
> >> > > > >>>>>>>>>>>
> >> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
> >> > > > >>>>>>>>>>> style
> >> > on
> >> > > > an
> >> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> >> > > > >>>>>>>>>>> format.
> >> > > > Otherwise you're
> >> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
> creating
> >> > > > >>>>>>>>>>> a
> >> > > > QCOW2 disk image,
> >> > > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
> >> > > > >>>>>>>>>>>
> >> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to
> the
> >> > VM,
> >> > > > and
> >> > > > >>>>>>>>>>> handling snapshots on the San side via the storage
> >> > > > >>>>>>>>>>> plugin
> >> > is
> >> > > > best. My
> >> > > > >>>>>>>>>>> impression from the storage plugin refactor was that
> >> > > > >>>>>>>>>>> there
> >> > > was
> >> > > > a snapshot
> >> > > > >>>>>>>>>>> service that would allow the San to handle snapshots.
> >> > > > >>>>>>>>>>>
> >> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> >> > > > shadowsor@gmail.com>
> >> > > > >>>>>>>>>>> wrote:
> >> > > > >>>>>>>>>>>>
> >> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
> back
> >> > end,
> >> > > > if
> >> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
> >> > > > >>>>>>>>>>>> call
> >> > > > your plugin for
> >> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic.
> As
> >> > far
> >> > > > as space, that
> >> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours,
> we
> >> > carve
> >> > > > out luns from a
> >> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool and
> is
> >> > > > independent of the
> >> > > > >>>>>>>>>>>> LUN size the host sees.
> >> > > > >>>>>>>>>>>>
> >> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> >> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> Hey Marcus,
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
> >> > > > >>>>>>>>>>>>> won't
> >> > > > work
> >> > > > >>>>>>>>>>>>> when you take into consideration hypervisor
> snapshots?
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot,
> the
> >> > VDI
> >> > > > for
> >> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
> repository
> >> > > > >>>>>>>>>>>>> as
> >> > > the
> >> > > > volume is on.
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> >> > > > >>>>>>>>>>>>> XenServer
> >> > > and
> >> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> >> > snapshots
> >> > > > in 4.2) is I'd
> >> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the
> user
> >> > > > requested for the
> >> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
> >> > > > >>>>>>>>>>>>> thinly
> >> > > > provisions volumes,
> >> > > > >>>>>>>>>>>>> so the space is not actually used unless it needs to
> >> > > > >>>>>>>>>>>>> be).
> >> > > > The CloudStack
> >> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
> >> > until a
> >> > > > hypervisor
> >> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside
> on
> >> > > > >>>>>>>>>>>>> the
> >> > > > SAN volume.
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> >> > > > >>>>>>>>>>>>> creation
> >> > of
> >> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
> even
> >> > > > >>>>>>>>>>>>> if
> >> > > > there were support
> >> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
> >> > > > >>>>>>>>>>>>> iSCSI
> >> > > > target), then I
> >> > > > >>>>>>>>>>>>> don't see how using this model will work.
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way
> this
> >> > > works
> >> > > > >>>>>>>>>>>>> with DIR?
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> What do you think?
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> Thanks
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> >> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >> > > > >>>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
> access
> >> > > today.
> >> > > > >>>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might
> as
> >> > well
> >> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> >> > > > >>>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> >> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >> > > > >>>>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe
> >> > > > >>>>>>>>>>>>>>> it
> >> > > just
> >> > > > >>>>>>>>>>>>>>> acts like a
> >> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that.
> The
> >> > > > end-user
> >> > > > >>>>>>>>>>>>>>> is
> >> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all
> KVM
> >> > hosts
> >> > > > can
> >> > > > >>>>>>>>>>>>>>> access,
> >> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing
> the
> >> > > > storage.
> >> > > > >>>>>>>>>>>>>>> It could
> >> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
> >> > > > >>>>>>>>>>>>>>> filesystem,
> >> > > > >>>>>>>>>>>>>>> cloudstack just
> >> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
> >> > > > >>>>>>>>>>>>>>> images.
> >> > > > >>>>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> >> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at
> the
> >> > same
> >> > > > >>>>>>>>>>>>>>> > time.
> >> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> >> > > > >>>>>>>>>>>>>>> >
> >> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> >> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> >> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> >> > > > >>>>>>>>>>>>>>> >>
> >> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> >> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> >> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
> >> > > > >>>>>>>>>>>>>>> >> default              active     yes
> >> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> >> > > > >>>>>>>>>>>>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>
> >> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> >> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
> >> > based
> >> > > on
> >> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have
> one
> >> > LUN,
> >> > > > so
> >> > > > >>>>>>>>>>>>>>> >>> there would only
> >> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> >> > > (libvirt)
> >> > > > >>>>>>>>>>>>>>> >>> storage pool.
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
> >> > > > >>>>>>>>>>>>>>> >>> iSCSI
> >> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> >> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
> >> > > > >>>>>>>>>>>>>>> >>> libvirt
> >> > > does
> >> > > > >>>>>>>>>>>>>>> >>> not support
> >> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see
> >> > > > >>>>>>>>>>>>>>> >>> if
> >> > > > libvirt
> >> > > > >>>>>>>>>>>>>>> >>> supports
> >> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
> mentioned,
> >> > since
> >> > > > >>>>>>>>>>>>>>> >>> each one of its
> >> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> >> > > > targets/LUNs).
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
> Tutkowski
> >> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> >> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> >> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>         }
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>         @Override
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>         }
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>     }
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
> >> > > > >>>>>>>>>>>>>>> >>>> currently
> >> > > being
> >> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
> >> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
> >> > > > >>>>>>>>>>>>>>> >>>> someone
> >> > > > >>>>>>>>>>>>>>> >>>> selects the
> >> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
> iSCSI,
> >> > > > >>>>>>>>>>>>>>> >>>> is
> >> > > > that
> >> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> >> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>> Thanks!
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
> >> > > > >>>>>>>>>>>>>>> >>>> Sorensen
> >> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> >> > > > >>>>>>>>>>>>>>> >>>> wrote:
> >> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> >> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > http://libvirt.org/storage.html#StorageBackendISCSI
> >> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
> >> > server,
> >> > > > and
> >> > > > >>>>>>>>>>>>>>> >>>>> cannot be
> >> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
> >> > > > >>>>>>>>>>>>>>> >>>>> believe
> >> > > your
> >> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
> >> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
> logging
> >> > > > >>>>>>>>>>>>>>> >>>>> in
> >> > > and
> >> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> >> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work
> >> > > > >>>>>>>>>>>>>>> >>>>> in
> >> > the
> >> > > > Xen
> >> > > > >>>>>>>>>>>>>>> >>>>> stuff).
> >> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
> >> > > > >>>>>>>>>>>>>>> >>>>> provides
> >> > a
> >> > > > 1:1
> >> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> >> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
> device
> >> > > > >>>>>>>>>>>>>>> >>>>> as
> >> > a
> >> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> >> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
> more
> >> > about
> >> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> >> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write
> your
> >> > own
> >> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> >> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
> >> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
> >> > >  We
> >> > > > >>>>>>>>>>>>>>> >>>>> can cross that
> >> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> >> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
> >> > > > >>>>>>>>>>>>>>> >>>>> java
> >> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> >> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > > >>>>>>>>>>>>>>> >>>>>
> http://libvirt.org/sources/java/javadoc/ Normally,
> >> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
> >> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made
> to
> >> > that
> >> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> >> > > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see
> >> > > > >>>>>>>>>>>>>>> >>>>> how
> >> > > that
> >> > > > >>>>>>>>>>>>>>> >>>>> is done for
> >> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
> >> > > > >>>>>>>>>>>>>>> >>>>> java
> >> > > code
> >> > > > >>>>>>>>>>>>>>> >>>>> to see if you
> >> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
> iscsi
> >> > > storage
> >> > > > >>>>>>>>>>>>>>> >>>>> pools before you
> >> > > > >>>>>>>>>>>>>>> >>>>> get started.
> >> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
> >> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
> >> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> >> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
> >> > > > >>>>>>>>>>>>>>> >>>>> > more,
> >> > > but
> >> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
> >> > > > >>>>>>>>>>>>>>> >>>>> > supports
> >> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
> >> > > > >>>>>>>>>>>>>>> >>>>> > targets,
> >> > > > >>>>>>>>>>>>>>> >>>>> > right?
> >> > > > >>>>>>>>>>>>>>> >>>>> >
> >> > > > >>>>>>>>>>>>>>> >>>>> >
> >> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
> >> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
> >> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> >> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> >> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of
> the
> >> > > classes
> >> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> >> > > > >>>>>>>>>>>>>>> >>>>> >> last
> >> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> >> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
> >> > Sorensen
> >> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> >> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
> >> > > for
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
> >> > > > login.
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
> >> > and
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
> Tutkowski"
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
> >> > I
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> >> > > > framework
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
> >> > delete
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
> >> > > > mapping
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> >> > > > admin
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
> >> > would
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
> >> > needed
> >> > > > to
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
> >> > > work
> >> > > > on
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how
> I
> >> > will
> >> > > > need
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have
> to
> >> > > expect
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it
> for
> >> > this
> >> > > to
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
> >> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™
> >> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>>>> >> --
> >> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> >> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire
> Inc.
> >> > > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> >> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> >> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the
> cloud™
> >> > > > >>>>>>>>>>>>>>> >>>>> >
> >> > > > >>>>>>>>>>>>>>> >>>>> >
> >> > > > >>>>>>>>>>>>>>> >>>>> >
> >> > > > >>>>>>>>>>>>>>> >>>>> >
> >> > > > >>>>>>>>>>>>>>> >>>>> > --
> >> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> >> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire
> Inc.
> >> > > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> >> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> >> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the
> cloud™
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>>
> >> > > > >>>>>>>>>>>>>>> >>>> --
> >> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
> >> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> >> > > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> >> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
> >> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>>
> >> > > > >>>>>>>>>>>>>>> >>> --
> >> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
> >> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> >> > > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> >> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
> >> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> >> > > > >>>>>>>>>>>>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>
> >> > > > >>>>>>>>>>>>>>> >>
> >> > > > >>>>>>>>>>>>>>> >> --
> >> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
> >> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >> > > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> >> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
> >> > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> >> > > > >>>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>> --
> >> > > > >>>>>>>>>>>>>> Mike Tutkowski
> >> > > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >> > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >> > > > >>>>>>>>>>>>>> o: 303.746.7302
> >> > > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>>
> >> > > > >>>>>>>>>>>>> --
> >> > > > >>>>>>>>>>>>> Mike Tutkowski
> >> > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >> > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >> > > > >>>>>>>>>>>>> o: 303.746.7302
> >> > > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >> > > > >>>>>>>>>
> >> > > > >>>>>>>>>
> >> > > > >>>>>>>>>
> >> > > > >>>>>>>>>
> >> > > > >>>>>>>>> --
> >> > > > >>>>>>>>> Mike Tutkowski
> >> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >> > > > >>>>>>>>> e: mike.tutkowski@solidfire.com
> >> > > > >>>>>>>>> o: 303.746.7302
> >> > > > >>>>>>>>> Advancing the way the world uses the cloud™
> >> > > > >>>>>>
> >> > > > >>>>>>
> >> > > > >>>>>>
> >> > > > >>>>>>
> >> > > > >>>>>> --
> >> > > > >>>>>> Mike Tutkowski
> >> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
> >> > > > >>>>>> e: mike.tutkowski@solidfire.com
> >> > > > >>>>>> o: 303.746.7302
> >> > > > >>>>>> Advancing the way the world uses the cloud™
> >> > > > >>>
> >> > > > >>>
> >> > > > >>>
> >> > > > >>>
> >> > > > >>> --
> >> > > > >>> Mike Tutkowski
> >> > > > >>> Senior CloudStack Developer, SolidFire Inc.
> >> > > > >>> e: mike.tutkowski@solidfire.com
> >> > > > >>> o: 303.746.7302
> >> > > > >>> Advancing the way the world uses the cloud™
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > --
> >> > > > > Mike Tutkowski
> >> > > > > Senior CloudStack Developer, SolidFire Inc.
> >> > > > > e: mike.tutkowski@solidfire.com
> >> > > > > o: 303.746.7302
> >> > > > > Advancing the way the world uses the cloud™
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > > --
> >> > > *Mike Tutkowski*
> >> > > *Senior CloudStack Developer, SolidFire Inc.*
> >> > > e: mike.tutkowski@solidfire.com
> >> > > o: 303.746.7302
> >> > > Advancing the way the world uses the
> >> > > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > > *™*
> >> > >
> >> >
> >>
> >>
> >>
> >> --
> >> *Mike Tutkowski*
> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> e: mike.tutkowski@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the
> >> cloud<http://solidfire.com/solution/overview/?video=play>
> >> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
For what it's worth, OpenStack is quite a bit different.

All storage volumes are dynamically created (like what was enabled in 4.2)
and these volumes are directly attached to VMs (without going through the
hypervisor).

Since we go through the hypervisor, enabling a 1:1 mapping between a CS
volume and a SAN LUN required - for XenServer - the creation of an SR with
one VDI that took up as much space in the SR as it could.

Same idea with ESX(i), but - as we chatted about a bit - it's one datastore
having a VMDK file that took up as much space in the datastore as it could.

That's why - initially - I was under the impression we'd take the same
approach with KVM.

I'm glad, though, you presented the option about discovering the iSCSI
target and then attaching the resultant device as a raw device to the VM.

Since CS handles HA for KVM, that makes sense (you don't need the
equivalent then of the overhead we incur with an SR or datastore).
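
To make that option concrete, here is a minimal sketch of the agent-side steps: log
the host into the target with iscsiadm and hand the resulting block device to libvirt
as a raw disk. It is illustrative only - the portal, IQN, and target device name are
made-up placeholders, and this is not code from the plug-in.

import java.io.IOException;

public class RawIscsiDiskSketch {

    private static void run(String... cmd) throws IOException, InterruptedException {
        // Run a host command and fail loudly if it returns non-zero.
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + String.join(" ", cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        String portal = "10.10.1.20:3260";              // SAN portal (placeholder)
        String iqn = "iqn.2013-09.com.example:volume-1"; // target IQN (placeholder)

        // Discover and log in to the target (assumes Open iSCSI is installed on the host).
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

        // Prefer the stable by-path name over whatever /dev/sdX the kernel happens to assign.
        String device = "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";

        // Raw block-device disk definition - no filesystem, no QCOW2 - which could be
        // hot-plugged with virsh attach-device or folded into the domain XML at start time.
        String diskXml =
              "<disk type='block' device='disk'>\n"
            + "  <driver name='qemu' type='raw' cache='none'/>\n"
            + "  <source dev='" + device + "'/>\n"
            + "  <target dev='vdb' bus='virtio'/>\n"
            + "</disk>";
        System.out.println(diskXml);
    }
}

The by-path name also makes it easy to find the same session again and log out of it
when the volume is detached.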


On Tue, Sep 17, 2013 at 10:41 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> What you're saying here is definitely something we should talk about.
>
> Hopefully my previous e-mail has clarified how this works a bit.
>
> It mainly comes down to this:
>
> For the first time in CS history, primary storage is no longer required to
> be preallocated by the admin and then handed to CS. CS volumes don't have
> to share a preallocated volume anymore.
>
> As of 4.2, primary storage can be based on a SAN (or some other storage
> device). You can tell CS how many bytes and IOPS to use from this storage
> device and CS invokes the appropriate plug-in to carve out LUNs dynamically.
>
> Each LUN is home to one and only one data disk. Data disks - in this model
> - never share a LUN.
>
> The main use case for this is so a CS volume can deliver guaranteed IOPS
> if the storage device (ex. SolidFire SAN) delivers guaranteed IOPS on a
> LUN-by-LUN basis.
>
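
As a purely hypothetical sketch of the accounting that model implies (the class and
method names below are invented, not the actual SolidFire plug-in code): the plug-in
tracks the byte and IOPS budgets the admin handed to CloudStack and carves one LUN per
CloudStack volume out of them.

public class ManagedPrimaryStorageSketch {

    private final long capacityBytes; // bytes the admin told CloudStack it may use
    private final long capacityIops;  // IOPS the admin told CloudStack it may hand out
    private long usedBytes;
    private long usedIops;

    public ManagedPrimaryStorageSketch(long capacityBytes, long capacityIops) {
        this.capacityBytes = capacityBytes;
        this.capacityIops = capacityIops;
    }

    // Carve one LUN for one CloudStack volume (1:1), or fail if the budget is exhausted.
    public synchronized String createLun(String volumeName, long sizeBytes, long minIops) {
        if (usedBytes + sizeBytes > capacityBytes || usedIops + minIops > capacityIops) {
            throw new IllegalStateException("primary storage byte/IOPS budget exhausted");
        }
        usedBytes += sizeBytes;
        usedIops += minIops;
        // A real plug-in would call the SAN API here and record the returned IQN in the DB.
        return "iqn.2013-09.com.example:" + volumeName;
    }
}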
>
> On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> I guess whether or not a solidfire device is capable of hosting
>> multiple disk pools is irrelevant, we'd hope that we could get the
>> stats (maybe 30TB available, and 15TB allocated in LUNs). But if these
>> stats aren't collected, I can't as an admin define multiple pools and
>> expect cloudstack to allocate evenly from them or fill one up and move
>> to the next, because it doesn't know how big it is.
>>
>> Ultimately this discussion has nothing to do with the KVM stuff
>> itself, just a tangent, but something to think about.
>>
>> On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <sh...@gmail.com>
>> wrote:
>> > Ok, on most storage pools it shows how many GB free/used when listing
>> > the pool both via API and in the UI. I'm guessing those are empty then
>> > for the solid fire storage, but it seems like the user should have to
>> > define some sort of pool that the luns get carved out of, and you
>> > should be able to get the stats for that, right? Or is a solid fire
>> > appliance only one pool per appliance? This isn't about billing, but
>> > just so cloudstack itself knows whether or not there is space left on
>> > the storage device, so cloudstack can go on allocating from a
>> > different primary storage as this one fills up. There are also
>> > notifications and things. It seems like there should be a call you can
>> > handle for this, maybe Edison knows.
>> >
>> > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <sh...@gmail.com>
>> wrote:
>> >> You respond to more than attach and detach, right? Don't you create
>> luns as
>> >> well? Or are you just referring to the hypervisor stuff?
>> >>
>> >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
>> mike.tutkowski@solidfire.com>
>> >> wrote:
>> >>>
>> >>> Hi Marcus,
>> >>>
>> >>> I never need to respond to a CreateStoragePool call for either
>> XenServer
>> >>> or
>> >>> VMware.
>> >>>
>> >>> What happens is I respond only to the Attach- and Detach-volume
>> commands.
>> >>>
>> >>> Let's say an attach comes in:
>> >>>
>> >>> In this case, I check to see if the storage is "managed." Talking
>> >>> XenServer
>> >>> here, if it is, I log in to the LUN that is the disk we want to
>> attach.
>> >>> After, if this is the first time attaching this disk, I create an SR
>> and a
>> >>> VDI within the SR. If it is not the first time attaching this disk,
>> the
>> >>> LUN
>> >>> already has the SR and VDI on it.
>> >>>
>> >>> Once this is done, I let the normal "attach" logic run because this
>> logic
>> >>> expected an SR and a VDI and now it has it.
>> >>>
>> >>> It's the same thing for VMware: Just substitute datastore for SR and
>> VMDK
>> >>> for VDI.
>> >>>
>> >>> Does that make sense?
>> >>>
>> >>> Thanks!
>> >>>
>> >>>
>> >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
>> >>> <sh...@gmail.com>wrote:
>> >>>
>> >>> > What do you do with Xen? I imagine the user enter the SAN details
>> when
>> >>> > registering the pool? A the pool details are basically just
>> instructions
>> >>> > on
>> >>> > how to log into a target, correct?
>> >>> >
>> >>> > You can choose to log in a KVM host to the target during
>> >>> > createStoragePool
>> >>> > and save the pool in a map, or just save the pool info in a map for
>> >>> > future
>> >>> > reference by uuid, for when you do need to log in. The
>> createStoragePool
>> >>> > then just becomes a way to save the pool info to the agent.
>> Personally,
>> >>> > I'd
>> >>> > log in on the pool create and look/scan for specific luns when
>> they're
>> >>> > needed, but I haven't thought it through thoroughly. I just say that
>> >>> > mainly
>> >>> > because login only happens once, the first time the pool is used,
>> and
>> >>> > every
>> >>> > other storage command is about discovering new luns or maybe
>> >>> > deleting/disconnecting luns no longer needed. On the other hand, you
>> >>> > could
>> >>> > do all of the above: log in on pool create, then also check if
>> you're
>> >>> > logged in on other commands and log in if you've lost connection.
>> >>> >
>> >>> > With Xen, what does your registered pool   show in the UI for
>> avail/used
>> >>> > capacity, and how does it get that info? I assume there is some
>> sort of
>> >>> > disk pool that the luns are carved from, and that your plugin is
>> called
>> >>> > to
>> >>> > talk to the SAN and expose to the user how much of that pool has
>> been
>> >>> > allocated. Knowing how you already solves these problems with Xen
>> will
>> >>> > help
>> >>> > figure out what to do with KVM.
>> >>> >
>> >>> > If this is the case, I think the plugin can continue to handle it
>> rather
>> >>> > than getting details from the agent. I'm not sure if that means
>> nulls
>> >>> > are
>> >>> > OK for these on the agent side or what, I need to look at the
>> storage
>> >>> > plugin arch more closely.
>> >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
>> mike.tutkowski@solidfire.com>
>> >>> > wrote:
>> >>> >
>> >>> > > Hey Marcus,
>> >>> > >
>> >>> > > I'm reviewing your e-mails as I implement the necessary methods
>> in new
>> >>> > > classes.
>> >>> > >
>> >>> > > "So, referencing StorageAdaptor.java, createStoragePool accepts
>> all of
>> >>> > > the pool data (host, port, name, path) which would be used to log
>> the
>> >>> > > host into the initiator."
>> >>> > >
>> >>> > > Can you tell me, in my case, since a storage pool (primary
>> storage) is
>> >>> > > actually the SAN, I wouldn't really be logging into anything at
>> this
>> >>> > point,
>> >>> > > correct?
>> >>> > >
>> >>> > > Also, what kind of capacity, available, and used bytes make sense
>> to
>> >>> > report
>> >>> > > for KVMStoragePool (since KVMStoragePool represents the SAN in my
>> case
>> >>> > and
>> >>> > > not an individual LUN)?
>> >>> > >
>> >>> > > Thanks!
>> >>> > >
>> >>> > >
>> >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> >>> > > >wrote:
>> >>> > >
>> >>> > > > Ok, KVM will be close to that, of course, because only the
>> >>> > > > hypervisor
>> >>> > > > classes differ, the rest is all mgmt server. Creating a volume
>> is
>> >>> > > > just
>> >>> > > > a db entry until it's deployed for the first time.
>> >>> > > > AttachVolumeCommand
>> >>> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
>> >>> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a
>> KVM
>> >>> > > > StorageAdaptor) to log in the host to the target and then you
>> have a
>> >>> > > > block device.  Maybe libvirt will do that for you, but my quick
>> read
>> >>> > > > made it sound like the iscsi libvirt pool type is actually a
>> pool,
>> >>> > > > not
>> >>> > > > a lun or volume, so you'll need to figure out if that works or
>> if
>> >>> > > > you'll have to use iscsiadm commands.
>> >>> > > >
>> >>> > > > If you're NOT going to use LibvirtStorageAdaptor (because
>> Libvirt
>> >>> > > > doesn't really manage your pool the way you want), you're going
>> to
>> >>> > > > have to create a version of KVMStoragePool class and a
>> >>> > > > StorageAdaptor
>> >>> > > > class (see LibvirtStoragePool.java and
>> LibvirtStorageAdaptor.java),
>> >>> > > > implementing all of the methods, then in KVMStorageManager.java
>> >>> > > > there's a "_storageMapper" map. This is used to select the
>> correct
>> >>> > > > adaptor, you can see in this file that every call first pulls
>> the
>> >>> > > > correct adaptor out of this map via getStorageAdaptor. So you
>> can
>> >>> > > > see
>> >>> > > > a comment in this file that says "add other storage adaptors
>> here",
>> >>> > > > where it puts to this map, this is where you'd register your
>> >>> > > > adaptor.
>> >>> > > >
>> >>> > > > So, referencing StorageAdaptor.java, createStoragePool accepts
>> all
>> >>> > > > of
>> >>> > > > the pool data (host, port, name, path) which would be used to
>> log
>> >>> > > > the
>> >>> > > > host into the initiator. I *believe* the method getPhysicalDisk
>> will
>> >>> > > > need to do the work of attaching the lun.  AttachVolumeCommand
>> calls
>> >>> > > > this and then creates the XML diskdef and attaches it to the VM.
>> >>> > > > Now,
>> >>> > > > one thing you need to know is that createStoragePool is called
>> >>> > > > often,
>> >>> > > > sometimes just to make sure the pool is there. You may want to
>> >>> > > > create
>> >>> > > > a map in your adaptor class and keep track of pools that have
>> been
>> >>> > > > created, LibvirtStorageAdaptor doesn't have to do this because
>> it
>> >>> > > > asks
>> >>> > > > libvirt about which storage pools exist. There are also calls to
>> >>> > > > refresh the pool stats, and all of the other calls can be seen
>> in
>> >>> > > > the
>> >>> > > > StorageAdaptor as well. There's a createPhysical disk, clone,
>> etc,
>> >>> > > > but
>> >>> > > > it's probably a hold-over from 4.1, as I have the vague idea
>> that
>> >>> > > > volumes are created on the mgmt server via the plugin now, so
>> >>> > > > whatever
>> >>> > > > doesn't apply can just be stubbed out (or optionally
>> >>> > > > extended/reimplemented here, if you don't mind the hosts
>> talking to
>> >>> > > > the san api).
>> >>> > > >
>> >>> > > > There is a difference between attaching new volumes and
>> launching a
>> >>> > > > VM
>> >>> > > > with existing volumes.  In the latter case, the VM definition
>> that
>> >>> > > > was
>> >>> > > > passed to the KVM agent includes the disks, (StartCommand).
>> >>> > > >
>> >>> > > > I'd be interested in how your pool is defined for Xen, I
>> imagine it
>> >>> > > > would need to be kept the same. Is it just a definition to the
>> SAN
>> >>> > > > (ip address or some such, port number) and perhaps a volume pool
>> >>> > > > name?
>> >>> > > >
>> >>> > > > > If there is a way for me to update the ACL list on the SAN to
>> have
>> >>> > > only a
>> >>> > > > > single KVM host have access to the volume, that would be
>> ideal.
>> >>> > > >
>> >>> > > > That depends on your SAN API.  I was under the impression that
>> the
>> >>> > > > storage plugin framework allowed for acls, or for you to do
>> whatever
>> >>> > > > you want for create/attach/delete/snapshot, etc. You'd just call
>> >>> > > > your
>> >>> > > > SAN API with the host info for the ACLs prior to when the disk
>> is
>> >>> > > > attached (or the VM is started).  I'd have to look more at the
>> >>> > > > framework to know the details, in 4.1 I would do this in
>> >>> > > > getPhysicalDisk just prior to connecting up the LUN.
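
For what it's worth, a rough standalone skeleton along those lines might look like the
sketch below. It is not the real StorageAdaptor interface (check StorageAdaptor.java for
the actual method signatures); SanStoragePool and the iscsiadm helper are invented names,
and the by-path device naming is just one reasonable choice.

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SanStorageAdaptorSketch {

    public static class SanStoragePool {
        final String uuid;
        final String host;
        final int port;
        SanStoragePool(String uuid, String host, int port) {
            this.uuid = uuid;
            this.host = host;
            this.port = port;
        }
    }

    // createStoragePool is called often, so remember pools that were already registered.
    private final Map<String, SanStoragePool> pools = new ConcurrentHashMap<>();

    public SanStoragePool createStoragePool(String uuid, String host, int port) {
        return pools.computeIfAbsent(uuid, id -> new SanStoragePool(id, host, port));
    }

    // Log the host into the LUN's target and return the block device path for the disk def.
    public String getPhysicalDisk(String iqn, SanStoragePool pool)
            throws IOException, InterruptedException {
        String portal = pool.host + ":" + pool.port;
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
        return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
    }

    private static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + String.join(" ", cmd));
        }
    }
}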
>> >>> > > >
>> >>> > > >
>> >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> >>> > > > <mi...@solidfire.com> wrote:
>> >>> > > > > OK, yeah, the ACL part will be interesting. That is a bit
>> >>> > > > > different
>> >>> > > from
>> >>> > > > how
>> >>> > > > > it works with XenServer and VMware.
>> >>> > > > >
>> >>> > > > > Just to give you an idea how it works in 4.2 with XenServer:
>> >>> > > > >
>> >>> > > > > * The user creates a CS volume (this is just recorded in the
>> >>> > > > cloud.volumes
>> >>> > > > > table).
>> >>> > > > >
>> >>> > > > > * The user attaches the volume as a disk to a VM for the first
>> >>> > > > > time
>> >>> > (if
>> >>> > > > the
>> >>> > > > > storage allocator picks the SolidFire plug-in, the storage
>> >>> > > > > framework
>> >>> > > > invokes
>> >>> > > > > a method on the plug-in that creates a volume on the
>> SAN...info
>> >>> > > > > like
>> >>> > > the
>> >>> > > > IQN
>> >>> > > > > of the SAN volume is recorded in the DB).
>> >>> > > > >
>> >>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
>> executed.
>> >>> > > > > It
>> >>> > > > > determines based on a flag passed in that the storage in
>> question
>> >>> > > > > is
>> >>> > > > > "CloudStack-managed" storage (as opposed to "traditional"
>> >>> > preallocated
>> >>> > > > > storage). This tells it to discover the iSCSI target. Once
>> >>> > > > > discovered
>> >>> > > it
>> >>> > > > > determines if the iSCSI target already contains a storage
>> >>> > > > > repository
>> >>> > > (it
>> >>> > > > > would if this were a re-attach situation). If it does contain
>> an
>> >>> > > > > SR
>> >>> > > > already,
>> >>> > > > > then there should already be one VDI, as well. If there is no
>> SR,
>> >>> > > > > an
>> >>> > SR
>> >>> > > > is
>> >>> > > > > created and a single VDI is created within it (that takes up
>> about
>> >>> > > > > as
>> >>> > > > much
>> >>> > > > > space as was requested for the CloudStack volume).
>> >>> > > > >
>> >>> > > > > * The normal attach-volume logic continues (it depends on the
>> >>> > existence
>> >>> > > > of
>> >>> > > > > an SR and a VDI).
>> >>> > > > >
>> >>> > > > > The VMware case is essentially the same (mainly just
>> substitute
>> >>> > > datastore
>> >>> > > > > for SR and VMDK for VDI).
>> >>> > > > >
>> >>> > > > > In both cases, all hosts in the cluster have discovered the
>> iSCSI
>> >>> > > target,
>> >>> > > > > but only the host that is currently running the VM that is
>> using
>> >>> > > > > the
>> >>> > > VDI
>> >>> > > > (or
>> >>> > > > > VMKD) is actually using the disk.
>> >>> > > > >
>> >>> > > > > Live Migration should be OK because the hypervisors
>> communicate
>> >>> > > > > with
>> >>> > > > > whatever metadata they have on the SR (or datastore).
>> >>> > > > >
>> >>> > > > > I see what you're saying with KVM, though.
>> >>> > > > >
>> >>> > > > > In that case, the hosts are clustered only in CloudStack's
>> eyes.
>> >>> > > > > CS
>> >>> > > > controls
>> >>> > > > > Live Migration. You don't really need a clustered filesystem
>> on
>> >>> > > > > the
>> >>> > > LUN.
>> >>> > > > The
>> >>> > > > > LUN could be handed over raw to the VM using it.
>> >>> > > > >
>> >>> > > > > If there is a way for me to update the ACL list on the SAN to
>> have
>> >>> > > only a
>> >>> > > > > single KVM host have access to the volume, that would be
>> ideal.
>> >>> > > > >
>> >>> > > > > Also, I agree I'll need to use iscsiadm to discover and log
>> in to
>> >>> > > > > the
>> >>> > > > iSCSI
>> >>> > > > > target. I'll also need to take the resultant new device and
>> pass
>> >>> > > > > it
>> >>> > > into
>> >>> > > > the
>> >>> > > > > VM.
>> >>> > > > >
>> >>> > > > > Does this sound reasonable? Please call me out on anything I
>> seem
>> >>> > > > incorrect
>> >>> > > > > about. :)
>> >>> > > > >
>> >>> > > > > Thanks for all the thought on this, Marcus!
>> >>> > > > >
>> >>> > > > >
>> >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
>> >>> > shadowsor@gmail.com>
>> >>> > > > > wrote:
>> >>> > > > >>
>> >>> > > > >> Perfect. You'll have a domain def ( the VM), a disk def, and
>> the
>> >>> > > attach
>> >>> > > > >> the disk def to the vm. You may need to do your own
>> >>> > > > >> StorageAdaptor
>> >>> > and
>> >>> > > > run
>> >>> > > > >> iscsiadm commands to accomplish that, depending on how the
>> >>> > > > >> libvirt
>> >>> > > iscsi
>> >>> > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't
>> how it
>> >>> > > works
>> >>> > > > on
>> >>> > > > >> xen at the moment, nor is it ideal.
>> >>> > > > >>
>> >>> > > > >> Your plugin will handle acls as far as which host can see
>> which
>> >>> > > > >> luns
>> >>> > > as
>> >>> > > > >> well, I remember discussing that months ago, so that a disk
>> won't
>> >>> > > > >> be
>> >>> > > > >> connected until the hypervisor has exclusive access, so it
>> will
>> >>> > > > >> be
>> >>> > > safe
>> >>> > > > and
>> >>> > > > >> fence the disk from rogue nodes that cloudstack loses
>> >>> > > > >> connectivity
>> >>> > > > with. It
>> >>> > > > >> should revoke access to everything but the target host...
>> Except
>> >>> > > > >> for
>> >>> > > > during
>> >>> > > > >> migration but we can discuss that later, there's a migration
>> prep
>> >>> > > > process
>> >>> > > > >> where the new host can be added to the acls, and the old
>> host can
>> >>> > > > >> be
>> >>> > > > removed
>> >>> > > > >> post migration.
>> >>> > > > >>
>> >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>> >>> > > mike.tutkowski@solidfire.com
>> >>> > > > >
>> >>> > > > >> wrote:
>> >>> > > > >>>
>> >>> > > > >>> Yeah, that would be ideal.
>> >>> > > > >>>
>> >>> > > > >>> So, I would still need to discover the iSCSI target, log in
>> to
>> >>> > > > >>> it,
>> >>> > > then
>> >>> > > > >>> figure out what /dev/sdX was created as a result (and leave
>> it
>> >>> > > > >>> as
>> >>> > is
>> >>> > > -
>> >>> > > > do
>> >>> > > > >>> not format it with any file system...clustered or not). I
>> would
>> >>> > pass
>> >>> > > > that
>> >>> > > > >>> device into the VM.
>> >>> > > > >>>
>> >>> > > > >>> Kind of accurate?
>> >>> > > > >>>
>> >>> > > > >>>
>> >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
>> >>> > > shadowsor@gmail.com>
>> >>> > > > >>> wrote:
>> >>> > > > >>>>
>> >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
>> definitions.
>> >>> > There
>> >>> > > > are
>> >>> > > > >>>> ones that work for block devices rather than files. You can
>> >>> > > > >>>> piggy
>> >>> > > > back off
>> >>> > > > >>>> of the existing disk definitions and attach it to the vm
>> as a
>> >>> > block
>> >>> > > > device.
>> >>> > > > >>>> The definition is an XML string per libvirt XML format.
>> You may
>> >>> > want
>> >>> > > > to use
>> >>> > > > >>>> an alternate path to the disk rather than just /dev/sdx
>> like I
>> >>> > > > mentioned,
>> >>> > > > >>>> there are by-id paths to the block devices, as well as
>> other
>> >>> > > > >>>> ones
>> >>> > > > that will
>> >>> > > > >>>> be consistent and easier for management, not sure how
>> familiar
>> >>> > > > >>>> you
>> >>> > > > are with
>> >>> > > > >>>> device naming on Linux.
>> >>> > > > >>>>
>> >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>> >>> > > > >>>> <sh...@gmail.com>
>> >>> > > > wrote:
>> >>> > > > >>>>>
>> >>> > > > >>>>> No, as that would rely on virtualized network/iscsi
>> initiator
>> >>> > > inside
>> >>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your
>> lun on
>> >>> > > > hypervisor) as
>> >>> > > > >>>>> a disk to the VM, rather than attaching some image file
>> that
>> >>> > > resides
>> >>> > > > on a
>> >>> > > > >>>>> filesystem, mounted on the host, living on a target.
>> >>> > > > >>>>>
>> >>> > > > >>>>> Actually, if you plan on the storage supporting live
>> migration
>> >>> > > > >>>>> I
>> >>> > > > think
>> >>> > > > >>>>> this is the only way. You can't put a filesystem on it and
>> >>> > > > >>>>> mount
>> >>> > it
>> >>> > > > in two
>> >>> > > > >>>>> places to facilitate migration unless its a clustered
>> >>> > > > >>>>> filesystem,
>> >>> > > in
>> >>> > > > which
>> >>> > > > >>>>> case you're back to shared mount point.
>> >>> > > > >>>>>
>> >>> > > > >>>>> As far as I'm aware, the xenserver SR style is basically
>> LVM
>> >>> > with a
>> >>> > > > xen
>> >>> > > > >>>>> specific cluster management, a custom CLVM. They don't
>> use a
>> >>> > > > filesystem
>> >>> > > > >>>>> either.
>> >>> > > > >>>>>
>> >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>> >>> > > > >>>>> <mi...@solidfire.com> wrote:
>> >>> > > > >>>>>>
>> >>> > > > >>>>>> When you say, "wire up the lun directly to the vm," do
>> you
>> >>> > > > >>>>>> mean
>> >>> > > > >>>>>> circumventing the hypervisor? I didn't think we could do
>> that
>> >>> > > > >>>>>> in
>> >>> > > CS.
>> >>> > > > >>>>>> OpenStack, on the other hand, always circumvents the
>> >>> > > > >>>>>> hypervisor,
>> >>> > > as
>> >>> > > > far as I
>> >>> > > > >>>>>> know.
>> >>> > > > >>>>>>
>> >>> > > > >>>>>>
>> >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
>> >>> > > > shadowsor@gmail.com>
>> >>> > > > >>>>>> wrote:
>> >>> > > > >>>>>>>
>> >>> > > > >>>>>>> Better to wire up the lun directly to the vm unless
>> there is
>> >>> > > > >>>>>>> a
>> >>> > > good
>> >>> > > > >>>>>>> reason not to.
>> >>> > > > >>>>>>>
>> >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
>> >>> > shadowsor@gmail.com>
>> >>> > > > >>>>>>> wrote:
>> >>> > > > >>>>>>>>
>> >>> > > > >>>>>>>> You could do that, but as mentioned I think its a
>> mistake
>> >>> > > > >>>>>>>> to
>> >>> > go
>> >>> > > to
>> >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to
>> luns
>> >>> > and
>> >>> > > > then putting
>> >>> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a
>> QCOW2
>> >>> > > > >>>>>>>> or
>> >>> > > even
>> >>> > > > RAW disk
>> >>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops
>> along
>> >>> > > > >>>>>>>> the
>> >>> > > > way, and have
>> >>> > > > >>>>>>>> more overhead with the filesystem and its journaling,
>> etc.
>> >>> > > > >>>>>>>>
>> >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>> >>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
>> >>> > > > >>>>>>>>>
>> >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM
>> with
>> >>> > CS.
>> >>> > > > >>>>>>>>>
>> >>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today
>> is by
>> >>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the
>> location of
>> >>> > > > >>>>>>>>> the
>> >>> > > > share.
>> >>> > > > >>>>>>>>>
>> >>> > > > >>>>>>>>> They can set up their share using Open iSCSI by
>> >>> > > > >>>>>>>>> discovering
>> >>> > > their
>> >>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
>> somewhere
>> >>> > > > >>>>>>>>> on
>> >>> > > > their file
>> >>> > > > >>>>>>>>> system.
>> >>> > > > >>>>>>>>>
>> >>> > > > >>>>>>>>> Would it make sense for me to just do that discovery,
>> >>> > > > >>>>>>>>> logging
>> >>> > > in,
>> >>> > > > >>>>>>>>> and mounting behind the scenes for them and letting
>> the
>> >>> > current
>> >>> > > > code manage
>> >>> > > > >>>>>>>>> the rest as it currently does?
>> >>> > > > >>>>>>>>>
>> >>> > > > >>>>>>>>>
>> >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>> >>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
>> >>> > > > >>>>>>>>>>
>> >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need
>> to
>> >>> > catch
>> >>> > > up
>> >>> > > > >>>>>>>>>> on the work done in KVM, but this is basically just
>> disk
>> >>> > > > snapshots + memory
>> >>> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably
>> be
>> >>> > handled
>> >>> > > > by the SAN,
>> >>> > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
>> >>> > something
>> >>> > > > else. This is
>> >>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will
>> want to
>> >>> > see
>> >>> > > > how others are
>> >>> > > > >>>>>>>>>> planning theirs.
>> >>> > > > >>>>>>>>>>
>> >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
>> >>> > > shadowsor@gmail.com
>> >>> > > > >
>> >>> > > > >>>>>>>>>> wrote:
>> >>> > > > >>>>>>>>>>>
>> >>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
>> >>> > > > >>>>>>>>>>> style
>> >>> > on
>> >>> > > > an
>> >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
>> >>> > > > >>>>>>>>>>> format.
>> >>> > > > Otherwise you're
>> >>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
>> creating
>> >>> > > > >>>>>>>>>>> a
>> >>> > > > QCOW2 disk image,
>> >>> > > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
>> >>> > > > >>>>>>>>>>>
>> >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk
>> to the
>> >>> > VM,
>> >>> > > > and
>> >>> > > > >>>>>>>>>>> handling snapshots on the San side via the storage
>> >>> > > > >>>>>>>>>>> plugin
>> >>> > is
>> >>> > > > best. My
>> >>> > > > >>>>>>>>>>> impression from the storage plugin refactor was that
>> >>> > > > >>>>>>>>>>> there
>> >>> > > was
>> >>> > > > a snapshot
>> >>> > > > >>>>>>>>>>> service that would allow the San to handle
>> snapshots.
>> >>> > > > >>>>>>>>>>>
>> >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
>> >>> > > > shadowsor@gmail.com>
>> >>> > > > >>>>>>>>>>> wrote:
>> >>> > > > >>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
>> back
>> >>> > end,
>> >>> > > > if
>> >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
>> could
>> >>> > > > >>>>>>>>>>>> call
>> >>> > > > your plugin for
>> >>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor
>> agnostic. As
>> >>> > far
>> >>> > > > as space, that
>> >>> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With
>> ours, we
>> >>> > carve
>> >>> > > > out luns from a
>> >>> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool
>> and is
>> >>> > > > independent of the
>> >>> > > > >>>>>>>>>>>> LUN size the host sees.
>> >>> > > > >>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>> >>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> Hey Marcus,
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
>> libvirt
>> >>> > > > >>>>>>>>>>>>> won't
>> >>> > > > work
>> >>> > > > >>>>>>>>>>>>> when you take into consideration hypervisor
>> snapshots?
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor
>> snapshot, the
>> >>> > VDI
>> >>> > > > for
>> >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
>> repository
>> >>> > > > >>>>>>>>>>>>> as
>> >>> > > the
>> >>> > > > volume is on.
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
>> >>> > > > >>>>>>>>>>>>> XenServer
>> >>> > > and
>> >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>> >>> > snapshots
>> >>> > > > in 4.2) is I'd
>> >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the
>> user
>> >>> > > > requested for the
>> >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
>> >>> > > > >>>>>>>>>>>>> thinly
>> >>> > > > provisions volumes,
>> >>> > > > >>>>>>>>>>>>> so the space is not actually used unless it needs
>> to
>> >>> > > > >>>>>>>>>>>>> be).
>> >>> > > > The CloudStack
>> >>> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN
>> volume
>> >>> > until a
>> >>> > > > hypervisor
>> >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also
>> reside on
>> >>> > > > >>>>>>>>>>>>> the
>> >>> > > > SAN volume.
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
>> >>> > > > >>>>>>>>>>>>> creation
>> >>> > of
>> >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
>> even
>> >>> > > > >>>>>>>>>>>>> if
>> >>> > > > there were support
>> >>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN
>> per
>> >>> > > > >>>>>>>>>>>>> iSCSI
>> >>> > > > target), then I
>> >>> > > > >>>>>>>>>>>>> don't see how using this model will work.
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way
>> this
>> >>> > > works
>> >>> > > > >>>>>>>>>>>>> with DIR?
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> What do you think?
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> Thanks
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>> >>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> >>> > > > >>>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
>> access
>> >>> > > today.
>> >>> > > > >>>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I
>> might as
>> >>> > well
>> >>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>> >>> > > > >>>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>> >>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> >>> > > > >>>>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I
>> believe
>> >>> > > > >>>>>>>>>>>>>>> it
>> >>> > > just
>> >>> > > > >>>>>>>>>>>>>>> acts like a
>> >>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to
>> that. The
>> >>> > > > end-user
>> >>> > > > >>>>>>>>>>>>>>> is
>> >>> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all
>> KVM
>> >>> > hosts
>> >>> > > > can
>> >>> > > > >>>>>>>>>>>>>>> access,
>> >>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is
>> providing the
>> >>> > > > storage.
>> >>> > > > >>>>>>>>>>>>>>> It could
>> >>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>> >>> > > > >>>>>>>>>>>>>>> filesystem,
>> >>> > > > >>>>>>>>>>>>>>> cloudstack just
>> >>> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
>> >>> > > > >>>>>>>>>>>>>>> images.
>> >>> > > > >>>>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>> >>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> >>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all
>> at the
>> >>> > same
>> >>> > > > >>>>>>>>>>>>>>> > time.
>> >>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
>> >>> > > > >>>>>>>>>>>>>>> >
>> >>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
>> Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>> >>> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage
>> pools:
>> >>> > > > >>>>>>>>>>>>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> >>> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
>> >>> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
>> >>> > > > >>>>>>>>>>>>>>> >> default              active     yes
>> >>> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
>> >>> > > > >>>>>>>>>>>>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
>> Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
>> pool
>> >>> > based
>> >>> > > on
>> >>> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only
>> have one
>> >>> > LUN,
>> >>> > > > so
>> >>> > > > >>>>>>>>>>>>>>> >>> there would only
>> >>> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>> >>> > > (libvirt)
>> >>> > > > >>>>>>>>>>>>>>> >>> storage pool.
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
>> >>> > > > >>>>>>>>>>>>>>> >>> iSCSI
>> >>> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
>> >>> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
>> >>> > > > >>>>>>>>>>>>>>> >>> libvirt
>> >>> > > does
>> >>> > > > >>>>>>>>>>>>>>> >>> not support
>> >>> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to
>> see
>> >>> > > > >>>>>>>>>>>>>>> >>> if
>> >>> > > > libvirt
>> >>> > > > >>>>>>>>>>>>>>> >>> supports
>> >>> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
>> mentioned,
>> >>> > since
>> >>> > > > >>>>>>>>>>>>>>> >>> each one of its
>> >>> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>> >>> > > > targets/LUNs).
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
>> Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>> >>> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>> >>> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>         }
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>         @Override
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>         }
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>     }
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
>> >>> > > > >>>>>>>>>>>>>>> >>>> currently
>> >>> > > being
>> >>> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
>> >>> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting
>> at.
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
>> >>> > > > >>>>>>>>>>>>>>> >>>> someone
>> >>> > > > >>>>>>>>>>>>>>> >>>> selects the
>> >>> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
>> iSCSI,
>> >>> > > > >>>>>>>>>>>>>>> >>>> is
>> >>> > > > that
>> >>> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
>> >>> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>> Thanks!
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
>> >>> > > > >>>>>>>>>>>>>>> >>>> Sorensen
>> >>> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>> >>> > > > >>>>>>>>>>>>>>> >>>> wrote:
>> >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
>> >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >>> > > http://libvirt.org/storage.html#StorageBackendISCSI
>> >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the
>> iSCSI
>> >>> > server,
>> >>> > > > and
>> >>> > > > >>>>>>>>>>>>>>> >>>>> cannot be
>> >>> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
>> >>> > > > >>>>>>>>>>>>>>> >>>>> believe
>> >>> > > your
>> >>> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
>> >>> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
>> logging
>> >>> > > > >>>>>>>>>>>>>>> >>>>> in
>> >>> > > and
>> >>> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
>> >>> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
>> work
>> >>> > > > >>>>>>>>>>>>>>> >>>>> in
>> >>> > the
>> >>> > > > Xen
>> >>> > > > >>>>>>>>>>>>>>> >>>>> stuff).
>> >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>> >>> > > > >>>>>>>>>>>>>>> >>>>> provides
>> >>> > a
>> >>> > > > 1:1
>> >>> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
>> >>> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
>> device
>> >>> > > > >>>>>>>>>>>>>>> >>>>> as
>> >>> > a
>> >>> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
>> >>> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
>> more
>> >>> > about
>> >>> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write
>> your
>> >>> > own
>> >>> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
>> >>> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
>> >>> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>> >>> > >  We
>> >>> > > > >>>>>>>>>>>>>>> >>>>> can cross that
>> >>> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
>> >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see
>> the
>> >>> > > > >>>>>>>>>>>>>>> >>>>> java
>> >>> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
>> >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>>
>> http://libvirt.org/sources/java/javadoc/Normally,
>> >>> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
>> >>> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls
>> made to
>> >>> > that
>> >>> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
>> >>> > > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to
>> see
>> >>> > > > >>>>>>>>>>>>>>> >>>>> how
>> >>> > > that
>> >>> > > > >>>>>>>>>>>>>>> >>>>> is done for
>> >>> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some
>> test
>> >>> > > > >>>>>>>>>>>>>>> >>>>> java
>> >>> > > code
>> >>> > > > >>>>>>>>>>>>>>> >>>>> to see if you
>> >>> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
>> iscsi
>> >>> > > storage
>> >>> > > > >>>>>>>>>>>>>>> >>>>> pools before you
>> >>> > > > >>>>>>>>>>>>>>> >>>>> get started.
>> >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
>> >>> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate
>> libvirt
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > more,
>> >>> > > but
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > supports
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > targets,
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > right?
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of
>> the
>> >>> > > classes
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> last
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
>> >>> > Sorensen
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need
>> the
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
>> >>> > > for
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
>> >>> > > > login.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>> >>> > and
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
>> Tutkowski"
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
>> >>> > I
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
>> storage
>> >>> > > > framework
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create
>> and
>> >>> > delete
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish
>> a
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
>> >>> > > > mapping
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for
>> QoS.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
>> expected
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> >>> > > > admin
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
>> volumes
>> >>> > would
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
>> friendly).
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme
>> work, I
>> >>> > needed
>> >>> > > > to
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
>> with
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
>> >>> > > work
>> >>> > > > on
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
>> how I
>> >>> > will
>> >>> > > > need
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
>> have to
>> >>> > > expect
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it
>> for
>> >>> > this
>> >>> > > to
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer,
>> SolidFire
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> --
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire
>> Inc.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the
>> cloud™
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > --
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire
>> Inc.
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>> >>> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the
>> cloud™
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>>
>> >>> > > > >>>>>>>>>>>>>>> >>>> --
>> >>> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> >>> > > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>> >>> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
>> >>> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>>
>> >>> > > > >>>>>>>>>>>>>>> >>> --
>> >>> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>> >>> > > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>> >>> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
>> >>> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>> >>> > > > >>>>>>>>>>>>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >>
>> >>> > > > >>>>>>>>>>>>>>> >> --
>> >>> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
>> >>> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>> >>> > > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>> >>> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
>> >>> > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>> >>> > > > >>>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>> --
>> >>> > > > >>>>>>>>>>>>>> Mike Tutkowski
>> >>> > > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> >>> > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> >>> > > > >>>>>>>>>>>>>> o: 303.746.7302
>> >>> > > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>>
>> >>> > > > >>>>>>>>>>>>> --
>> >>> > > > >>>>>>>>>>>>> Mike Tutkowski
>> >>> > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> >>> > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> >>> > > > >>>>>>>>>>>>> o: 303.746.7302
>> >>> > > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
>> >>> > > > >>>>>>>>>
>> >>> > > > >>>>>>>>>
>> >>> > > > >>>>>>>>>
>> >>> > > > >>>>>>>>>
>> >>> > > > >>>>>>>>> --
>> >>> > > > >>>>>>>>> Mike Tutkowski
>> >>> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> >>> > > > >>>>>>>>> e: mike.tutkowski@solidfire.com
>> >>> > > > >>>>>>>>> o: 303.746.7302
>> >>> > > > >>>>>>>>> Advancing the way the world uses the cloud™
>> >>> > > > >>>>>>
>> >>> > > > >>>>>>
>> >>> > > > >>>>>>
>> >>> > > > >>>>>>
>> >>> > > > >>>>>> --
>> >>> > > > >>>>>> Mike Tutkowski
>> >>> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
>> >>> > > > >>>>>> e: mike.tutkowski@solidfire.com
>> >>> > > > >>>>>> o: 303.746.7302
>> >>> > > > >>>>>> Advancing the way the world uses the cloud™
>> >>> > > > >>>
>> >>> > > > >>>
>> >>> > > > >>>
>> >>> > > > >>>
>> >>> > > > >>> --
>> >>> > > > >>> Mike Tutkowski
>> >>> > > > >>> Senior CloudStack Developer, SolidFire Inc.
>> >>> > > > >>> e: mike.tutkowski@solidfire.com
>> >>> > > > >>> o: 303.746.7302
>> >>> > > > >>> Advancing the way the world uses the cloud™
>> >>> > > > >
>> >>> > > > >
>> >>> > > > >
>> >>> > > > >
>> >>> > > > > --
>> >>> > > > > Mike Tutkowski
>> >>> > > > > Senior CloudStack Developer, SolidFire Inc.
>> >>> > > > > e: mike.tutkowski@solidfire.com
>> >>> > > > > o: 303.746.7302
>> >>> > > > > Advancing the way the world uses the cloud™
>> >>> > > >
>> >>> > >
>> >>> > >
>> >>> > >
>> >>> > > --
>> >>> > > *Mike Tutkowski*
>> >>> > > *Senior CloudStack Developer, SolidFire Inc.*
>> >>> > > e: mike.tutkowski@solidfire.com
>> >>> > > o: 303.746.7302
>> >>> > > Advancing the way the world uses the
>> >>> > > cloud<http://solidfire.com/solution/overview/?video=play>
>> >>> > > *™*
>> >>> > >
>> >>> >
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> *Mike Tutkowski*
>> >>> *Senior CloudStack Developer, SolidFire Inc.*
>> >>> e: mike.tutkowski@solidfire.com
>> >>> o: 303.746.7302
>> >>> Advancing the way the world uses the
>> >>> cloud<http://solidfire.com/solution/overview/?video=play>
>> >>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Well, if you've swapped out all of the INFO to DEBUG in
/etc/cloudstack/agent/log4j-cloud.xml and restarted the agent, the
agent will either spew messages about being unable to connect to the
mgmt server, or crash, or run just fine (in which case you have no
problem). The logs in debug should tell you which. If you see periodic
calls coming from the mgmt server (like GetStorageStats) then it's
working fine. You can 'ps -ef | grep jsvc' to see if it's even alive
as a process as well.

You could send your /etc/cloudstack/agent/agent.properties as well...
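
For example, something along these lines is what I have in mind (a rough
sketch; adjust paths and service names for your distro, and keep the .bak
copy around since the sed is a blunt instrument):

# switch every log4j level from INFO to DEBUG, then bounce the agent
sudo sed -i.bak 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
sudo service cloudstack-agent restart

# is the jsvc-wrapped agent process even alive?
ps -ef | grep jsvc

# look for periodic commands from the mgmt server, e.g. GetStorageStats
grep -i getstoragestats /var/log/cloudstack/agent/agent.log | tail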

On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> Hey Marcus,
>
> I've been investigating my issue with not being able to add a KVM host to
> CS.
>
> For what it's worth, this comes back successful:
>
> SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent " +
> parameters, 3);
>
> This is what the command looks like:
>
> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0 --prvNic=cloudbr0
> --guestNic=cloudbr0
>
> The problem is this method in LibvirtServerDiscoverer never finds a
> matching host in the DB:
>
> waitForHostConnect(long dcId, long podId, long clusterId, String guid)
>
> I assume once the KVM host is up and running that it's supposed to call
> into the CS MS so the DB can be updated as such?
>
> If so, the problem must be on the KVM side.
>
> I did run this again (from the KVM host) to see if the connection was in
> place:
>
> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>
> Trying 192.168.233.1...
>
> Connected to 192.168.233.1.
>
> Escape character is '^]'.
> So that looks good.
>
> I turned on more info in the debug log, but nothing obvious jumps out as of
> yet.
>
> If you have any thoughts on this, please shoot them my way. :)
>
> Thanks!
>
>
> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> First step is for me to get this working for KVM, though. :)
>>
>> Once I do that, I can perhaps make modifications to the storage framework
>> and hypervisor plug-ins to refactor the logic and such.
>>
>>
>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> Same would work for KVM.
>>>
>>> If CreateCommand and DestroyCommand were called at the appropriate times
>>> by the storage framework, I could move my connect and disconnect logic out
>>> of the attach/detach logic.
>>>
>>>
>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
>>> mike.tutkowski@solidfire.com> wrote:
>>>
>>>> Conversely, if the storage framework called the DestroyCommand for
>>>> managed storage after the DetachCommand, then I could have had my remove
>>>> SR/datastore logic placed in the DestroyCommand handling rather than in the
>>>> DetachCommand handling.
>>>>
>>>>
>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>>>> mike.tutkowski@solidfire.com> wrote:
>>>>
>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
>>>>>
>>>>> The initial approach that was discussed during 4.2 was for me to modify
>>>>> the attach/detach logic only in the XenServer and VMware hypervisor
>>>>> plug-ins.
>>>>>
>>>>> Now that I think about it more, though, I kind of would have liked to
>>>>> have the storage framework send a CreateCommand to the hypervisor before
>>>>> sending the AttachCommand if the storage in question was managed.
>>>>>
>>>>> Then I could have created my SR/datastore in the CreateCommand and the
>>>>> AttachCommand would have had the SR/datastore that it was always expecting
>>>>> (and I wouldn't have had to create the SR/datastore in the AttachCommand).
>>>>>
>>>>>
>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>>>
>>>>>> Yeah, I think it probably is as well, but I figured you'd be in a
>>>>>> better position to tell.
>>>>>>
>>>>>> I see that copyAsync is unsupported in your current 4.2 driver, does
>>>>>> that mean that there's no template support? Or is it some other call
>>>>>> that does templating now? I'm still getting up to speed on all of the
>>>>>> 4.2 changes. I was just looking at CreateCommand in
>>>>>> LibvirtComputingResource, since that's the only place
>>>>>> createPhysicalDisk is called, and it occurred to me that CreateCommand
>>>>>> might be skipped altogether when utilizing storage plugins.
>>>>>>
>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>>>>>> <mi...@solidfire.com> wrote:
>>>>>> > That's an interesting comment, Marcus.
>>>>>> >
>>>>>> > It was my intent that it should work with any CloudStack "managed"
>>>>>> storage
>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote the
>>>>>> code so
>>>>>> > CHAP didn't have to be used.
>>>>>> >
>>>>>> > As I'm doing my testing, I can try to think about whether it is
>>>>>> generic
>>>>>> > enough to keep those names or not.
>>>>>> >
>>>>>> > My expectation is that it is generic enough.
>>>>>> >
>>>>>> >
>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
>>>>>> shadowsor@gmail.com>wrote:
>>>>>> >
>>>>>> >> I added a comment to your diff. In general I think it looks good,
>>>>>> >> though I obviously can't vouch for whether or not it will work. One
>>>>>> >> thing I do have reservations about is the adaptor/pool naming. If you
>>>>>> >> think the code is generic enough that it will work for anyone who does
>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's anything
>>>>>> >> about it that's specific to YOUR iscsi target or how it likes to be
>>>>>> >> treated then I'd say that they should be named something less generic
>>>>>> >> than iScsiAdmStorage.
>>>>>> >>
>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>> >> > Great - thanks!
>>>>>> >> >
>>>>>> >> > Just to give you an overview of what my code does (for when you
>>>>>> get a
>>>>>> >> > chance to review it):
>>>>>> >> >
>>>>>> >> > SolidFireHostListener is registered in
>>>>>> SolidfirePrimaryDataStoreProvider.
>>>>>> >> > Its hostConnect method is invoked when a host connects with the
>>>>>> CS MS. If
>>>>>> >> > the host is running KVM, the listener sends a
>>>>>> ModifyStoragePoolCommand to
>>>>>> >> > the host. This logic was based off of DefaultHostListener.
>>>>>> >> >
>>>>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
>>>>>> KVMStoragePoolManager
>>>>>> >> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor
>>>>>> (which
>>>>>> >> was
>>>>>> >> > registered in the constructor for KVMStoragePoolManager under the
>>>>>> key of
>>>>>> >> > StoragePoolType.Iscsi.toString()).
>>>>>> >> >
>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the pointer to
>>>>>> the
>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the UUID of the
>>>>>> storage
>>>>>> >> > pool.
>>>>>> >> >
>>>>>> >> > When a volume is attached, createPhysicalDisk is invoked for
>>>>>> managed
>>>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
>>>>>> iscsiadm to
>>>>>> >> > establish the iSCSI connection to the volume on the SAN and a
>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach logic that
>>>>>> follows.
>>>>>> >> >
>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked with the
>>>>>> IQN of the
>>>>>> >> > volume if the storage pool in question is managed storage.
>>>>>> Otherwise, the
>>>>>> >> > normal vol.getPath() is used.
>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used in the
>>>>>> detach logic.
>>>>>> >> >
>>>>>> >> > Once the volume has been detached,
>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
>>>>>> >> > is invoked if the storage pool is managed. deletePhysicalDisk
>>>>>> removes the
>>>>>> >> > iSCSI connection to the volume using iscsiadm.
>>>>>> >> >
>>>>>> >> >
>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>>>>>> shadowsor@gmail.com
>>>>>> >> >wrote:
>>>>>> >> >
>>>>>> >> >> Its the log4j properties file in /etc/cloudstack/agent change
>>>>>> all INFO
>>>>>> >> to
>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can tail
>>>>>> the log
>>>>>> >> when
>>>>>> >> >> you try to start the service, or maybe it will spit something
>>>>>> out into
>>>>>> >> one
>>>>>> >> >> of the other files in /var/log/cloudstack/agent
>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>>>>>> mike.tutkowski@solidfire.com
>>>>>> >> >
>>>>>> >> >> wrote:
>>>>>> >> >>
>>>>>> >> >> > This is how I've been trying to query for the status of the
>>>>>> service (I
>>>>>> >> >> > assume it could be started this way, as well, by changing
>>>>>> "status" to
>>>>>> >> >> > "start" or "restart"?):
>>>>>> >> >> >
>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>>> /usr/sbin/service
>>>>>> >> >> > cloudstack-agent status
>>>>>> >> >> >
>>>>>> >> >> > I get this back:
>>>>>> >> >> >
>>>>>> >> >> > Failed to execute: * could not access PID file for
>>>>>> cloudstack-agent
>>>>>> >> >> >
>>>>>> >> >> > I've made a bunch of code changes recently, though, so I think
>>>>>> I'm
>>>>>> >> going
>>>>>> >> >> to
>>>>>> >> >> > rebuild and redeploy everything.
>>>>>> >> >> >
>>>>>> >> >> > The debug info sounds helpful. Where can I set enable.debug?
>>>>>> >> >> >
>>>>>> >> >> > Thanks, Marcus!
>>>>>> >> >> >
>>>>>> >> >> >
>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>>>>>> shadowsor@gmail.com
>>>>>> >> >> > >wrote:
>>>>>> >> >> >
>>>>>> >> >> > > OK, will check it out in the next few days. As mentioned,
>>>>>> you can
>>>>>> >> set
>>>>>> >> >> up
>>>>>> >> >> > > your Ubuntu vm as the management server as well if all else
>>>>>> fails.
>>>>>> >>  If
>>>>>> >> >> > you
>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM host, then
>>>>>> you need
>>>>>> >> to
>>>>>> >> >> > > enable.debug on the agent. It won't run without complaining
>>>>>> loudly
>>>>>> >> if
>>>>>> >> >> it
>>>>>> >> >> > > can't get to the mgmt server, and I didn't see that in your
>>>>>> agent
>>>>>> >> log,
>>>>>> >> >> so
>>>>>> >> >> > > perhaps it's not running. I assume you know how to stop/start the
>>>>>> >> >> > > agent on KVM via 'service cloudstack-agent'.
>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>>>>>> >> >> mike.tutkowski@solidfire.com>
>>>>>> >> >> > > wrote:
>>>>>> >> >> > >
>>>>>> >> >> > > > Hey Marcus,
>>>>>> >> >> > > >
>>>>>> >> >> > > > I haven't yet been able to test my new code, but I thought
>>>>>> you
>>>>>> >> would
>>>>>> >> >> > be a
>>>>>> >> >> > > > good person to ask to review it:
>>>>>> >> >> > > >
>>>>>> >> >> > > >
>>>>>> >> >> > > >
>>>>>> >> >> > >
>>>>>> >> >> >
>>>>>> >> >>
>>>>>> >>
>>>>>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>>>>>> >> >> > > >
>>>>>> >> >> > > > All it is supposed to do is attach and detach a data disk
>>>>>> (that
>>>>>> >> has
>>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
>>>>>> >> happens to
>>>>>> >> >> > be
>>>>>> >> >> > > > from SolidFire-backed storage - where we have a 1:1 mapping
>>>>>> >> between a
>>>>>> >> >> > > > CloudStack volume and a data disk.
>>>>>> >> >> > > >
>>>>>> >> >> > > > There is no support for hypervisor snapshots or stuff like
>>>>>> that
>>>>>> >> >> > (likely a
>>>>>> >> >> > > > future release)...just attaching and detaching a data disk
>>>>>> in 4.3.
>>>>>> >> >> > > >
>>>>>> >> >> > > > Thanks!
>>>>>> >> >> > > >
>>>>>> >> >> > > >
>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>>>>>> >> >> > > >
>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
>>>>>> cloudstack-agent
>>>>>> >> >> first.
>>>>>> >> >> > > > Would
>>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get install
>>>>>> >> >> > cloudstack-agent.
>>>>>> >> >> > > > >
>>>>>> >> >> > > > >
>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>>>>>> >> >> > > > >
>>>>>> >> >> > > > >> I get the same error running the command manually:
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>>> >> /usr/sbin/service
>>>>>> >> >> > > > >> cloudstack-agent status
>>>>>> >> >> > > > >>  * could not access PID file for cloudstack-agent
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >>> agent.log looks OK to me:
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
>>>>>> >> >> (main:null)
>>>>>> >> >> > > > Agent
>>>>>> >> >> > > > >>> started
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
>>>>>> >> >> (main:null)
>>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
>>>>>> >> >> (main:null)
>>>>>> >> >> > > > >>> agent.properties found at
>>>>>> >> /etc/cloudstack/agent/agent.properties
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
>>>>>> >> >> (main:null)
>>>>>> >> >> > > > >>> Defaulting to using properties file for storage
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
>>>>>> >> >> (main:null)
>>>>>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
>>>>>> >> (main:null)
>>>>>> >> >> > > log4j
>>>>>> >> >> > > > >>> configuration found at
>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>>>>>> (main:null)
>>>>>> >> id
>>>>>> >> >> > is 3
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>>>>>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
>>>>>> (main:null)
>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>>>>>> >> >> scripts/network/domr/kvm
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was important.
>>>>>> This
>>>>>> >> seems
>>>>>> >> >> to
>>>>>> >> >> > > be
>>>>>> >> >> > > > a
>>>>>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>>>> cloudstack-agent
>>>>>> >> status
>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID
>>>>>> file for
>>>>>> >> >> > > > >>> cloudstack-agent
>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>>>> cloudstack-agent
>>>>>> >> start
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>>>>>> >> >> > > shadowsor@gmail.com
>>>>>> >> >> > > > >wrote:
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was the
>>>>>> agent log
>>>>>> >> for
>>>>>> >> >> > > some
>>>>>> >> >> > > > >>>> reason. Is the agent started? That might be the place
>>>>>> to
>>>>>> >> look.
>>>>>> >> >> > There
>>>>>> >> >> > > > is
>>>>>> >> >> > > > >>>> an
>>>>>> >> >> > > > >>>> agent log for the agent and one for the setup when it
>>>>>> adds
>>>>>> >> the
>>>>>> >> >> > host,
>>>>>> >> >> > > > >>>> both
>>>>>> >> >> > > > >>>> in /var/log
>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>>>>>> >> >> > > > >>>> wrote:
>>>>>> >> >> > > > >>>>
>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP address or
>>>>>> the KVM
>>>>>> >> host?
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > I see this for my host Global Settings parameter:
>>>>>> >> >> > > > >>>> > host (The ip address of management server): 192.168.233.1
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>>>>>> >> >> host=192.168.233.1
>>>>>> >> >> > > > value.
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
>>>>>> >> >> > > > >>>> shadowsor@gmail.com
>>>>>> >> >> > > > >>>> > >wrote:
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10?
>>>>>> But you
>>>>>> >> >> tried
>>>>>> >> >> > > to
>>>>>> >> >> > > > >>>> telnet
>>>>>> >> >> > > > >>>> > to
>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change that
>>>>>> in
>>>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you
>>>>>> may want
>>>>>> >> to
>>>>>> >> >> > edit
>>>>>> >> >> > > > the
>>>>>> >> >> > > > >>>> > config
>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>>>>>> >> >> > > > >>>> > >
>>>>>> >> >> > > > >>>> > > wrote:
>>>>>> >> >> > > > >>>> > >
>>>>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file
>>>>>> looks
>>>>>> >> like, if
>>>>>> >> >> > > that
>>>>>> >> >> > > > >>>> is of
>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT
>>>>>> network
>>>>>> >> >> > VMware
>>>>>> >> >> > > > >>>> Fusion
>>>>>> >> >> > > > >>>> > set
>>>>>> >> >> > > > >>>> > > > up):
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > auto lo
>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > auto eth0
>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > auto cloudbr0
>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
>>>>>> >> >> > > > >>>> > > >     bridge_stp off
>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
>>>>>> >> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2
>>>>>> metric 1
>>>>>> >> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike
>>>>>> Tutkowski <
>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is from the MS
>>>>>> log
>>>>>> >> >> (below).
>>>>>> >> >> > > > >>>> Discovery
>>>>>> >> >> > > > >>>> > > > timed
>>>>>> >> >> > > > >>>> > > > > out.
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My network
>>>>>> settings
>>>>>> >> >> > > shouldn't
>>>>>> >> >> > > > >>>> have
>>>>>> >> >> > > > >>>> > > > changed
>>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the MS
>>>>>> host and
>>>>>> >> vice
>>>>>> >> >> > > > versa.
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM on
>>>>>> the KVM
>>>>>> >> host
>>>>>> >> >> > and
>>>>>> >> >> > > > >>>> ping from
>>>>>> >> >> > > > >>>> > > it
>>>>>> >> >> > > > >>>> > > > > to the MS host.
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also
>>>>>> running
>>>>>> >> the
>>>>>> >> >> CS
>>>>>> >> >> > > MS)
>>>>>> >> >> > > > >>>> to the
>>>>>> >> >> > > > >>>> > VM
>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>> Timeout,
>>>>>> >> to
>>>>>> >> >> > wait
>>>>>> >> >> > > > for
>>>>>> >> >> > > > >>>> the
>>>>>> >> >> > > > >>>> > > host
>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>> Unable to
>>>>>> >> >> find
>>>>>> >> >> > > the
>>>>>> >> >> > > > >>>> server
>>>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>> Could not
>>>>>> >> >> find
>>>>>> >> >> > > > >>>> exception:
>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in
>>>>>> error code
>>>>>> >> >> list
>>>>>> >> >> > > for
>>>>>> >> >> > > > >>>> > > exceptions
>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>> Exception:
>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException:
>>>>>> Unable to add
>>>>>> >> >> the
>>>>>> >> >> > > host
>>>>>> >> >> > > > >>>> > > > > at
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>>
>>>>>> >> >> > > >
>>>>>> >> >> > >
>>>>>> >> >> >
>>>>>> >> >>
>>>>>> >>
>>>>>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my KVM
>>>>>> host to
>>>>>> >> >> the
>>>>>> >> >> > MS
>>>>>> >> >> > > > >>>> host's
>>>>>> >> >> > > > >>>> > > 8250
>>>>>> >> >> > > > >>>> > > > > port:
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1
>>>>>> 8250
>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > --
>>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
>>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > >>>> > > > o: 303.746.7302
>>>>>> >> >> > > > >>>> > > > Advancing the way the world uses the
>>>>>> >> >> > > > >>>> > > > cloud<
>>>>>> >> http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > >>>> > > > *™*
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > --
>>>>>> >> >> > > > >>>> > *Mike Tutkowski*
>>>>>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > >>>> > o: 303.746.7302
>>>>>> >> >> > > > >>>> > Advancing the way the world uses the
>>>>>> >> >> > > > >>>> > cloud<
>>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > >>>> > *™*
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>>
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>> --
>>>>>> >> >> > > > >>> *Mike Tutkowski*
>>>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > >>> o: 303.746.7302
>>>>>> >> >> > > > >>> Advancing the way the world uses the cloud<
>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > >>> *™*
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >> --
>>>>>> >> >> > > > >> *Mike Tutkowski*
>>>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > >> o: 303.746.7302
>>>>>> >> >> > > > >> Advancing the way the world uses the cloud<
>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > >> *™*
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >
>>>>>> >> >> > > > >
>>>>>> >> >> > > > >
>>>>>> >> >> > > > > --
>>>>>> >> >> > > > > *Mike Tutkowski*
>>>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > > o: 303.746.7302
>>>>>> >> >> > > > > Advancing the way the world uses the cloud<
>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > > *™*
>>>>>> >> >> > > > >
>>>>>> >> >> > > >
>>>>>> >> >> > > >
>>>>>> >> >> > > >
>>>>>> >> >> > > > --
>>>>>> >> >> > > > *Mike Tutkowski*
>>>>>> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > o: 303.746.7302
>>>>>> >> >> > > > Advancing the way the world uses the
>>>>>> >> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > *™*
>>>>>> >> >> > > >
>>>>>> >> >> > >
>>>>>> >> >> >
>>>>>> >> >> >
>>>>>> >> >> >
>>>>>> >> >> > --
>>>>>> >> >> > *Mike Tutkowski*
>>>>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > e: mike.tutkowski@solidfire.com
>>>>>> >> >> > o: 303.746.7302
>>>>>> >> >> > Advancing the way the world uses the
>>>>>> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > *™*
>>>>>> >> >> >
>>>>>> >> >>
>>>>>> >> >
>>>>>> >> >
>>>>>> >> >
>>>>>> >> > --
>>>>>> >> > *Mike Tutkowski*
>>>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> > e: mike.tutkowski@solidfire.com
>>>>>> >> > o: 303.746.7302
>>>>>> >> > Advancing the way the world uses the
>>>>>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> >> > *™*
>>>>>> >>
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > --
>>>>>> > *Mike Tutkowski*
>>>>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> > e: mike.tutkowski@solidfire.com
>>>>>> > o: 303.746.7302
>>>>>> > Advancing the way the world uses the
>>>>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> > *™*
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Mike Tutkowski*
>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> e: mike.tutkowski@solidfire.com
>>>>> o: 303.746.7302
>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> *™*
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Mike Tutkowski*
>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>> e: mike.tutkowski@solidfire.com
>>>> o: 303.746.7302
>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>> *™*
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Again, not so familiar with Ubuntu. I'd imagine that jna would be set
up as a dependency of the .deb packages.

sudo apt-get install libjna-java
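
If it turns out jna is already installed, that NoSuchMethodError on
com.sun.jna.Native.free() usually points at two different jna versions
landing on the agent's classpath (one bundled, one from the system) rather
than jna being missing altogether. A quick way to check, as a sketch (the
agent lib path is a guess for the Ubuntu packaging):

dpkg -l | grep -i jna
ls /usr/share/java | grep -i jna
# wherever the agent's jars were installed, e.g.:
ls /usr/share/cloudstack-agent/lib | grep -i jna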

On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> Was there a step in the docs I may have missed where I was to install them?
> I don't recall installing them, but there are several steps and I might
> have forgotten that I did install them, too.
>
> I can check.
>
>
> On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> are you missing the jna packages?
>>
>> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > I basically just leveraged the code you provided to redirect the output
>> on
>> > Ubuntu.
>> >
>> > Here is the standard err:
>> >
>> > log4j:WARN No appenders could be found for logger
>> > (org.apache.commons.httpclient.params.DefaultHttpParams).
>> > log4j:WARN Please initialize the log4j system properly.
>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
>> > more info.
>> > java.lang.reflect.InvocationTargetException
>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > at
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> > at
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> > at java.lang.reflect.Method.invoke(Method.java:606)
>> > at
>> >
>> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
>> > at org.libvirt.Library.free(Unknown Source)
>> > at org.libvirt.Connect.getCapabilities(Unknown Source)
>> > at
>> >
>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
>> > at
>> >
>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
>> > at com.cloud.agent.Agent.<init>(Agent.java:168)
>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
>> > at
>> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
>> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
>> > ... 5 more
>> > Cannot start daemon
>> > Service exit with a return value of 5
>> >
>> >
>> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
>> > mike.tutkowski@solidfire.com> wrote:
>> >
>> >> Sounds good.
>> >>
>> >> Thanks, Marcus! :)
>> >>
>> >>
>> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <shadowsor@gmail.com
>> >wrote:
>> >>
>> >>> Ok, so the next step is to track that stdout and see if you can see
>> >>> what jsvc complains about when it fails to start up the service.
>> >>>
>> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
>> >>> <mi...@solidfire.com> wrote:
>> >>> > These also look good:
>> >>> >
>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>> >>> > x86_64
>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system
>> list
>> >>> >  Id Name                 State
>> >>> > ----------------------------------
>> >>> >
>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>> >>> > /var/run/libvirt/libvirt-sock
>> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
>> /var/run/libvirt/libvirt-sock
>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>> >>> >
>> >>> >
>> >>> >
>> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>> >>> > mike.tutkowski@solidfire.com> wrote:
>> >>> >
>> >>> >> This is my new agent.properties file (with comments removed...looks
>> >>> >> decent):
>> >>> >>
>> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>> >>> >> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>> >>> >> workers=5
>> >>> >> host=192.168.233.1
>> >>> >> port=8250
>> >>> >> cluster=1
>> >>> >> pod=1
>> >>> >> zone=1
>> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>> >>> >> private.network.device=cloudbr0
>> >>> >> public.network.device=cloudbr0
>> >>> >> guest.network.device=cloudbr0
>> >>> >>
>> >>> >> Yeah, I was always writing stuff out using the logger. I should look
>> >>> into
>> >>> >> redirecting stdout and stderr.
>> >>> >>
>> >>> >> Here were my steps to start and check the process status:
>> >>> >>
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>> >>> >> cloudstack-agent start
>> >>> >>  * Starting CloudStack Agent cloudstack-agent
>> >>> >>                                                      [ OK ]
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
>> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto
>> jsvc
>> >>> >>
>> >>> >> Also, this might be of interest:
>> >>> >>
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>> >>> >> kvm_intel             137721  0
>> >>> >> kvm                   415549  1 kvm_intel
>> >>> >>
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
>> >>> >> /proc/cpuinfo
>> >>> >> 1
>> >>> >>
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>> >>> >> INFO: /dev/kvm exists
>> >>> >> KVM acceleration can be used
>> >>> >>
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
>> /proc/cpuinfo
>> >>> >> 1
>> >>> >>
>> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> >>> >wrote:
>> >>> >>
>> >>> >>> So you:
>> >>> >>>
>> >>> >>> 1. run that command
>> >>> >>> 2. get a brand new agent.properties as a result
>> >>> >>> 3. start the service
>> >>> >>>
>> >>> >>> but you don't see it in the process table?
>> >>> >>>
>> >>> >>> The agent's STDOUT doesn't go to the agent log, only log4j stuff.
>> So
>> >>> >>> if there were an error not printed via logger you'd not see it.
>>  I'm
>> >>> >>> not as familiar with the debian/ubuntu stuff off the top of my
>> head,
>> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>> >>> >>>
>> >>> >>> start() {
>> >>> >>>     echo -n $"Starting $PROGNAME: "
>> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
>> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
>> >>> >>>         RETVAL=$?
>> >>> >>>         echo
>> >>> >>>     else
>> >>> >>>
>> >>> >>>
>> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
>> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
>> >>> >>>
>> >>> >>> Out of curiosity, what do you get when you do 'lsmod | grep kvm' ?
>> I
>> >>> >>> know you didn't end up using it, but the devcloud-kvm instructions
>> for
>> >>> >>> vmware fusion tell you to ensure that your guest has hardware
>> >>> >>> virtualization passthrough enabled, I'm wondering if it isn't.
>> >>> >>>
>> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>> >>> >>> <mi...@solidfire.com> wrote:
>> >>> >>> > These results look good:
>> >>> >>> >
>> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
>> 192.168.233.1
>> >>> -z 1
>> >>> >>> -p 1
>> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>> >>> >>> > Starting to configure your system:
>> >>> >>> > Configure Apparmor ...        [OK]
>> >>> >>> > Configure Network ...         [OK]
>> >>> >>> > Configure Libvirt ...         [OK]
>> >>> >>> > Configure Firewall ...        [OK]
>> >>> >>> > Configure Nfs ...             [OK]
>> >>> >>> > Configure cloudAgent ...      [OK]
>> >>> >>> > CloudStack Agent setup is done!
>> >>> >>> >
>> >>> >>> > However, these results are the same:
>> >>> >>> >
>> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto
>> >>> jsvc
>> >>> >>> >
>> >>> >>> >
>> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>> >>> >>> > mike.tutkowski@solidfire.com> wrote:
>> >>> >>> >
>> >>> >>> >> This appears to be the offending method:
>> >>> >>> >>
>> >>> >>> >>     public String parseCapabilitiesXML(String capXML) {
>> >>> >>> >>
>> >>> >>> >>         if (!_initialized) {
>> >>> >>> >>
>> >>> >>> >>             return null;
>> >>> >>> >>
>> >>> >>> >>         }
>> >>> >>> >>
>> >>> >>> >>         try {
>> >>> >>> >>
>> >>> >>> >>             _sp.parse(new InputSource(new StringReader(capXML)),
>> >>> this);
>> >>> >>> >>
>> >>> >>> >>             return _capXML.toString();
>> >>> >>> >>
>> >>> >>> >>         } catch (SAXException se) {
>> >>> >>> >>
>> >>> >>> >>             s_logger.warn(se.getMessage());
>> >>> >>> >>
>> >>> >>> >>         } catch (IOException ie) {
>> >>> >>> >>
>> >>> >>> >>             s_logger.error(ie.getMessage());
>> >>> >>> >>
>> >>> >>> >>         }
>> >>> >>> >>
>> >>> >>> >>         return null;
>> >>> >>> >>
>> >>> >>> >>     }
>> >>> >>> >>
>> >>> >>> >>
>> >>> >>> >> The logging I do from this method (not shown above), however,
>> >>> doesn't
>> >>> >>> seem
>> >>> >>> >> to end up in agent.log. Not sure why that is.
>> >>> >>> >>
>> >>> >>> >> We invoke this method and I log we're in this method as the
>> first
>> >>> >>> thing I
>> >>> >>> >> do, but it doesn't show up in agent.log.
>> >>> >>> >>
>> >>> >>> >> The last message in agent.log is a line saying we are right
>> before
>> >>> the
>> >>> >>> >> call to this method.
>> >>> >>> >>
>> >>> >>> >>
>> >>> >>>
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> --
>> >>> >> *Mike Tutkowski*
>> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
>> >>> >> e: mike.tutkowski@solidfire.com
>> >>> >> o: 303.746.7302
>> >>> >> Advancing the way the world uses the cloud<
>> >>> http://solidfire.com/solution/overview/?video=play>
>> >>> >> *™*
>> >>> >>
>> >>> >
>> >>> >
>> >>> >
>> >>> > --
>> >>> > *Mike Tutkowski*
>> >>> > *Senior CloudStack Developer, SolidFire Inc.*
>> >>> > e: mike.tutkowski@solidfire.com
>> >>> > o: 303.746.7302
>> >>> > Advancing the way the world uses the
>> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >>> > *™*
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> *Mike Tutkowski*
>> >> *Senior CloudStack Developer, SolidFire Inc.*
>> >> e: mike.tutkowski@solidfire.com
>> >> o: 303.746.7302
>> >> Advancing the way the world uses the cloud<
>> http://solidfire.com/solution/overview/?video=play>
>> >> *™*
>> >>
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Sounds good.

I can:

1) sudo apt-get remove --purge cloudstack-agent

2) sudo apt-get clean

3) Switch to 4.2 branch

4) mvn -P developer,systemvm clean install

5) mvn -P developer -pl developer,tools/devcloud -Ddeploydb

6) Regenerate DEBs and install them (rough commands sketched below)
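
For step 6, a rough sketch (assuming the DEBs are built with
dpkg-buildpackage from the 4.2 source tree; package names and versions
may need adjusting):

# from the root of the 4.2 source tree
dpkg-buildpackage -uc -us

# the resulting .deb files land in the parent directory
cd ..
sudo dpkg -i cloudstack-common_4.2*.deb cloudstack-agent_4.2*.deb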



On Wed, Sep 25, 2013 at 6:13 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> You'll need to either remove the old debs or force install the new
> ones. Also, if any jars have moved location, you may have to delete
> the old ones in case they end up in the classpath of your jsvc
> command. I'd first try to generate 4.2 debs (or use the release
> artifacts), remove the old packages, install the new ones, then you'll
> have to clear the database and start fresh on 4.2.
>
> On Wed, Sep 25, 2013 at 6:09 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > By simply switching to 4.2, will CS use the proper version of Libvirt or
> is
> > there more I need to do since I've already run 4.3 on this Ubuntu
> install?
> >
> > Thanks
> >
> >
> > On Wed, Sep 25, 2013 at 6:07 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> >> It's been a bit rough getting this up and running, but at least I've
> been
> >> learning about how CloudStack works on KVM, so that's really good.
> >>
> >>
> >> On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <
> >> mike.tutkowski@solidfire.com> wrote:
> >>
> >>> I mean switch over to 4.2 from master. :)
> >>>
> >>>
> >>> On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <
> >>> mike.tutkowski@solidfire.com> wrote:
> >>>
> >>>> I can switch my branch over to master. I'm afraid master is not
> working
> >>>> with Libvirt on Ubuntu, as well.
> >>>>
> >>>>
> >>>> On Wed, Sep 25, 2013 at 5:55 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >>>>
> >>>>> It's harder still that you're trying to use master. I know 4.2 works
> >>>>> on ubuntu, but master is a minefield sometimes. Maybe that's not the
> >>>>> problem, but I do see emails going back and forth about libvirt/jna
> >>>>> versions, just need to read them in detail.
> >>>>>
> >>>>>  It's a shame that you haven't gotten a working config up yet prior
> to
> >>>>> development work (say a 4.2 that we know works), because we don't
> have
> >>>>> any clues as to whether it's your setup or master.
> >>>>>
> >>>>> On Wed, Sep 25, 2013 at 5:49 PM, Marcus Sorensen <
> shadowsor@gmail.com>
> >>>>> wrote:
> >>>>> > ok, just a guess. I'm assuming it's still this:
> >>>>> >
> >>>>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
> >>>>> >
> >>>>> > On Wed, Sep 25, 2013 at 5:48 PM, Mike Tutkowski
> >>>>> > <mi...@solidfire.com> wrote:
> >>>>> >> mtutkowski@ubuntu:~$ sudo apt-get install libjna-java
> >>>>> >> Reading package lists... Done
> >>>>> >> Building dependency tree
> >>>>> >> Reading state information... Done
> >>>>> >> libjna-java is already the newest version.
> >>>>> >> libjna-java set to manually installed.
> >>>>> >> 0 upgraded, 0 newly installed, 0 to remove and 468 not upgraded.
> >>>>> >>
> >>>>> >>
> >>>>> >> On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski <
> >>>>> >> mike.tutkowski@solidfire.com> wrote:
> >>>>> >>
> >>>>> >>> Was there a step in the docs I may have missed where I was to
> >>>>> install
> >>>>> >>> them? I don't recall installing them, but there are several steps
> >>>>> and I
> >>>>> >>> might have forgotten that I did install them, too.
> >>>>> >>>
> >>>>> >>> I can check.
> >>>>> >>>
> >>>>> >>>
> >>>>> >>> On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <
> >>>>> shadowsor@gmail.com>wrote:
> >>>>> >>>
> >>>>> >>>> are you missing the jna packages?
> >>>>> >>>>
> >>>>> >>>> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
> >>>>> >>>> <mi...@solidfire.com> wrote:
> >>>>> >>>> > I basically just leveraged the code you provided to redirect
> the
> >>>>> output
> >>>>> >>>> on
> >>>>> >>>> > Ubuntu.
> >>>>> >>>> >
> >>>>> >>>> > Here is the standard err:
> >>>>> >>>> >
> >>>>> >>>> > log4j:WARN No appenders could be found for logger
> >>>>> >>>> > (org.apache.commons.httpclient.params.DefaultHttpParams).
> >>>>> >>>> > log4j:WARN Please initialize the log4j system properly.
> >>>>> >>>> > log4j:WARN See
> >>>>> http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> >>>>> >>>> > more info.
> >>>>> >>>> > java.lang.reflect.InvocationTargetException
> >>>>> >>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>>>> >>>> > at
> >>>>> >>>> >
> >>>>> >>>>
> >>>>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >>>>> >>>> > at
> >>>>> >>>> >
> >>>>> >>>>
> >>>>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>>>> >>>> > at java.lang.reflect.Method.invoke(Method.java:606)
> >>>>> >>>> > at
> >>>>> >>>> >
> >>>>> >>>>
> >>>>>
> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> >>>>> >>>> > Caused by: java.lang.NoSuchMethodError:
> >>>>> com.sun.jna.Native.free(J)V
> >>>>> >>>> > at org.libvirt.Library.free(Unknown Source)
> >>>>> >>>> > at org.libvirt.Connect.getCapabilities(Unknown Source)
> >>>>> >>>> > at
> >>>>> >>>> >
> >>>>> >>>>
> >>>>>
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
> >>>>> >>>> > at
> >>>>> >>>> >
> >>>>> >>>>
> >>>>>
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
> >>>>> >>>> > at com.cloud.agent.Agent.<init>(Agent.java:168)
> >>>>> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
> >>>>> >>>> > at
> >>>>> >>>>
> >>>>>
> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
> >>>>> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
> >>>>> >>>> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
> >>>>> >>>> > ... 5 more
> >>>>> >>>> > Cannot start daemon
> >>>>> >>>> > Service exit with a return value of 5
> >>>>> >>>> >
> >>>>> >>>> >
> >>>>> >>>> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
> >>>>> >>>> > mike.tutkowski@solidfire.com> wrote:
> >>>>> >>>> >
> >>>>> >>>> >> Sounds good.
> >>>>> >>>> >>
> >>>>> >>>> >> Thanks, Marcus! :)
> >>>>> >>>> >>
> >>>>> >>>> >>
> >>>>> >>>> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <
> >>>>> shadowsor@gmail.com
> >>>>> >>>> >wrote:
> >>>>> >>>> >>
> >>>>> >>>> >>> Ok, so the next step is to track that stdout and see if you
> >>>>> can see
> >>>>> >>>> >>> what jsvc complains about when it fails to start up the
> >>>>> service.
> >>>>> >>>> >>>
> >>>>> >>>> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
> >>>>> >>>> >>> <mi...@solidfire.com> wrote:
> >>>>> >>>> >>> > These also look good:
> >>>>> >>>> >>> >
> >>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
> >>>>> >>>> >>> > x86_64
> >>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c
> >>>>> qemu:///system
> >>>>> >>>> list
> >>>>> >>>> >>> >  Id Name                 State
> >>>>> >>>> >>> > ----------------------------------
> >>>>> >>>> >>> >
> >>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
> >>>>> >>>> >>> > /var/run/libvirt/libvirt-sock
> >>>>> >>>> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
> >>>>> >>>> /var/run/libvirt/libvirt-sock
> >>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
> >>>>> >>>> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
> >>>>> >>>> >>> >
> >>>>> >>>> >>> >
> >>>>> >>>> >>> >
> >>>>> >>>> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
> >>>>> >>>> >>> > mike.tutkowski@solidfire.com> wrote:
> >>>>> >>>> >>> >
> >>>>> >>>> >>> >> This is my new agent.properties file (with comments
> >>>>> removed...looks
> >>>>> >>>> >>> >> decent):
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
> >>>>> >>>> >>> >>
> >>>>> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
> >>>>> >>>> >>> >> workers=5
> >>>>> >>>> >>> >> host=192.168.233.1
> >>>>> >>>> >>> >> port=8250
> >>>>> >>>> >>> >> cluster=1
> >>>>> >>>> >>> >> pod=1
> >>>>> >>>> >>> >> zone=1
> >>>>> >>>> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
> >>>>> >>>> >>> >> private.network.device=cloudbr0
> >>>>> >>>> >>> >> public.network.device=cloudbr0
> >>>>> >>>> >>> >> guest.network.device=cloudbr0
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> Yeah, I was always writing stuff out using the logger. I
> >>>>> should
> >>>>> >>>> look
> >>>>> >>>> >>> into
> >>>>> >>>> >>> >> redirecting stdout and stderr.
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> Here were my steps to start and check the process status:
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> >>>>> /usr/sbin/service
> >>>>> >>>> >>> >> cloudstack-agent start
> >>>>> >>>> >>> >>  * Starting CloudStack Agent cloudstack-agent
> >>>>> >>>> >>> >>                                                      [
> OK ]
> >>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef |
> >>>>> grep jsvc
> >>>>> >>>> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep
> >>>>> --color=auto
> >>>>> >>>> jsvc
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> Also, this might be of interest:
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep
> kvm
> >>>>> >>>> >>> >> kvm_intel             137721  0
> >>>>> >>>> >>> >> kvm                   415549  1 kvm_intel
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c
> >>>>> '(vmx|svm)'
> >>>>> >>>> >>> >> /proc/cpuinfo
> >>>>> >>>> >>> >> 1
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
> >>>>> >>>> >>> >> INFO: /dev/kvm exists
> >>>>> >>>> >>> >> KVM acceleration can be used
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
> >>>>> >>>> /proc/cpuinfo
> >>>>> >>>> >>> >> 1
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
> >>>>> >>>> shadowsor@gmail.com
> >>>>> >>>> >>> >wrote:
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >>> So you:
> >>>>> >>>> >>> >>>
> >>>>> >>>> >>> >>> 1. run that command
> >>>>> >>>> >>> >>> 2. get a brand new agent.properties as a result
> >>>>> >>>> >>> >>> 3. start the service
> >>>>> >>>> >>> >>>
> >>>>> >>>> >>> >>> but you don't see it in the process table?
> >>>>> >>>> >>> >>>
> >>>>> >>>> >>> >>> The agent's STDOUT doesn't go to the agent log, only
> log4j
> >>>>> stuff.
> >>>>> >>>> So
> >>>>> >>>> >>> >>> if there were an error not printed via logger you'd not
> >>>>> see it.
> >>>>> >>>>  I'm
> >>>>> >>>> >>> >>> not as familiar with the debian/ubuntu stuff off the top
> >>>>> of my
> >>>>> >>>> head,
> >>>>> >>>> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
> >>>>> >>>> >>> >>>
> >>>>> >>>> >>> >>> start() {
> >>>>> >>>> >>> >>>     echo -n $"Starting $PROGNAME: "
> >>>>> >>>> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
> >>>>> >>>> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
> >>>>> >>>> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err
> -outfile
> >>>>> >>>> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
> >>>>> >>>> >>> >>>         RETVAL=$?
> >>>>> >>>> >>> >>>         echo
> >>>>> >>>> >>> >>>     else
> >>>>> >>>> >>> >>>
> >>>>> >>>> >>> >>>
> >>>>> >>>> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
> >>>>> >>>> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu
> does.
> >>>>> >>>> >>> >>>
> >>>>> >>>> >>> >>> Out of curiosity, what do you get when you do 'lsmod |
> >>>>> grep kvm'
> >>>>> >>>> ? I
> >>>>> >>>> >>> >>> know you didn't end up using it, but the devcloud-kvm
> >>>>> >>>> instructions for
> >>>>> >>>> >>> >>> vmware fusion tell you to ensure that your guest has
> >>>>> hardware
> >>>>> >>>> >>> >>> virtualization passthrough enabled, I'm wondering if it
> >>>>> isn't.
> >>>>> >>>> >>> >>>
> >>>>> >>>> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
> >>>>> >>>> >>> >>> <mi...@solidfire.com> wrote:
> >>>>> >>>> >>> >>> > These results look good:
> >>>>> >>>> >>> >>> >
> >>>>> >>>> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
> >>>>> >>>> 192.168.233.1
> >>>>> >>>> >>> -z 1
> >>>>> >>>> >>> >>> -p 1
> >>>>> >>>> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a
> >>>>> >>>> --pubNic=cloudbr0
> >>>>> >>>> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
> >>>>> >>>> >>> >>> > Starting to configure your system:
> >>>>> >>>> >>> >>> > Configure Apparmor ...        [OK]
> >>>>> >>>> >>> >>> > Configure Network ...         [OK]
> >>>>> >>>> >>> >>> > Configure Libvirt ...         [OK]
> >>>>> >>>> >>> >>> > Configure Firewall ...        [OK]
> >>>>> >>>> >>> >>> > Configure Nfs ...             [OK]
> >>>>> >>>> >>> >>> > Configure cloudAgent ...      [OK]
> >>>>> >>>> >>> >>> > CloudStack Agent setup is done!
> >>>>> >>>> >>> >>> >
> >>>>> >>>> >>> >>> > However, these results are the same:
> >>>>> >>>> >>> >>> >
> >>>>> >>>> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> >>>>> >>>> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep
> >>>>> >>>> --color=auto
> >>>>> >>>> >>> jsvc
> >>>>> >>>> >>> >>> >
> >>>>> >>>> >>> >>> >
> >>>>> >>>> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
> >>>>> >>>> >>> >>> > mike.tutkowski@solidfire.com> wrote:
> >>>>> >>>> >>> >>> >
> >>>>> >>>> >>> >>> >> This appears to be the offending method:
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>     public String parseCapabilitiesXML(String
> capXML) {
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>         if (!_initialized) {
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>             return null;
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>         }
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>         try {
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>             _sp.parse(new InputSource(new
> >>>>> >>>> StringReader(capXML)),
> >>>>> >>>> >>> this);
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>             return _capXML.toString();
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>         } catch (SAXException se) {
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>             s_logger.warn(se.getMessage());
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>         } catch (IOException ie) {
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>             s_logger.error(ie.getMessage());
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>         }
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>         return null;
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>     }
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >> The logging I do from this method (not shown above),
> >>>>> however,
> >>>>> >>>> >>> doesn't
> >>>>> >>>> >>> >>> seem
> >>>>> >>>> >>> >>> >> to end up in agent.log. Not sure why that is.
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >> We invoke this method and I log we're in this method
> as
> >>>>> the
> >>>>> >>>> first
> >>>>> >>>> >>> >>> thing I
> >>>>> >>>> >>> >>> >> do, but it doesn't show up in agent.log.
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >> The last message in agent.log is a line saying we are
> >>>>> right
> >>>>> >>>> before
> >>>>> >>>> >>> the
> >>>>> >>>> >>> >>> >> call to this method.
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>> >>
> >>>>> >>>> >>> >>>
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >> --
> >>>>> >>>> >>> >> *Mike Tutkowski*
> >>>>> >>>> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>> >>>> >>> >> e: mike.tutkowski@solidfire.com
> >>>>> >>>> >>> >> o: 303.746.7302
> >>>>> >>>> >>> >> Advancing the way the world uses the cloud<
> >>>>> >>>> >>> http://solidfire.com/solution/overview/?video=play>
> >>>>> >>>> >>> >> *™*
> >>>>> >>>> >>> >>
> >>>>> >>>> >>> >
> >>>>> >>>> >>> >
> >>>>> >>>> >>> >
> >>>>> >>>> >>> > --
> >>>>> >>>> >>> > *Mike Tutkowski*
> >>>>> >>>> >>> > *Senior CloudStack Developer, SolidFire Inc.*
> >>>>> >>>> >>> > e: mike.tutkowski@solidfire.com
> >>>>> >>>> >>> > o: 303.746.7302
> >>>>> >>>> >>> > Advancing the way the world uses the
> >>>>> >>>> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>> >>>> >>> > *™*
> >>>>> >>>> >>>
> >>>>> >>>> >>
> >>>>> >>>> >>
> >>>>> >>>> >>
> >>>>> >>>> >> --
> >>>>> >>>> >> *Mike Tutkowski*
> >>>>> >>>> >> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>> >>>> >> e: mike.tutkowski@solidfire.com
> >>>>> >>>> >> o: 303.746.7302
> >>>>> >>>> >> Advancing the way the world uses the cloud<
> >>>>> >>>> http://solidfire.com/solution/overview/?video=play>
> >>>>> >>>> >> *™*
> >>>>> >>>> >>
> >>>>> >>>> >
> >>>>> >>>> >
> >>>>> >>>> >
> >>>>> >>>> > --
> >>>>> >>>> > *Mike Tutkowski*
> >>>>> >>>> > *Senior CloudStack Developer, SolidFire Inc.*
> >>>>> >>>> > e: mike.tutkowski@solidfire.com
> >>>>> >>>> > o: 303.746.7302
> >>>>> >>>> > Advancing the way the world uses the
> >>>>> >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>> >>>> > *™*
> >>>>> >>>>
> >>>>> >>>
> >>>>> >>>
> >>>>> >>>
> >>>>> >>> --
> >>>>> >>> *Mike Tutkowski*
> >>>>> >>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>> >>> e: mike.tutkowski@solidfire.com
> >>>>> >>> o: 303.746.7302
> >>>>> >>> Advancing the way the world uses the cloud<
> >>>>> http://solidfire.com/solution/overview/?video=play>
> >>>>> >>> *™*
> >>>>> >>>
> >>>>> >>
> >>>>> >>
> >>>>> >>
> >>>>> >> --
> >>>>> >> *Mike Tutkowski*
> >>>>> >> *Senior CloudStack Developer, SolidFire Inc.*
> >>>>> >> e: mike.tutkowski@solidfire.com
> >>>>> >> o: 303.746.7302
> >>>>> >> Advancing the way the world uses the
> >>>>> >> cloud<http://solidfire.com/solution/overview/?video=play>
> >>>>> >> *™*
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> *Mike Tutkowski*
> >>>> *Senior CloudStack Developer, SolidFire Inc.*
> >>>> e: mike.tutkowski@solidfire.com
> >>>> o: 303.746.7302
> >>>> Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> >>>> *™*
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> *Mike Tutkowski*
> >>> *Senior CloudStack Developer, SolidFire Inc.*
> >>> e: mike.tutkowski@solidfire.com
> >>> o: 303.746.7302
> >>> Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> >>> *™*
> >>>
> >>
> >>
> >>
> >> --
> >> *Mike Tutkowski*
> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> e: mike.tutkowski@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> >> *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
You'll need to either remove the old debs or force install the new
ones. Also, if any jars have moved location, you may have to delete
the old ones in case they end up in the classpath of your jsvc
command. I'd first try to generate 4.2 debs (or use the release
artifacts), remove the old packages, install the new ones, then you'll
have to clear the database and start fresh on 4.2.
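
Concretely, a rough sequence for that on Ubuntu might be (the
/usr/share/cloudstack-agent/lib path is an assumption about where the
agent debs drop their jars, so double-check it on your install):

# purge the old 4.3 packages
sudo apt-get remove --purge cloudstack-agent cloudstack-common

# check for stale jars where jsvc builds its classpath, and clear them
ls /usr/share/cloudstack-agent/lib/
sudo rm -f /usr/share/cloudstack-agent/lib/*.jar   # only if leftovers remain

# install the freshly built 4.2 debs
sudo dpkg -i cloudstack-common_4.2*.deb cloudstack-agent_4.2*.deb

# wipe and redeploy the database for a clean start on 4.2
mvn -P developer -pl developer,tools/devcloud -Ddeploydb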

On Wed, Sep 25, 2013 at 6:09 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> By simply switching to 4.2, will CS use the proper version of Libvirt or is
> there more I need to do since I've already run 4.3 on this Ubuntu install?
>
> Thanks
>
>
> On Wed, Sep 25, 2013 at 6:07 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> It's been a bit rough getting this up and running, but at least I've been
>> learning about how CloudStack works on KVM, so that's really good.
>>
>>
>> On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> I mean switch over to 4.2 from master. :)
>>>
>>>
>>> On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <
>>> mike.tutkowski@solidfire.com> wrote:
>>>
>>>> I can switch my branch over to master. I'm afraid master is not working
>>>> with Libvirt on Ubuntu, as well.
>>>>
>>>>
>>>> On Wed, Sep 25, 2013 at 5:55 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>>
>>>>> It's harder still that you're trying to use master. I know 4.2 works
>>>>> on ubuntu, but master is a minefield sometimes. Maybe that's not the
>>>>> problem, but I do see emails going back and forth about libvirt/jna
>>>>> versions, just need to read them in detail.
>>>>>
>>>>>  It's a shame that you haven't gotten a working config up yet prior to
>>>>> development work (say a 4.2 that we know works), because we don't have
>>>>> any clues as to whether it's your setup or master.
>>>>>
>>>>> On Wed, Sep 25, 2013 at 5:49 PM, Marcus Sorensen <sh...@gmail.com>
>>>>> wrote:
>>>>> > ok, just a guess. I'm assuming it's still this:
>>>>> >
>>>>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
>>>>> >
>>>>> > On Wed, Sep 25, 2013 at 5:48 PM, Mike Tutkowski
>>>>> > <mi...@solidfire.com> wrote:
>>>>> >> mtutkowski@ubuntu:~$ sudo apt-get install libjna-java
>>>>> >> Reading package lists... Done
>>>>> >> Building dependency tree
>>>>> >> Reading state information... Done
>>>>> >> libjna-java is already the newest version.
>>>>> >> libjna-java set to manually installed.
>>>>> >> 0 upgraded, 0 newly installed, 0 to remove and 468 not upgraded.
>>>>> >>
>>>>> >>
>>>>> >> On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski <
>>>>> >> mike.tutkowski@solidfire.com> wrote:
>>>>> >>
>>>>> >>> Was there a step in the docs I may have missed where I was to
>>>>> install
>>>>> >>> them? I don't recall installing them, but there are several steps
>>>>> and I
>>>>> >>> might have forgotten that I did install them, too.
>>>>> >>>
>>>>> >>> I can check.
>>>>> >>>
>>>>> >>>
>>>>> >>> On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <
>>>>> shadowsor@gmail.com>wrote:
>>>>> >>>
>>>>> >>>> are you missing the jna packages?
>>>>> >>>>
>>>>> >>>> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
>>>>> >>>> <mi...@solidfire.com> wrote:
>>>>> >>>> > I basically just leveraged the code you provided to redirect the
>>>>> output
>>>>> >>>> on
>>>>> >>>> > Ubuntu.
>>>>> >>>> >
>>>>> >>>> > Here is the standard err:
>>>>> >>>> >
>>>>> >>>> > log4j:WARN No appenders could be found for logger
>>>>> >>>> > (org.apache.commons.httpclient.params.DefaultHttpParams).
>>>>> >>>> > log4j:WARN Please initialize the log4j system properly.
>>>>> >>>> > log4j:WARN See
>>>>> http://logging.apache.org/log4j/1.2/faq.html#noconfig for
>>>>> >>>> > more info.
>>>>> >>>> > java.lang.reflect.InvocationTargetException
>>>>> >>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> >>>> > at
>>>>> >>>> >
>>>>> >>>>
>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>> >>>> > at
>>>>> >>>> >
>>>>> >>>>
>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>> >>>> > at java.lang.reflect.Method.invoke(Method.java:606)
>>>>> >>>> > at
>>>>> >>>> >
>>>>> >>>>
>>>>> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
>>>>> >>>> > Caused by: java.lang.NoSuchMethodError:
>>>>> com.sun.jna.Native.free(J)V
>>>>> >>>> > at org.libvirt.Library.free(Unknown Source)
>>>>> >>>> > at org.libvirt.Connect.getCapabilities(Unknown Source)
>>>>> >>>> > at
>>>>> >>>> >
>>>>> >>>>
>>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
>>>>> >>>> > at
>>>>> >>>> >
>>>>> >>>>
>>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
>>>>> >>>> > at com.cloud.agent.Agent.<init>(Agent.java:168)
>>>>> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
>>>>> >>>> > at
>>>>> >>>>
>>>>> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
>>>>> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
>>>>> >>>> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
>>>>> >>>> > ... 5 more
>>>>> >>>> > Cannot start daemon
>>>>> >>>> > Service exit with a return value of 5
>>>>> >>>> >
>>>>> >>>> >
>>>>> >>>> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
>>>>> >>>> > mike.tutkowski@solidfire.com> wrote:
>>>>> >>>> >
>>>>> >>>> >> Sounds good.
>>>>> >>>> >>
>>>>> >>>> >> Thanks, Marcus! :)
>>>>> >>>> >>
>>>>> >>>> >>
>>>>> >>>> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <
>>>>> shadowsor@gmail.com
>>>>> >>>> >wrote:
>>>>> >>>> >>
>>>>> >>>> >>> Ok, so the next step is to track that stdout and see if you
>>>>> can see
>>>>> >>>> >>> what jsvc complains about when it fails to start up the
>>>>> service.
>>>>> >>>> >>>
>>>>> >>>> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
>>>>> >>>> >>> <mi...@solidfire.com> wrote:
>>>>> >>>> >>> > These also look good:
>>>>> >>>> >>> >
>>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>>>>> >>>> >>> > x86_64
>>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c
>>>>> qemu:///system
>>>>> >>>> list
>>>>> >>>> >>> >  Id Name                 State
>>>>> >>>> >>> > ----------------------------------
>>>>> >>>> >>> >
>>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>>>>> >>>> >>> > /var/run/libvirt/libvirt-sock
>>>>> >>>> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
>>>>> >>>> /var/run/libvirt/libvirt-sock
>>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>>>>> >>>> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>>>>> >>>> >>> >
>>>>> >>>> >>> >
>>>>> >>>> >>> >
>>>>> >>>> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>>>>> >>>> >>> > mike.tutkowski@solidfire.com> wrote:
>>>>> >>>> >>> >
>>>>> >>>> >>> >> This is my new agent.properties file (with comments
>>>>> removed...looks
>>>>> >>>> >>> >> decent):
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>>>>> >>>> >>> >>
>>>>> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>>>>> >>>> >>> >> workers=5
>>>>> >>>> >>> >> host=192.168.233.1
>>>>> >>>> >>> >> port=8250
>>>>> >>>> >>> >> cluster=1
>>>>> >>>> >>> >> pod=1
>>>>> >>>> >>> >> zone=1
>>>>> >>>> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>>>>> >>>> >>> >> private.network.device=cloudbr0
>>>>> >>>> >>> >> public.network.device=cloudbr0
>>>>> >>>> >>> >> guest.network.device=cloudbr0
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> Yeah, I was always writing stuff out using the logger. I
>>>>> should
>>>>> >>>> look
>>>>> >>>> >>> into
>>>>> >>>> >>> >> redirecting stdout and stderr.
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> Here were my steps to start and check the process status:
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>> /usr/sbin/service
>>>>> >>>> >>> >> cloudstack-agent start
>>>>> >>>> >>> >>  * Starting CloudStack Agent cloudstack-agent
>>>>> >>>> >>> >>                                                      [ OK ]
>>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef |
>>>>> grep jsvc
>>>>> >>>> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep
>>>>> --color=auto
>>>>> >>>> jsvc
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> Also, this might be of interest:
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>>>>> >>>> >>> >> kvm_intel             137721  0
>>>>> >>>> >>> >> kvm                   415549  1 kvm_intel
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c
>>>>> '(vmx|svm)'
>>>>> >>>> >>> >> /proc/cpuinfo
>>>>> >>>> >>> >> 1
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>>>>> >>>> >>> >> INFO: /dev/kvm exists
>>>>> >>>> >>> >> KVM acceleration can be used
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
>>>>> >>>> /proc/cpuinfo
>>>>> >>>> >>> >> 1
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
>>>>> >>>> shadowsor@gmail.com
>>>>> >>>> >>> >wrote:
>>>>> >>>> >>> >>
>>>>> >>>> >>> >>> So you:
>>>>> >>>> >>> >>>
>>>>> >>>> >>> >>> 1. run that command
>>>>> >>>> >>> >>> 2. get a brand new agent.properties as a result
>>>>> >>>> >>> >>> 3. start the service
>>>>> >>>> >>> >>>
>>>>> >>>> >>> >>> but you don't see it in the process table?
>>>>> >>>> >>> >>>
>>>>> >>>> >>> >>> The agent's STDOUT doesn't go to the agent log, only log4j
>>>>> stuff.
>>>>> >>>> So
>>>>> >>>> >>> >>> if there were an error not printed via logger you'd not
>>>>> see it.
>>>>> >>>>  I'm
>>>>> >>>> >>> >>> not as familiar with the debian/ubuntu stuff off the top
>>>>> of my
>>>>> >>>> head,
>>>>> >>>> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>>>>> >>>> >>> >>>
>>>>> >>>> >>> >>> start() {
>>>>> >>>> >>> >>>     echo -n $"Starting $PROGNAME: "
>>>>> >>>> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
>>>>> >>>> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>>>>> >>>> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>>>>> >>>> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
>>>>> >>>> >>> >>>         RETVAL=$?
>>>>> >>>> >>> >>>         echo
>>>>> >>>> >>> >>>     else
>>>>> >>>> >>> >>>
>>>>> >>>> >>> >>>
>>>>> >>>> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
>>>>> >>>> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
>>>>> >>>> >>> >>>
>>>>> >>>> >>> >>> Out of curiosity, what do you get when you do 'lsmod |
>>>>> grep kvm'
>>>>> >>>> ? I
>>>>> >>>> >>> >>> know you didn't end up using it, but the devcloud-kvm
>>>>> >>>> instructions for
>>>>> >>>> >>> >>> vmware fusion tell you to ensure that your guest has
>>>>> hardware
>>>>> >>>> >>> >>> virtualization passthrough enabled, I'm wondering if it
>>>>> isn't.
>>>>> >>>> >>> >>>
>>>>> >>>> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>>>>> >>>> >>> >>> <mi...@solidfire.com> wrote:
>>>>> >>>> >>> >>> > These results look good:
>>>>> >>>> >>> >>> >
>>>>> >>>> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
>>>>> >>>> 192.168.233.1
>>>>> >>>> >>> -z 1
>>>>> >>>> >>> >>> -p 1
>>>>> >>>> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a
>>>>> >>>> --pubNic=cloudbr0
>>>>> >>>> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>>>>> >>>> >>> >>> > Starting to configure your system:
>>>>> >>>> >>> >>> > Configure Apparmor ...        [OK]
>>>>> >>>> >>> >>> > Configure Network ...         [OK]
>>>>> >>>> >>> >>> > Configure Libvirt ...         [OK]
>>>>> >>>> >>> >>> > Configure Firewall ...        [OK]
>>>>> >>>> >>> >>> > Configure Nfs ...             [OK]
>>>>> >>>> >>> >>> > Configure cloudAgent ...      [OK]
>>>>> >>>> >>> >>> > CloudStack Agent setup is done!
>>>>> >>>> >>> >>> >
>>>>> >>>> >>> >>> > However, these results are the same:
>>>>> >>>> >>> >>> >
>>>>> >>>> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>>>>> >>>> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep
>>>>> >>>> --color=auto
>>>>> >>>> >>> jsvc
>>>>> >>>> >>> >>> >
>>>>> >>>> >>> >>> >
>>>>> >>>> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>>>>> >>>> >>> >>> > mike.tutkowski@solidfire.com> wrote:
>>>>> >>>> >>> >>> >
>>>>> >>>> >>> >>> >> This appears to be the offending method:
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>     public String parseCapabilitiesXML(String capXML) {
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>         if (!_initialized) {
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>             return null;
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>         }
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>         try {
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>             _sp.parse(new InputSource(new
>>>>> >>>> StringReader(capXML)),
>>>>> >>>> >>> this);
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>             return _capXML.toString();
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>         } catch (SAXException se) {
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>             s_logger.warn(se.getMessage());
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>         } catch (IOException ie) {
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>             s_logger.error(ie.getMessage());
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>         }
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>         return null;
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>     }
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >> The logging I do from this method (not shown above),
>>>>> however,
>>>>> >>>> >>> doesn't
>>>>> >>>> >>> >>> seem
>>>>> >>>> >>> >>> >> to end up in agent.log. Not sure why that is.
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >> We invoke this method and I log we're in this method as
>>>>> the
>>>>> >>>> first
>>>>> >>>> >>> >>> thing I
>>>>> >>>> >>> >>> >> do, but it doesn't show up in agent.log.
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >> The last message in agent.log is a line saying we are
>>>>> right
>>>>> >>>> before
>>>>> >>>> >>> the
>>>>> >>>> >>> >>> >> call to this method.
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>> >>
>>>>> >>>> >>> >>>
>>>>> >>>> >>> >>
>>>>> >>>> >>> >>
>>>>> >>>> >>> >>
>>>>> >>>> >>> >> --
>>>>> >>>> >>> >> *Mike Tutkowski*
>>>>> >>>> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >>>> >>> >> e: mike.tutkowski@solidfire.com
>>>>> >>>> >>> >> o: 303.746.7302
>>>>> >>>> >>> >> Advancing the way the world uses the cloud<
>>>>> >>>> >>> http://solidfire.com/solution/overview/?video=play>
>>>>> >>>> >>> >> *™*
>>>>> >>>> >>> >>
>>>>> >>>> >>> >
>>>>> >>>> >>> >
>>>>> >>>> >>> >
>>>>> >>>> >>> > --
>>>>> >>>> >>> > *Mike Tutkowski*
>>>>> >>>> >>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >>>> >>> > e: mike.tutkowski@solidfire.com
>>>>> >>>> >>> > o: 303.746.7302
>>>>> >>>> >>> > Advancing the way the world uses the
>>>>> >>>> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> >>>> >>> > *™*
>>>>> >>>> >>>
>>>>> >>>> >>
>>>>> >>>> >>
>>>>> >>>> >>
>>>>> >>>> >> --
>>>>> >>>> >> *Mike Tutkowski*
>>>>> >>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >>>> >> e: mike.tutkowski@solidfire.com
>>>>> >>>> >> o: 303.746.7302
>>>>> >>>> >> Advancing the way the world uses the cloud<
>>>>> >>>> http://solidfire.com/solution/overview/?video=play>
>>>>> >>>> >> *™*
>>>>> >>>> >>
>>>>> >>>> >
>>>>> >>>> >
>>>>> >>>> >
>>>>> >>>> > --
>>>>> >>>> > *Mike Tutkowski*
>>>>> >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >>>> > e: mike.tutkowski@solidfire.com
>>>>> >>>> > o: 303.746.7302
>>>>> >>>> > Advancing the way the world uses the
>>>>> >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> >>>> > *™*
>>>>> >>>>
>>>>> >>>
>>>>> >>>
>>>>> >>>
>>>>> >>> --
>>>>> >>> *Mike Tutkowski*
>>>>> >>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >>> e: mike.tutkowski@solidfire.com
>>>>> >>> o: 303.746.7302
>>>>> >>> Advancing the way the world uses the cloud<
>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>> >>> *™*
>>>>> >>>
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >> --
>>>>> >> *Mike Tutkowski*
>>>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >> e: mike.tutkowski@solidfire.com
>>>>> >> o: 303.746.7302
>>>>> >> Advancing the way the world uses the
>>>>> >> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> >> *™*
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Mike Tutkowski*
>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>> e: mike.tutkowski@solidfire.com
>>>> o: 303.746.7302
>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>> *™*
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
By simply switching to 4.2, will CS use the proper version of Libvirt or is
there more I need to do since I've already run 4.3 on this Ubuntu install?

Thanks


On Wed, Sep 25, 2013 at 6:07 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> It's been a bit rough getting this up and running, but at least I've been
> learning about how CloudStack works on KVM, so that's really good.
>
>
> On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> I mean switch over to 4.2 from master. :)
>>
>>
>> On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> I can switch my branch over to master. I'm afraid master is not working
>>> with Libvirt on Ubuntu, as well.
>>>
>>>
>>> On Wed, Sep 25, 2013 at 5:55 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>
>>>> It's harder still that you're trying to use master. I know 4.2 works
>>>> on ubuntu, but master is a minefield sometimes. Maybe that's not the
>>>> problem, but I do see emails going back and forth about libvirt/jna
>>>> versions, just need to read them in detail.
>>>>
>>>>  It's a shame that you haven't gotten a working config up yet prior to
>>>> development work (say a 4.2 that we know works), because we don't have
>>>> any clues as to whether it's your setup or master.
>>>>
>>>> On Wed, Sep 25, 2013 at 5:49 PM, Marcus Sorensen <sh...@gmail.com>
>>>> wrote:
>>>> > ok, just a guess. I'm assuming it's still this:
>>>> >
>>>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
>>>> >
>>>> > On Wed, Sep 25, 2013 at 5:48 PM, Mike Tutkowski
>>>> > <mi...@solidfire.com> wrote:
>>>> >> mtutkowski@ubuntu:~$ sudo apt-get install libjna-java
>>>> >> Reading package lists... Done
>>>> >> Building dependency tree
>>>> >> Reading state information... Done
>>>> >> libjna-java is already the newest version.
>>>> >> libjna-java set to manually installed.
>>>> >> 0 upgraded, 0 newly installed, 0 to remove and 468 not upgraded.
>>>> >>
>>>> >>
>>>> >> On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski <
>>>> >> mike.tutkowski@solidfire.com> wrote:
>>>> >>
>>>> >>> Was there a step in the docs I may have missed where I was to
>>>> install
>>>> >>> them? I don't recall installing them, but there are several steps
>>>> and I
>>>> >>> might have forgotten that I did install them, too.
>>>> >>>
>>>> >>> I can check.
>>>> >>>
>>>> >>>
>>>> >>> On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <
>>>> shadowsor@gmail.com>wrote:
>>>> >>>
>>>> >>>> are you missing the jna packages?
>>>> >>>>
>>>> >>>> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
>>>> >>>> <mi...@solidfire.com> wrote:
>>>> >>>> > I basically just leveraged the code you provided to redirect the
>>>> output
>>>> >>>> on
>>>> >>>> > Ubuntu.
>>>> >>>> >
>>>> >>>> > Here is the standard err:
>>>> >>>> >
>>>> >>>> > log4j:WARN No appenders could be found for logger
>>>> >>>> > (org.apache.commons.httpclient.params.DefaultHttpParams).
>>>> >>>> > log4j:WARN Please initialize the log4j system properly.
>>>> >>>> > log4j:WARN See
>>>> http://logging.apache.org/log4j/1.2/faq.html#noconfig for
>>>> >>>> > more info.
>>>> >>>> > java.lang.reflect.InvocationTargetException
>>>> >>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> >>>> > at
>>>> >>>> >
>>>> >>>>
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>> >>>> > at
>>>> >>>> >
>>>> >>>>
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> >>>> > at java.lang.reflect.Method.invoke(Method.java:606)
>>>> >>>> > at
>>>> >>>> >
>>>> >>>>
>>>> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
>>>> >>>> > Caused by: java.lang.NoSuchMethodError:
>>>> com.sun.jna.Native.free(J)V
>>>> >>>> > at org.libvirt.Library.free(Unknown Source)
>>>> >>>> > at org.libvirt.Connect.getCapabilities(Unknown Source)
>>>> >>>> > at
>>>> >>>> >
>>>> >>>>
>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
>>>> >>>> > at
>>>> >>>> >
>>>> >>>>
>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
>>>> >>>> > at com.cloud.agent.Agent.<init>(Agent.java:168)
>>>> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
>>>> >>>> > at
>>>> >>>>
>>>> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
>>>> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
>>>> >>>> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
>>>> >>>> > ... 5 more
>>>> >>>> > Cannot start daemon
>>>> >>>> > Service exit with a return value of 5
>>>> >>>> >
>>>> >>>> >
>>>> >>>> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
>>>> >>>> > mike.tutkowski@solidfire.com> wrote:
>>>> >>>> >
>>>> >>>> >> Sounds good.
>>>> >>>> >>
>>>> >>>> >> Thanks, Marcus! :)
>>>> >>>> >>
>>>> >>>> >>
>>>> >>>> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <
>>>> shadowsor@gmail.com
>>>> >>>> >wrote:
>>>> >>>> >>
>>>> >>>> >>> Ok, so the next step is to track that stdout and see if you
>>>> can see
>>>> >>>> >>> what jsvc complains about when it fails to start up the
>>>> service.
>>>> >>>> >>>
>>>> >>>> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
>>>> >>>> >>> <mi...@solidfire.com> wrote:
>>>> >>>> >>> > These also look good:
>>>> >>>> >>> >
>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>>>> >>>> >>> > x86_64
>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c
>>>> qemu:///system
>>>> >>>> list
>>>> >>>> >>> >  Id Name                 State
>>>> >>>> >>> > ----------------------------------
>>>> >>>> >>> >
>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>>>> >>>> >>> > /var/run/libvirt/libvirt-sock
>>>> >>>> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
>>>> >>>> /var/run/libvirt/libvirt-sock
>>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>>>> >>>> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>>>> >>>> >>> >
>>>> >>>> >>> >
>>>> >>>> >>> >
>>>> >>>> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>>>> >>>> >>> > mike.tutkowski@solidfire.com> wrote:
>>>> >>>> >>> >
>>>> >>>> >>> >> This is my new agent.properties file (with comments
>>>> removed...looks
>>>> >>>> >>> >> decent):
>>>> >>>> >>> >>
>>>> >>>> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>>>> >>>> >>> >>
>>>> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>>>> >>>> >>> >> workers=5
>>>> >>>> >>> >> host=192.168.233.1
>>>> >>>> >>> >> port=8250
>>>> >>>> >>> >> cluster=1
>>>> >>>> >>> >> pod=1
>>>> >>>> >>> >> zone=1
>>>> >>>> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>>>> >>>> >>> >> private.network.device=cloudbr0
>>>> >>>> >>> >> public.network.device=cloudbr0
>>>> >>>> >>> >> guest.network.device=cloudbr0
>>>> >>>> >>> >>
>>>> >>>> >>> >> Yeah, I was always writing stuff out using the logger. I
>>>> should
>>>> >>>> look
>>>> >>>> >>> into
>>>> >>>> >>> >> redirecting stdout and stderr.
>>>> >>>> >>> >>
>>>> >>>> >>> >> Here were my steps to start and check the process status:
>>>> >>>> >>> >>
>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>> /usr/sbin/service
>>>> >>>> >>> >> cloudstack-agent start
>>>> >>>> >>> >>  * Starting CloudStack Agent cloudstack-agent
>>>> >>>> >>> >>                                                      [ OK ]
>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef |
>>>> grep jsvc
>>>> >>>> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep
>>>> --color=auto
>>>> >>>> jsvc
>>>> >>>> >>> >>
>>>> >>>> >>> >> Also, this might be of interest:
>>>> >>>> >>> >>
>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>>>> >>>> >>> >> kvm_intel             137721  0
>>>> >>>> >>> >> kvm                   415549  1 kvm_intel
>>>> >>>> >>> >>
>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c
>>>> '(vmx|svm)'
>>>> >>>> >>> >> /proc/cpuinfo
>>>> >>>> >>> >> 1
>>>> >>>> >>> >>
>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>>>> >>>> >>> >> INFO: /dev/kvm exists
>>>> >>>> >>> >> KVM acceleration can be used
>>>> >>>> >>> >>
>>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
>>>> >>>> /proc/cpuinfo
>>>> >>>> >>> >> 1
>>>> >>>> >>> >>
>>>> >>>> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
>>>> >>>> shadowsor@gmail.com
>>>> >>>> >>> >wrote:
>>>> >>>> >>> >>
>>>> >>>> >>> >>> So you:
>>>> >>>> >>> >>>
>>>> >>>> >>> >>> 1. run that command
>>>> >>>> >>> >>> 2. get a brand new agent.properties as a result
>>>> >>>> >>> >>> 3. start the service
>>>> >>>> >>> >>>
>>>> >>>> >>> >>> but you don't see it in the process table?
>>>> >>>> >>> >>>
>>>> >>>> >>> >>> The agent's STDOUT doesn't go to the agent log, only log4j
>>>> stuff.
>>>> >>>> So
>>>> >>>> >>> >>> if there were an error not printed via logger you'd not
>>>> see it.
>>>> >>>>  I'm
>>>> >>>> >>> >>> not as familiar with the debian/ubuntu stuff off the top
>>>> of my
>>>> >>>> head,
>>>> >>>> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>>>> >>>> >>> >>>
>>>> >>>> >>> >>> start() {
>>>> >>>> >>> >>>     echo -n $"Starting $PROGNAME: "
>>>> >>>> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
>>>> >>>> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>>>> >>>> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>>>> >>>> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
>>>> >>>> >>> >>>         RETVAL=$?
>>>> >>>> >>> >>>         echo
>>>> >>>> >>> >>>     else
>>>> >>>> >>> >>>
>>>> >>>> >>> >>>
>>>> >>>> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
>>>> >>>> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
>>>> >>>> >>> >>>
>>>> >>>> >>> >>> Out of curiosity, what do you get when you do 'lsmod |
>>>> grep kvm'
>>>> >>>> ? I
>>>> >>>> >>> >>> know you didn't end up using it, but the devcloud-kvm
>>>> >>>> instructions for
>>>> >>>> >>> >>> vmware fusion tell you to ensure that your guest has
>>>> hardware
>>>> >>>> >>> >>> virtualization passthrough enabled, I'm wondering if it
>>>> isn't.
>>>> >>>> >>> >>>
>>>> >>>> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>>>> >>>> >>> >>> <mi...@solidfire.com> wrote:
>>>> >>>> >>> >>> > These results look good:
>>>> >>>> >>> >>> >
>>>> >>>> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
>>>> >>>> 192.168.233.1
>>>> >>>> >>> -z 1
>>>> >>>> >>> >>> -p 1
>>>> >>>> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a
>>>> >>>> --pubNic=cloudbr0
>>>> >>>> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>>>> >>>> >>> >>> > Starting to configure your system:
>>>> >>>> >>> >>> > Configure Apparmor ...        [OK]
>>>> >>>> >>> >>> > Configure Network ...         [OK]
>>>> >>>> >>> >>> > Configure Libvirt ...         [OK]
>>>> >>>> >>> >>> > Configure Firewall ...        [OK]
>>>> >>>> >>> >>> > Configure Nfs ...             [OK]
>>>> >>>> >>> >>> > Configure cloudAgent ...      [OK]
>>>> >>>> >>> >>> > CloudStack Agent setup is done!
>>>> >>>> >>> >>> >
>>>> >>>> >>> >>> > However, these results are the same:
>>>> >>>> >>> >>> >
>>>> >>>> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>>>> >>>> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep
>>>> >>>> --color=auto
>>>> >>>> >>> jsvc
>>>> >>>> >>> >>> >
>>>> >>>> >>> >>> >
>>>> >>>> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>>>> >>>> >>> >>> > mike.tutkowski@solidfire.com> wrote:
>>>> >>>> >>> >>> >
>>>> >>>> >>> >>> >> This appears to be the offending method:
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>     public String parseCapabilitiesXML(String capXML) {
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>         if (!_initialized) {
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>             return null;
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>         }
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>         try {
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>             _sp.parse(new InputSource(new
>>>> >>>> StringReader(capXML)),
>>>> >>>> >>> this);
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>             return _capXML.toString();
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>         } catch (SAXException se) {
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>             s_logger.warn(se.getMessage());
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>         } catch (IOException ie) {
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>             s_logger.error(ie.getMessage());
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>         }
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>         return null;
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>     }
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >> The logging I do from this method (not shown above),
>>>> however,
>>>> >>>> >>> doesn't
>>>> >>>> >>> >>> seem
>>>> >>>> >>> >>> >> to end up in agent.log. Not sure why that is.
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >> We invoke this method and I log we're in this method as
>>>> the
>>>> >>>> first
>>>> >>>> >>> >>> thing I
>>>> >>>> >>> >>> >> do, but it doesn't show up in agent.log.
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >> The last message in agent.log is a line saying we are
>>>> right
>>>> >>>> before
>>>> >>>> >>> the
>>>> >>>> >>> >>> >> call to this method.
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>> >>
>>>> >>>> >>> >>>
>>>> >>>> >>> >>
>>>> >>>> >>> >>
>>>> >>>> >>> >>
>>>> >>>> >>> >> --
>>>> >>>> >>> >> *Mike Tutkowski*
>>>> >>>> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>>> >>>> >>> >> e: mike.tutkowski@solidfire.com
>>>> >>>> >>> >> o: 303.746.7302
>>>> >>>> >>> >> Advancing the way the world uses the cloud<
>>>> >>>> >>> http://solidfire.com/solution/overview/?video=play>
>>>> >>>> >>> >> *™*
>>>> >>>> >>> >>
>>>> >>>> >>> >
>>>> >>>> >>> >
>>>> >>>> >>> >
>>>> >>>> >>> > --
>>>> >>>> >>> > *Mike Tutkowski*
>>>> >>>> >>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >>>> >>> > e: mike.tutkowski@solidfire.com
>>>> >>>> >>> > o: 303.746.7302
>>>> >>>> >>> > Advancing the way the world uses the
>>>> >>>> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> >>>> >>> > *™*
>>>> >>>> >>>
>>>> >>>> >>
>>>> >>>> >>
>>>> >>>> >>
>>>> >>>> >> --
>>>> >>>> >> *Mike Tutkowski*
>>>> >>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>>> >>>> >> e: mike.tutkowski@solidfire.com
>>>> >>>> >> o: 303.746.7302
>>>> >>>> >> Advancing the way the world uses the cloud<
>>>> >>>> http://solidfire.com/solution/overview/?video=play>
>>>> >>>> >> *™*
>>>> >>>> >>
>>>> >>>> >
>>>> >>>> >
>>>> >>>> >
>>>> >>>> > --
>>>> >>>> > *Mike Tutkowski*
>>>> >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >>>> > e: mike.tutkowski@solidfire.com
>>>> >>>> > o: 303.746.7302
>>>> >>>> > Advancing the way the world uses the
>>>> >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> >>>> > *™*
>>>> >>>>
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>> --
>>>> >>> *Mike Tutkowski*
>>>> >>> *Senior CloudStack Developer, SolidFire Inc.*
>>>> >>> e: mike.tutkowski@solidfire.com
>>>> >>> o: 303.746.7302
>>>> >>> Advancing the way the world uses the cloud<
>>>> http://solidfire.com/solution/overview/?video=play>
>>>> >>> *™*
>>>> >>>
>>>> >>
>>>> >>
>>>> >>
>>>> >> --
>>>> >> *Mike Tutkowski*
>>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> e: mike.tutkowski@solidfire.com
>>>> >> o: 303.746.7302
>>>> >> Advancing the way the world uses the
>>>> >> cloud<http://solidfire.com/solution/overview/?video=play>
>>>> >> *™*
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
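
A minimal sketch of the same stdout/stderr redirection for an Ubuntu-style init
script, using the jsvc flags shown in the CentOS snippet above. The CLASSPATH,
PIDFILE, LOGDIR and CLASS values are placeholders, not the actual packaged
script; check /etc/init.d/cloudstack-agent on the host for the real variable
names and paths.

# Sketch only: send the agent's stdout and stderr to files when launching
# via jsvc, so startup failures that never reach log4j are still captured.
JSVC=/usr/bin/jsvc                   # assumed location of the jsvc binary
LOGDIR=/var/log/cloudstack/agent     # assumed log directory; adjust as needed

$JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
    -errfile "$LOGDIR/cloudstack-agent.err" \
    -outfile "$LOGDIR/cloudstack-agent.out" \
    $CLASS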

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
It's been a bit rough getting this up and running, but at least I've been
learning about how CloudStack works on KVM, so that's really good.


On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> I mean switch over to 4.2 from master. :)
>
>
> On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> I can switch my branch over to master. I'm afraid master is not working
>> with Libvirt on Ubuntu, as well.
>>
>>
>> On Wed, Sep 25, 2013 at 5:55 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> It's harder still that you're trying to use master. I know 4.2 works
>>> on ubuntu, but master is a minefield sometimes. Maybe that's not the
>>> problem, but I do see emails going back and forth about libvirt/jna
>>> versions, just need to read them in detail.
>>>
>>>  It's a shame that you haven't gotten a working config up yet prior to
>>> development work (say a 4.2 that we know works), because we don't have
>>> any clues as to whether it's your setup or master.
>>>
>>> On Wed, Sep 25, 2013 at 5:49 PM, Marcus Sorensen <sh...@gmail.com>
>>> wrote:
>>> > ok, just a guess. I'm assuming it's still this:
>>> >
>>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
>>> >
>>> > On Wed, Sep 25, 2013 at 5:48 PM, Mike Tutkowski
>>> > <mi...@solidfire.com> wrote:
>>> >> mtutkowski@ubuntu:~$ sudo apt-get install libjna-java
>>> >> Reading package lists... Done
>>> >> Building dependency tree
>>> >> Reading state information... Done
>>> >> libjna-java is already the newest version.
>>> >> libjna-java set to manually installed.
>>> >> 0 upgraded, 0 newly installed, 0 to remove and 468 not upgraded.
>>> >>
>>> >>
>>> >> On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski <
>>> >> mike.tutkowski@solidfire.com> wrote:
>>> >>
>>> >>> Was there a step in the docs I may have missed where I was to install
>>> >>> them? I don't recall installing them, but there are several steps
>>> and I
>>> >>> might have forgotten that I did install them, too.
>>> >>>
>>> >>> I can check.
>>> >>>
>>> >>>
>>> >>> On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <
>>> shadowsor@gmail.com>wrote:
>>> >>>
>>> >>>> are you missing the jna packages?
>>> >>>>
>>> >>>> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
>>> >>>> <mi...@solidfire.com> wrote:
>>> >>>> > I basically just leveraged the code you provided to redirect the
>>> output
>>> >>>> on
>>> >>>> > Ubuntu.
>>> >>>> >
>>> >>>> > Here is the standard err:
>>> >>>> >
>>> >>>> > log4j:WARN No appenders could be found for logger
>>> >>>> > (org.apache.commons.httpclient.params.DefaultHttpParams).
>>> >>>> > log4j:WARN Please initialize the log4j system properly.
>>> >>>> > log4j:WARN See
>>> http://logging.apache.org/log4j/1.2/faq.html#noconfig for
>>> >>>> > more info.
>>> >>>> > java.lang.reflect.InvocationTargetException
>>> >>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >>>> > at
>>> >>>> >
>>> >>>>
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> >>>> > at
>>> >>>> >
>>> >>>>
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> >>>> > at java.lang.reflect.Method.invoke(Method.java:606)
>>> >>>> > at
>>> >>>> >
>>> >>>>
>>> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
>>> >>>> > Caused by: java.lang.NoSuchMethodError:
>>> com.sun.jna.Native.free(J)V
>>> >>>> > at org.libvirt.Library.free(Unknown Source)
>>> >>>> > at org.libvirt.Connect.getCapabilities(Unknown Source)
>>> >>>> > at
>>> >>>> >
>>> >>>>
>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
>>> >>>> > at
>>> >>>> >
>>> >>>>
>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
>>> >>>> > at com.cloud.agent.Agent.<init>(Agent.java:168)
>>> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
>>> >>>> > at
>>> >>>>
>>> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
>>> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
>>> >>>> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
>>> >>>> > ... 5 more
>>> >>>> > Cannot start daemon
>>> >>>> > Service exit with a return value of 5
>>> >>>> >
>>> >>>> >
>>> >>>> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
>>> >>>> > mike.tutkowski@solidfire.com> wrote:
>>> >>>> >
>>> >>>> >> Sounds good.
>>> >>>> >>
>>> >>>> >> Thanks, Marcus! :)
>>> >>>> >>
>>> >>>> >>
>>> >>>> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <
>>> shadowsor@gmail.com
>>> >>>> >wrote:
>>> >>>> >>
>>> >>>> >>> Ok, so the next step is to track that stdout and see if you can
>>> see
>>> >>>> >>> what jsvc complains about when it fails to start up the service.
>>> >>>> >>>
>>> >>>> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
>>> >>>> >>> <mi...@solidfire.com> wrote:
>>> >>>> >>> > These also look good:
>>> >>>> >>> >
>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>>> >>>> >>> > x86_64
>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c
>>> qemu:///system
>>> >>>> list
>>> >>>> >>> >  Id Name                 State
>>> >>>> >>> > ----------------------------------
>>> >>>> >>> >
>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>>> >>>> >>> > /var/run/libvirt/libvirt-sock
>>> >>>> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
>>> >>>> /var/run/libvirt/libvirt-sock
>>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>>> >>>> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>>> >>>> >>> >
>>> >>>> >>> >
>>> >>>> >>> >
>>> >>>> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>>> >>>> >>> > mike.tutkowski@solidfire.com> wrote:
>>> >>>> >>> >
>>> >>>> >>> >> This is my new agent.properties file (with comments
>>> removed...looks
>>> >>>> >>> >> decent):
>>> >>>> >>> >>
>>> >>>> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>>> >>>> >>> >>
>>> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>>> >>>> >>> >> workers=5
>>> >>>> >>> >> host=192.168.233.1
>>> >>>> >>> >> port=8250
>>> >>>> >>> >> cluster=1
>>> >>>> >>> >> pod=1
>>> >>>> >>> >> zone=1
>>> >>>> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>>> >>>> >>> >> private.network.device=cloudbr0
>>> >>>> >>> >> public.network.device=cloudbr0
>>> >>>> >>> >> guest.network.device=cloudbr0
>>> >>>> >>> >>
>>> >>>> >>> >> Yeah, I was always writing stuff out using the logger. I
>>> should
>>> >>>> look
>>> >>>> >>> into
>>> >>>> >>> >> redirecting stdout and stderr.
>>> >>>> >>> >>
>>> >>>> >>> >> Here were my steps to start and check the process status:
>>> >>>> >>> >>
>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>> /usr/sbin/service
>>> >>>> >>> >> cloudstack-agent start
>>> >>>> >>> >>  * Starting CloudStack Agent cloudstack-agent
>>> >>>> >>> >>                                                      [ OK ]
>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep
>>> jsvc
>>> >>>> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep
>>> --color=auto
>>> >>>> jsvc
>>> >>>> >>> >>
>>> >>>> >>> >> Also, this might be of interest:
>>> >>>> >>> >>
>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>>> >>>> >>> >> kvm_intel             137721  0
>>> >>>> >>> >> kvm                   415549  1 kvm_intel
>>> >>>> >>> >>
>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c
>>> '(vmx|svm)'
>>> >>>> >>> >> /proc/cpuinfo
>>> >>>> >>> >> 1
>>> >>>> >>> >>
>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>>> >>>> >>> >> INFO: /dev/kvm exists
>>> >>>> >>> >> KVM acceleration can be used
>>> >>>> >>> >>
>>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
>>> >>>> /proc/cpuinfo
>>> >>>> >>> >> 1
>>> >>>> >>> >>
>>> >>>> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
>>> >>>> shadowsor@gmail.com
>>> >>>> >>> >wrote:
>>> >>>> >>> >>
>>> >>>> >>> >>> So you:
>>> >>>> >>> >>>
>>> >>>> >>> >>> 1. run that command
>>> >>>> >>> >>> 2. get a brand new agent.properties as a result
>>> >>>> >>> >>> 3. start the service
>>> >>>> >>> >>>
>>> >>>> >>> >>> but you don't see it in the process table?
>>> >>>> >>> >>>
>>> >>>> >>> >>> The agent's STDOUT doesn't go to the agent log, only log4j
>>> stuff.
>>> >>>> So
>>> >>>> >>> >>> if there were an error not printed via logger you'd not see
>>> it.
>>> >>>>  I'm
>>> >>>> >>> >>> not as familiar with the debian/ubuntu stuff off the top of
>>> my
>>> >>>> head,
>>> >>>> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>>> >>>> >>> >>>
>>> >>>> >>> >>> start() {
>>> >>>> >>> >>>     echo -n $"Starting $PROGNAME: "
>>> >>>> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
>>> >>>> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>>> >>>> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>>> >>>> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
>>> >>>> >>> >>>         RETVAL=$?
>>> >>>> >>> >>>         echo
>>> >>>> >>> >>>     else
>>> >>>> >>> >>>
>>> >>>> >>> >>>
>>> >>>> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
>>> >>>> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
>>> >>>> >>> >>>
>>> >>>> >>> >>> Out of curiosity, what do you get when you do 'lsmod | grep
>>> kvm'
>>> >>>> ? I
>>> >>>> >>> >>> know you didn't end up using it, but the devcloud-kvm
>>> >>>> instructions for
>>> >>>> >>> >>> vmware fusion tell you to ensure that your guest has
>>> hardware
>>> >>>> >>> >>> virtualization passthrough enabled, I'm wondering if it
>>> isn't.
>>> >>>> >>> >>>
>>> >>>> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>>> >>>> >>> >>> <mi...@solidfire.com> wrote:
>>> >>>> >>> >>> > These results look good:
>>> >>>> >>> >>> >
>>> >>>> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
>>> >>>> 192.168.233.1
>>> >>>> >>> -z 1
>>> >>>> >>> >>> -p 1
>>> >>>> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a
>>> >>>> --pubNic=cloudbr0
>>> >>>> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>>> >>>> >>> >>> > Starting to configure your system:
>>> >>>> >>> >>> > Configure Apparmor ...        [OK]
>>> >>>> >>> >>> > Configure Network ...         [OK]
>>> >>>> >>> >>> > Configure Libvirt ...         [OK]
>>> >>>> >>> >>> > Configure Firewall ...        [OK]
>>> >>>> >>> >>> > Configure Nfs ...             [OK]
>>> >>>> >>> >>> > Configure cloudAgent ...      [OK]
>>> >>>> >>> >>> > CloudStack Agent setup is done!
>>> >>>> >>> >>> >
>>> >>>> >>> >>> > However, these results are the same:
>>> >>>> >>> >>> >
>>> >>>> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>>> >>>> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep
>>> >>>> --color=auto
>>> >>>> >>> jsvc
>>> >>>> >>> >>> >
>>> >>>> >>> >>> >
>>> >>>> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>>> >>>> >>> >>> > mike.tutkowski@solidfire.com> wrote:
>>> >>>> >>> >>> >
>>> >>>> >>> >>> >> This appears to be the offending method:
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>     public String parseCapabilitiesXML(String capXML) {
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>         if (!_initialized) {
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>             return null;
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>         }
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>         try {
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>             _sp.parse(new InputSource(new
>>> >>>> StringReader(capXML)),
>>> >>>> >>> this);
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>             return _capXML.toString();
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>         } catch (SAXException se) {
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>             s_logger.warn(se.getMessage());
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>         } catch (IOException ie) {
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>             s_logger.error(ie.getMessage());
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>         }
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>         return null;
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>     }
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >> The logging I do from this method (not shown above),
>>> however,
>>> >>>> >>> doesn't
>>> >>>> >>> >>> seem
>>> >>>> >>> >>> >> to end up in agent.log. Not sure why that is.
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >> We invoke this method and I log we're in this method as
>>> the
>>> >>>> first
>>> >>>> >>> >>> thing I
>>> >>>> >>> >>> >> do, but it doesn't show up in agent.log.
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >> The last message in agent.log is a line saying we are
>>> right
>>> >>>> before
>>> >>>> >>> the
>>> >>>> >>> >>> >> call to this method.
>>> >>>> >>> >>> >>
>>> >>>> >>> >>> >>
>>> >>>> >>> >>>
>>> >>>> >>> >>
>>> >>>> >>> >>
>>> >>>> >>> >>
>>> >>>> >>> >> --
>>> >>>> >>> >> *Mike Tutkowski*
>>> >>>> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>> >>> >> e: mike.tutkowski@solidfire.com
>>> >>>> >>> >> o: 303.746.7302
>>> >>>> >>> >> Advancing the way the world uses the cloud<
>>> >>>> >>> http://solidfire.com/solution/overview/?video=play>
>>> >>>> >>> >> *™*
>>> >>>> >>> >>
>>> >>>> >>> >
>>> >>>> >>> >
>>> >>>> >>> >
>>> >>>> >>> > --
>>> >>>> >>> > *Mike Tutkowski*
>>> >>>> >>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>> >>> > e: mike.tutkowski@solidfire.com
>>> >>>> >>> > o: 303.746.7302
>>> >>>> >>> > Advancing the way the world uses the
>>> >>>> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >>>> >>> > *™*
>>> >>>> >>>
>>> >>>> >>
>>> >>>> >>
>>> >>>> >>
>>> >>>> >> --
>>> >>>> >> *Mike Tutkowski*
>>> >>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>> >> e: mike.tutkowski@solidfire.com
>>> >>>> >> o: 303.746.7302
>>> >>>> >> Advancing the way the world uses the cloud<
>>> >>>> http://solidfire.com/solution/overview/?video=play>
>>> >>>> >> *™*
>>> >>>> >>
>>> >>>> >
>>> >>>> >
>>> >>>> >
>>> >>>> > --
>>> >>>> > *Mike Tutkowski*
>>> >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>> > e: mike.tutkowski@solidfire.com
>>> >>>> > o: 303.746.7302
>>> >>>> > Advancing the way the world uses the
>>> >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >>>> > *™*
>>> >>>>
>>> >>>
>>> >>>
>>> >>>
>>> >>> --
>>> >>> *Mike Tutkowski*
>>> >>> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>> e: mike.tutkowski@solidfire.com
>>> >>> o: 303.746.7302
>>> >>> Advancing the way the world uses the cloud<
>>> http://solidfire.com/solution/overview/?video=play>
>>> >>> *™*
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> *Mike Tutkowski*
>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>> >> e: mike.tutkowski@solidfire.com
>>> >> o: 303.746.7302
>>> >> Advancing the way the world uses the
>>> >> cloud<http://solidfire.com/solution/overview/?video=play>
>>> >> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
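
The "java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V" in the trace
above usually means the libvirt Java binding resolved a jna.jar that does not
provide that method, i.e. a JNA version mismatch on the agent's classpath. A
quick way to see which JNA jars are installed and what versions they report;
the directories below are assumptions and may not match a given Ubuntu box:

# Packages that ship JNA or the libvirt Java bindings.
dpkg -l | egrep 'jna|libvirt-java'

# Every jna jar that could land on the agent's classpath (paths are guesses).
find /usr/share/java /usr/share/cloudstack-agent -name 'jna*.jar' 2>/dev/null

# Version recorded in each jar's manifest.
for jar in /usr/share/java/jna*.jar; do
    echo "== $jar"
    unzip -p "$jar" META-INF/MANIFEST.MF | grep -i version
done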

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
I mean switch over to 4.2 from master. :)


On Wed, Sep 25, 2013 at 6:03 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> I can switch my branch over to master. I'm afraid master is not working
> with Libvirt on Ubuntu, as well.
>
>
> On Wed, Sep 25, 2013 at 5:55 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> It's harder still that you're trying to use master. I know 4.2 works
>> on ubuntu, but master is a minefield sometimes. Maybe that's not the
>> problem, but I do see emails going back and forth about libvirt/jna
>> versions, just need to read them in detail.
>>
>>  It's a shame that you haven't gotten a working config up yet prior to
>> development work (say a 4.2 that we know works), because we don't have
>> any clues as to whether it's your setup or master.
>>
>> On Wed, Sep 25, 2013 at 5:49 PM, Marcus Sorensen <sh...@gmail.com>
>> wrote:
>> > ok, just a guess. I'm assuming it's still this:
>> >
>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
>> >
>> > On Wed, Sep 25, 2013 at 5:48 PM, Mike Tutkowski
>> > <mi...@solidfire.com> wrote:
>> >> mtutkowski@ubuntu:~$ sudo apt-get install libjna-java
>> >> Reading package lists... Done
>> >> Building dependency tree
>> >> Reading state information... Done
>> >> libjna-java is already the newest version.
>> >> libjna-java set to manually installed.
>> >> 0 upgraded, 0 newly installed, 0 to remove and 468 not upgraded.
>> >>
>> >>
>> >> On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski <
>> >> mike.tutkowski@solidfire.com> wrote:
>> >>
>> >>> Was there a step in the docs I may have missed where I was to install
>> >>> them? I don't recall installing them, but there are several steps and
>> I
>> >>> might have forgotten that I did install them, too.
>> >>>
>> >>> I can check.
>> >>>
>> >>>
>> >>> On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <shadowsor@gmail.com
>> >wrote:
>> >>>
>> >>>> are you missing the jna packages?
>> >>>>
>> >>>> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
>> >>>> <mi...@solidfire.com> wrote:
>> >>>> > I basically just leveraged the code you provided to redirect the
>> output
>> >>>> on
>> >>>> > Ubuntu.
>> >>>> >
>> >>>> > Here is the standard err:
>> >>>> >
>> >>>> > log4j:WARN No appenders could be found for logger
>> >>>> > (org.apache.commons.httpclient.params.DefaultHttpParams).
>> >>>> > log4j:WARN Please initialize the log4j system properly.
>> >>>> > log4j:WARN See
>> http://logging.apache.org/log4j/1.2/faq.html#noconfig for
>> >>>> > more info.
>> >>>> > java.lang.reflect.InvocationTargetException
>> >>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>>> > at
>> >>>> >
>> >>>>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> >>>> > at
>> >>>> >
>> >>>>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >>>> > at java.lang.reflect.Method.invoke(Method.java:606)
>> >>>> > at
>> >>>> >
>> >>>>
>> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
>> >>>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
>> >>>> > at org.libvirt.Library.free(Unknown Source)
>> >>>> > at org.libvirt.Connect.getCapabilities(Unknown Source)
>> >>>> > at
>> >>>> >
>> >>>>
>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
>> >>>> > at
>> >>>> >
>> >>>>
>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
>> >>>> > at com.cloud.agent.Agent.<init>(Agent.java:168)
>> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
>> >>>> > at
>> >>>>
>> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
>> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
>> >>>> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
>> >>>> > ... 5 more
>> >>>> > Cannot start daemon
>> >>>> > Service exit with a return value of 5
>> >>>> >
>> >>>> >
>> >>>> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
>> >>>> > mike.tutkowski@solidfire.com> wrote:
>> >>>> >
>> >>>> >> Sounds good.
>> >>>> >>
>> >>>> >> Thanks, Marcus! :)
>> >>>> >>
>> >>>> >>
>> >>>> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> >>>> >wrote:
>> >>>> >>
>> >>>> >>> Ok, so the next step is to track that stdout and see if you can
>> see
>> >>>> >>> what jsvc complains about when it fails to start up the service.
>> >>>> >>>
>> >>>> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
>> >>>> >>> <mi...@solidfire.com> wrote:
>> >>>> >>> > These also look good:
>> >>>> >>> >
>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>> >>>> >>> > x86_64
>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c
>> qemu:///system
>> >>>> list
>> >>>> >>> >  Id Name                 State
>> >>>> >>> > ----------------------------------
>> >>>> >>> >
>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>> >>>> >>> > /var/run/libvirt/libvirt-sock
>> >>>> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
>> >>>> /var/run/libvirt/libvirt-sock
>> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>> >>>> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>> >>>> >>> >
>> >>>> >>> >
>> >>>> >>> >
>> >>>> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>> >>>> >>> > mike.tutkowski@solidfire.com> wrote:
>> >>>> >>> >
>> >>>> >>> >> This is my new agent.properties file (with comments
>> removed...looks
>> >>>> >>> >> decent):
>> >>>> >>> >>
>> >>>> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>> >>>> >>> >>
>> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>> >>>> >>> >> workers=5
>> >>>> >>> >> host=192.168.233.1
>> >>>> >>> >> port=8250
>> >>>> >>> >> cluster=1
>> >>>> >>> >> pod=1
>> >>>> >>> >> zone=1
>> >>>> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>> >>>> >>> >> private.network.device=cloudbr0
>> >>>> >>> >> public.network.device=cloudbr0
>> >>>> >>> >> guest.network.device=cloudbr0
>> >>>> >>> >>
>> >>>> >>> >> Yeah, I was always writing stuff out using the logger. I
>> should
>> >>>> look
>> >>>> >>> into
>> >>>> >>> >> redirecting stdout and stderr.
>> >>>> >>> >>
>> >>>> >>> >> Here were my steps to start and check the process status:
>> >>>> >>> >>
>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>> /usr/sbin/service
>> >>>> >>> >> cloudstack-agent start
>> >>>> >>> >>  * Starting CloudStack Agent cloudstack-agent
>> >>>> >>> >>                                                      [ OK ]
>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep
>> jsvc
>> >>>> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep
>> --color=auto
>> >>>> jsvc
>> >>>> >>> >>
>> >>>> >>> >> Also, this might be of interest:
>> >>>> >>> >>
>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>> >>>> >>> >> kvm_intel             137721  0
>> >>>> >>> >> kvm                   415549  1 kvm_intel
>> >>>> >>> >>
>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
>> >>>> >>> >> /proc/cpuinfo
>> >>>> >>> >> 1
>> >>>> >>> >>
>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>> >>>> >>> >> INFO: /dev/kvm exists
>> >>>> >>> >> KVM acceleration can be used
>> >>>> >>> >>
>> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
>> >>>> /proc/cpuinfo
>> >>>> >>> >> 1
>> >>>> >>> >>
>> >>>> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
>> >>>> shadowsor@gmail.com
>> >>>> >>> >wrote:
>> >>>> >>> >>
>> >>>> >>> >>> So you:
>> >>>> >>> >>>
>> >>>> >>> >>> 1. run that command
>> >>>> >>> >>> 2. get a brand new agent.properties as a result
>> >>>> >>> >>> 3. start the service
>> >>>> >>> >>>
>> >>>> >>> >>> but you don't see it in the process table?
>> >>>> >>> >>>
>> >>>> >>> >>> The agent's STDOUT doesn't go to the agent log, only log4j
>> stuff.
>> >>>> So
>> >>>> >>> >>> if there were an error not printed via logger you'd not see
>> it.
>> >>>>  I'm
>> >>>> >>> >>> not as familiar with the debian/ubuntu stuff off the top of
>> my
>> >>>> head,
>> >>>> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>> >>>> >>> >>>
>> >>>> >>> >>> start() {
>> >>>> >>> >>>     echo -n $"Starting $PROGNAME: "
>> >>>> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
>> >>>> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>> >>>> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>> >>>> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
>> >>>> >>> >>>         RETVAL=$?
>> >>>> >>> >>>         echo
>> >>>> >>> >>>     else
>> >>>> >>> >>>
>> >>>> >>> >>>
>> >>>> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
>> >>>> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
>> >>>> >>> >>>
>> >>>> >>> >>> Out of curiosity, what do you get when you do 'lsmod | grep
>> kvm'
>> >>>> ? I
>> >>>> >>> >>> know you didn't end up using it, but the devcloud-kvm
>> >>>> instructions for
>> >>>> >>> >>> vmware fusion tell you to ensure that your guest has hardware
>> >>>> >>> >>> virtualization passthrough enabled, I'm wondering if it
>> isn't.
>> >>>> >>> >>>
>> >>>> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>> >>>> >>> >>> <mi...@solidfire.com> wrote:
>> >>>> >>> >>> > These results look good:
>> >>>> >>> >>> >
>> >>>> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
>> >>>> 192.168.233.1
>> >>>> >>> -z 1
>> >>>> >>> >>> -p 1
>> >>>> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a
>> >>>> --pubNic=cloudbr0
>> >>>> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>> >>>> >>> >>> > Starting to configure your system:
>> >>>> >>> >>> > Configure Apparmor ...        [OK]
>> >>>> >>> >>> > Configure Network ...         [OK]
>> >>>> >>> >>> > Configure Libvirt ...         [OK]
>> >>>> >>> >>> > Configure Firewall ...        [OK]
>> >>>> >>> >>> > Configure Nfs ...             [OK]
>> >>>> >>> >>> > Configure cloudAgent ...      [OK]
>> >>>> >>> >>> > CloudStack Agent setup is done!
>> >>>> >>> >>> >
>> >>>> >>> >>> > However, these results are the same:
>> >>>> >>> >>> >
>> >>>> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>> >>>> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep
>> >>>> --color=auto
>> >>>> >>> jsvc
>> >>>> >>> >>> >
>> >>>> >>> >>> >
>> >>>> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>> >>>> >>> >>> > mike.tutkowski@solidfire.com> wrote:
>> >>>> >>> >>> >
>> >>>> >>> >>> >> This appears to be the offending method:
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>     public String parseCapabilitiesXML(String capXML) {
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>         if (!_initialized) {
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>             return null;
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>         }
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>         try {
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>             _sp.parse(new InputSource(new
>> >>>> StringReader(capXML)),
>> >>>> >>> this);
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>             return _capXML.toString();
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>         } catch (SAXException se) {
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>             s_logger.warn(se.getMessage());
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>         } catch (IOException ie) {
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>             s_logger.error(ie.getMessage());
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>         }
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>         return null;
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>     }
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>
>> >>>> >>> >>> >> The logging I do from this method (not shown above),
>> however,
>> >>>> >>> doesn't
>> >>>> >>> >>> seem
>> >>>> >>> >>> >> to end up in agent.log. Not sure why that is.
>> >>>> >>> >>> >>
>> >>>> >>> >>> >> We invoke this method and I log we're in this method as
>> the
>> >>>> first
>> >>>> >>> >>> thing I
>> >>>> >>> >>> >> do, but it doesn't show up in agent.log.
>> >>>> >>> >>> >>
>> >>>> >>> >>> >> The last message in agent.log is a line saying we are
>> right
>> >>>> before
>> >>>> >>> the
>> >>>> >>> >>> >> call to this method.
>> >>>> >>> >>> >>
>> >>>> >>> >>> >>
>> >>>> >>> >>>
>> >>>> >>> >>
>> >>>> >>> >>
>> >>>> >>> >>
>> >>>> >>> >> --
>> >>>> >>> >> *Mike Tutkowski*
>> >>>> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
>> >>>> >>> >> e: mike.tutkowski@solidfire.com
>> >>>> >>> >> o: 303.746.7302
>> >>>> >>> >> Advancing the way the world uses the cloud<
>> >>>> >>> http://solidfire.com/solution/overview/?video=play>
>> >>>> >>> >> *™*
>> >>>> >>> >>
>> >>>> >>> >
>> >>>> >>> >
>> >>>> >>> >
>> >>>> >>> > --
>> >>>> >>> > *Mike Tutkowski*
>> >>>> >>> > *Senior CloudStack Developer, SolidFire Inc.*
>> >>>> >>> > e: mike.tutkowski@solidfire.com
>> >>>> >>> > o: 303.746.7302
>> >>>> >>> > Advancing the way the world uses the
>> >>>> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >>>> >>> > *™*
>> >>>> >>>
>> >>>> >>
>> >>>> >>
>> >>>> >>
>> >>>> >> --
>> >>>> >> *Mike Tutkowski*
>> >>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>> >>>> >> e: mike.tutkowski@solidfire.com
>> >>>> >> o: 303.746.7302
>> >>>> >> Advancing the way the world uses the cloud<
>> >>>> http://solidfire.com/solution/overview/?video=play>
>> >>>> >> *™*
>> >>>> >>
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > --
>> >>>> > *Mike Tutkowski*
>> >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>> >>>> > e: mike.tutkowski@solidfire.com
>> >>>> > o: 303.746.7302
>> >>>> > Advancing the way the world uses the
>> >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >>>> > *™*
>> >>>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> *Mike Tutkowski*
>> >>> *Senior CloudStack Developer, SolidFire Inc.*
>> >>> e: mike.tutkowski@solidfire.com
>> >>> o: 303.746.7302
>> >>> Advancing the way the world uses the cloud<
>> http://solidfire.com/solution/overview/?video=play>
>> >>> *™*
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> *Mike Tutkowski*
>> >> *Senior CloudStack Developer, SolidFire Inc.*
>> >> e: mike.tutkowski@solidfire.com
>> >> o: 303.746.7302
>> >> Advancing the way the world uses the
>> >> cloud<http://solidfire.com/solution/overview/?video=play>
>> >> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
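
One way to get the known-good baseline being suggested here is to build the
agent from the 4.2 branch instead of master. The checkout location and maven
options below are assumptions about the local tree rather than prescribed
steps:

# Sketch: switch the working tree to the 4.2 release branch and rebuild.
cd ~/cloudstack            # assumed checkout location
git fetch origin
git checkout 4.2
mvn clean install -P developer -DskipTests

# Then reinstall/restart the agent and re-run cloudstack-setup-agent as before.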

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
I can switch my branch over to master. I'm afraid master is not working
with Libvirt on Ubuntu either.


On Wed, Sep 25, 2013 at 5:55 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> It's harder still that you're trying to use master. I know 4.2 works
> on ubuntu, but master is a minefield sometimes. Maybe that's not the
> problem, but I do see emails going back and forth about libvirt/jna
> versions, just need to read them in detail.
>
>  It's a shame that you haven't gotten a working config up yet prior to
> development work (say a 4.2 that we know works), because we don't have
> any clues as to whether it's your setup or master.
>
> On Wed, Sep 25, 2013 at 5:49 PM, Marcus Sorensen <sh...@gmail.com>
> wrote:
> > ok, just a guess. I'm assuming it's still this:
> >
> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
> >
> > On Wed, Sep 25, 2013 at 5:48 PM, Mike Tutkowski
> > <mi...@solidfire.com> wrote:
> >> mtutkowski@ubuntu:~$ sudo apt-get install libjna-java
> >> Reading package lists... Done
> >> Building dependency tree
> >> Reading state information... Done
> >> libjna-java is already the newest version.
> >> libjna-java set to manually installed.
> >> 0 upgraded, 0 newly installed, 0 to remove and 468 not upgraded.
> >>
> >>
> >> On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski <
> >> mike.tutkowski@solidfire.com> wrote:
> >>
> >>> Was there a step in the docs I may have missed where I was to install
> >>> them? I don't recall installing them, but there are several steps and I
> >>> might have forgotten that I did install them, too.
> >>>
> >>> I can check.
> >>>
> >>>
> >>> On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >>>
> >>>> are you missing the jna packages?
> >>>>
> >>>> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
> >>>> <mi...@solidfire.com> wrote:
> >>>> > I basically just leveraged the code you provided to redirect the
> output
> >>>> on
> >>>> > Ubuntu.
> >>>> >
> >>>> > Here is the standard err:
> >>>> >
> >>>> > log4j:WARN No appenders could be found for logger
> >>>> > (org.apache.commons.httpclient.params.DefaultHttpParams).
> >>>> > log4j:WARN Please initialize the log4j system properly.
> >>>> > log4j:WARN See
> http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> >>>> > more info.
> >>>> > java.lang.reflect.InvocationTargetException
> >>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>>> > at
> >>>> >
> >>>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >>>> > at
> >>>> >
> >>>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>>> > at java.lang.reflect.Method.invoke(Method.java:606)
> >>>> > at
> >>>> >
> >>>>
> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> >>>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
> >>>> > at org.libvirt.Library.free(Unknown Source)
> >>>> > at org.libvirt.Connect.getCapabilities(Unknown Source)
> >>>> > at
> >>>> >
> >>>>
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
> >>>> > at
> >>>> >
> >>>>
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
> >>>> > at com.cloud.agent.Agent.<init>(Agent.java:168)
> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
> >>>> > at
> >>>>
> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
> >>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
> >>>> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
> >>>> > ... 5 more
> >>>> > Cannot start daemon
> >>>> > Service exit with a return value of 5
> >>>> >
> >>>> >
> >>>> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
> >>>> > mike.tutkowski@solidfire.com> wrote:
> >>>> >
> >>>> >> Sounds good.
> >>>> >>
> >>>> >> Thanks, Marcus! :)
> >>>> >>
> >>>> >>
> >>>> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <
> shadowsor@gmail.com
> >>>> >wrote:
> >>>> >>
> >>>> >>> Ok, so the next step is to track that stdout and see if you can
> see
> >>>> >>> what jsvc complains about when it fails to start up the service.
> >>>> >>>
> >>>> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
> >>>> >>> <mi...@solidfire.com> wrote:
> >>>> >>> > These also look good:
> >>>> >>> >
> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
> >>>> >>> > x86_64
> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c
> qemu:///system
> >>>> list
> >>>> >>> >  Id Name                 State
> >>>> >>> > ----------------------------------
> >>>> >>> >
> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
> >>>> >>> > /var/run/libvirt/libvirt-sock
> >>>> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
> >>>> /var/run/libvirt/libvirt-sock
> >>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
> >>>> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
> >>>> >>> >
> >>>> >>> >
> >>>> >>> >
> >>>> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
> >>>> >>> > mike.tutkowski@solidfire.com> wrote:
> >>>> >>> >
> >>>> >>> >> This is my new agent.properties file (with comments
> removed...looks
> >>>> >>> >> decent):
> >>>> >>> >>
> >>>> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
> >>>> >>> >>
> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
> >>>> >>> >> workers=5
> >>>> >>> >> host=192.168.233.1
> >>>> >>> >> port=8250
> >>>> >>> >> cluster=1
> >>>> >>> >> pod=1
> >>>> >>> >> zone=1
> >>>> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
> >>>> >>> >> private.network.device=cloudbr0
> >>>> >>> >> public.network.device=cloudbr0
> >>>> >>> >> guest.network.device=cloudbr0
> >>>> >>> >>
> >>>> >>> >> Yeah, I was always writing stuff out using the logger. I should
> >>>> look
> >>>> >>> into
> >>>> >>> >> redirecting stdout and stderr.
> >>>> >>> >>
> >>>> >>> >> Here were my steps to start and check the process status:
> >>>> >>> >>
> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> /usr/sbin/service
> >>>> >>> >> cloudstack-agent start
> >>>> >>> >>  * Starting CloudStack Agent cloudstack-agent
> >>>> >>> >>                                                      [ OK ]
> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep
> jsvc
> >>>> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep
> --color=auto
> >>>> jsvc
> >>>> >>> >>
> >>>> >>> >> Also, this might be of interest:
> >>>> >>> >>
> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
> >>>> >>> >> kvm_intel             137721  0
> >>>> >>> >> kvm                   415549  1 kvm_intel
> >>>> >>> >>
> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
> >>>> >>> >> /proc/cpuinfo
> >>>> >>> >> 1
> >>>> >>> >>
> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
> >>>> >>> >> INFO: /dev/kvm exists
> >>>> >>> >> KVM acceleration can be used
> >>>> >>> >>
> >>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
> >>>> /proc/cpuinfo
> >>>> >>> >> 1
> >>>> >>> >>
> >>>> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
> >>>> shadowsor@gmail.com
> >>>> >>> >wrote:
> >>>> >>> >>
> >>>> >>> >>> So you:
> >>>> >>> >>>
> >>>> >>> >>> 1. run that command
> >>>> >>> >>> 2. get a brand new agent.properties as a result
> >>>> >>> >>> 3. start the service
> >>>> >>> >>>
> >>>> >>> >>> but you don't see it in the process table?
> >>>> >>> >>>
> >>>> >>> >>> The agent's STDOUT doesn't go to the agent log, only log4j
> stuff.
> >>>> So
> >>>> >>> >>> if there were an error not printed via logger you'd not see
> it.
> >>>>  I'm
> >>>> >>> >>> not as familiar with the debian/ubuntu stuff off the top of my
> >>>> head,
> >>>> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
> >>>> >>> >>>
> >>>> >>> >>> start() {
> >>>> >>> >>>     echo -n $"Starting $PROGNAME: "
> >>>> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
> >>>> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
> >>>> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
> >>>> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
> >>>> >>> >>>         RETVAL=$?
> >>>> >>> >>>         echo
> >>>> >>> >>>     else
> >>>> >>> >>>
> >>>> >>> >>>
> >>>> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
> >>>> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
> >>>> >>> >>>
> >>>> >>> >>> Out of curiosity, what do you get when you do 'lsmod | grep
> kvm'
> >>>> ? I
> >>>> >>> >>> know you didn't end up using it, but the devcloud-kvm
> >>>> instructions for
> >>>> >>> >>> vmware fusion tell you to ensure that your guest has hardware
> >>>> >>> >>> virtualization passthrough enabled, I'm wondering if it isn't.
> >>>> >>> >>>
> >>>> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
> >>>> >>> >>> <mi...@solidfire.com> wrote:
> >>>> >>> >>> > These results look good:
> >>>> >>> >>> >
> >>>> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
> >>>> 192.168.233.1
> >>>> >>> -z 1
> >>>> >>> >>> -p 1
> >>>> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a
> >>>> --pubNic=cloudbr0
> >>>> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
> >>>> >>> >>> > Starting to configure your system:
> >>>> >>> >>> > Configure Apparmor ...        [OK]
> >>>> >>> >>> > Configure Network ...         [OK]
> >>>> >>> >>> > Configure Libvirt ...         [OK]
> >>>> >>> >>> > Configure Firewall ...        [OK]
> >>>> >>> >>> > Configure Nfs ...             [OK]
> >>>> >>> >>> > Configure cloudAgent ...      [OK]
> >>>> >>> >>> > CloudStack Agent setup is done!
> >>>> >>> >>> >
> >>>> >>> >>> > However, these results are the same:
> >>>> >>> >>> >
> >>>> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> >>>> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep
> >>>> --color=auto
> >>>> >>> jsvc
> >>>> >>> >>> >
> >>>> >>> >>> >
> >>>> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
> >>>> >>> >>> > mike.tutkowski@solidfire.com> wrote:
> >>>> >>> >>> >
> >>>> >>> >>> >> This appears to be the offending method:
> >>>> >>> >>> >>
> >>>> >>> >>> >>     public String parseCapabilitiesXML(String capXML) {
> >>>> >>> >>> >>
> >>>> >>> >>> >>         if (!_initialized) {
> >>>> >>> >>> >>
> >>>> >>> >>> >>             return null;
> >>>> >>> >>> >>
> >>>> >>> >>> >>         }
> >>>> >>> >>> >>
> >>>> >>> >>> >>         try {
> >>>> >>> >>> >>
> >>>> >>> >>> >>             _sp.parse(new InputSource(new
> >>>> StringReader(capXML)),
> >>>> >>> this);
> >>>> >>> >>> >>
> >>>> >>> >>> >>             return _capXML.toString();
> >>>> >>> >>> >>
> >>>> >>> >>> >>         } catch (SAXException se) {
> >>>> >>> >>> >>
> >>>> >>> >>> >>             s_logger.warn(se.getMessage());
> >>>> >>> >>> >>
> >>>> >>> >>> >>         } catch (IOException ie) {
> >>>> >>> >>> >>
> >>>> >>> >>> >>             s_logger.error(ie.getMessage());
> >>>> >>> >>> >>
> >>>> >>> >>> >>         }
> >>>> >>> >>> >>
> >>>> >>> >>> >>         return null;
> >>>> >>> >>> >>
> >>>> >>> >>> >>     }
> >>>> >>> >>> >>
> >>>> >>> >>> >>
> >>>> >>> >>> >> The logging I do from this method (not shown above),
> however,
> >>>> >>> doesn't
> >>>> >>> >>> seem
> >>>> >>> >>> >> to end up in agent.log. Not sure why that is.
> >>>> >>> >>> >>
> >>>> >>> >>> >> We invoke this method and I log we're in this method as the
> >>>> first
> >>>> >>> >>> thing I
> >>>> >>> >>> >> do, but it doesn't show up in agent.log.
> >>>> >>> >>> >>
> >>>> >>> >>> >> The last message in agent.log is a line saying we are right
> >>>> before
> >>>> >>> the
> >>>> >>> >>> >> call to this method.
> >>>> >>> >>> >>
> >>>> >>> >>> >>
> >>>> >>> >>>
> >>>> >>> >>
> >>>> >>> >>
> >>>> >>> >>
> >>>> >>> >> --
> >>>> >>> >> *Mike Tutkowski*
> >>>> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
> >>>> >>> >> e: mike.tutkowski@solidfire.com
> >>>> >>> >> o: 303.746.7302
> >>>> >>> >> Advancing the way the world uses the cloud<
> >>>> >>> http://solidfire.com/solution/overview/?video=play>
> >>>> >>> >> *™*
> >>>> >>> >>
> >>>> >>> >
> >>>> >>> >
> >>>> >>> >
> >>>> >>> > --
> >>>> >>> > *Mike Tutkowski*
> >>>> >>> > *Senior CloudStack Developer, SolidFire Inc.*
> >>>> >>> > e: mike.tutkowski@solidfire.com
> >>>> >>> > o: 303.746.7302
> >>>> >>> > Advancing the way the world uses the
> >>>> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
> >>>> >>> > *™*
> >>>> >>>
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >> --
> >>>> >> *Mike Tutkowski*
> >>>> >> *Senior CloudStack Developer, SolidFire Inc.*
> >>>> >> e: mike.tutkowski@solidfire.com
> >>>> >> o: 303.746.7302
> >>>> >> Advancing the way the world uses the cloud<
> >>>> http://solidfire.com/solution/overview/?video=play>
> >>>> >> *™*
> >>>> >>
> >>>> >
> >>>> >
> >>>> >
> >>>> > --
> >>>> > *Mike Tutkowski*
> >>>> > *Senior CloudStack Developer, SolidFire Inc.*
> >>>> > e: mike.tutkowski@solidfire.com
> >>>> > o: 303.746.7302
> >>>> > Advancing the way the world uses the
> >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
> >>>> > *™*
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> *Mike Tutkowski*
> >>> *Senior CloudStack Developer, SolidFire Inc.*
> >>> e: mike.tutkowski@solidfire.com
> >>> o: 303.746.7302
> >>> Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> >>> *™*
> >>>
> >>
> >>
> >>
> >> --
> >> *Mike Tutkowski*
> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> e: mike.tutkowski@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the
> >> cloud<http://solidfire.com/solution/overview/?video=play>
> >> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*
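
Since the log4j "No appenders could be found" warning shows up on stderr and
the method-level logging never reaches agent.log, one throwaway option while
debugging is a minimal log4j 1.2 config that routes everything to a single
file. The file paths below are assumptions; point the agent at whichever log4j
configuration it actually loads:

# Sketch only: write a simple catch-all log4j 1.2 config for debugging.
cat > /tmp/log4j-agent-debug.properties <<'EOF'
log4j.rootLogger=DEBUG, AGENTFILE
log4j.appender.AGENTFILE=org.apache.log4j.FileAppender
log4j.appender.AGENTFILE.File=/tmp/agent-debug.log
log4j.appender.AGENTFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.AGENTFILE.layout.ConversionPattern=%d{ISO8601} %-5p [%c{1}] %m%n
EOF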

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
It's harder still that you're trying to use master. I know 4.2 works
on ubuntu, but master is a minefield sometimes. Maybe that's not the
problem, but I do see emails going back and forth about libvirt/jna
versions, just need to read them in detail.

 It's a shame that you haven't gotten a working config up yet prior to
development work (say a 4.2 that we know works), because we don't have
any clues as to whether it's your setup or master.

On Wed, Sep 25, 2013 at 5:49 PM, Marcus Sorensen <sh...@gmail.com> wrote:
> ok, just a guess. I'm assuming it's still this:
>
> Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
>
> On Wed, Sep 25, 2013 at 5:48 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
>> mtutkowski@ubuntu:~$ sudo apt-get install libjna-java
>> Reading package lists... Done
>> Building dependency tree
>> Reading state information... Done
>> libjna-java is already the newest version.
>> libjna-java set to manually installed.
>> 0 upgraded, 0 newly installed, 0 to remove and 468 not upgraded.
>>
>>
>> On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> Was there a step in the docs I may have missed where I was to install
>>> them? I don't recall installing them, but there are several steps and I
>>> might have forgotten that I did install them, too.
>>>
>>> I can check.
>>>
>>>
>>> On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>
>>>> are you missing the jna packages?
>>>>
>>>> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
>>>> <mi...@solidfire.com> wrote:
>>>> > I basically just leveraged the code you provided to redirect the output
>>>> on
>>>> > Ubuntu.
>>>> >
>>>> > Here is the standard err:
>>>> >
>>>> > log4j:WARN No appenders could be found for logger
>>>> > (org.apache.commons.httpclient.params.DefaultHttpParams).
>>>> > log4j:WARN Please initialize the log4j system properly.
>>>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
>>>> > more info.
>>>> > java.lang.reflect.InvocationTargetException
>>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> > at
>>>> >
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>> > at
>>>> >
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> > at java.lang.reflect.Method.invoke(Method.java:606)
>>>> > at
>>>> >
>>>> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
>>>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
>>>> > at org.libvirt.Library.free(Unknown Source)
>>>> > at org.libvirt.Connect.getCapabilities(Unknown Source)
>>>> > at
>>>> >
>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
>>>> > at
>>>> >
>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
>>>> > at com.cloud.agent.Agent.<init>(Agent.java:168)
>>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
>>>> > at
>>>> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
>>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
>>>> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
>>>> > ... 5 more
>>>> > Cannot start daemon
>>>> > Service exit with a return value of 5
>>>> >
>>>> >
>>>> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
>>>> > mike.tutkowski@solidfire.com> wrote:
>>>> >
>>>> >> Sounds good.
>>>> >>
>>>> >> Thanks, Marcus! :)
>>>> >>
>>>> >>
>>>> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <shadowsor@gmail.com
>>>> >wrote:
>>>> >>
>>>> >>> Ok, so the next step is to track that stdout and see if you can see
>>>> >>> what jsvc complains about when it fails to start up the service.
>>>> >>>
>>>> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
>>>> >>> <mi...@solidfire.com> wrote:
>>>> >>> > These also look good:
>>>> >>> >
>>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>>>> >>> > x86_64
>>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system
>>>> list
>>>> >>> >  Id Name                 State
>>>> >>> > ----------------------------------
>>>> >>> >
>>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>>>> >>> > /var/run/libvirt/libvirt-sock
>>>> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
>>>> /var/run/libvirt/libvirt-sock
>>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>>>> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>>>> >>> >
>>>> >>> >
>>>> >>> >
>>>> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>>>> >>> > mike.tutkowski@solidfire.com> wrote:
>>>> >>> >
>>>> >>> >> This is my new agent.properties file (with comments removed...looks
>>>> >>> >> decent):
>>>> >>> >>
>>>> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>>>> >>> >> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>>>> >>> >> workers=5
>>>> >>> >> host=192.168.233.1
>>>> >>> >> port=8250
>>>> >>> >> cluster=1
>>>> >>> >> pod=1
>>>> >>> >> zone=1
>>>> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>>>> >>> >> private.network.device=cloudbr0
>>>> >>> >> public.network.device=cloudbr0
>>>> >>> >> guest.network.device=cloudbr0
>>>> >>> >>
>>>> >>> >> Yeah, I was always writing stuff out using the logger. I should
>>>> look
>>>> >>> into
>>>> >>> >> redirecting stdout and stderr.
>>>> >>> >>
>>>> >>> >> Here were my steps to start and check the process status:
>>>> >>> >>
>>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>>>> >>> >> cloudstack-agent start
>>>> >>> >>  * Starting CloudStack Agent cloudstack-agent
>>>> >>> >>                                                      [ OK ]
>>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
>>>> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto
>>>> jsvc
>>>> >>> >>
>>>> >>> >> Also, this might be of interest:
>>>> >>> >>
>>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>>>> >>> >> kvm_intel             137721  0
>>>> >>> >> kvm                   415549  1 kvm_intel
>>>> >>> >>
>>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
>>>> >>> >> /proc/cpuinfo
>>>> >>> >> 1
>>>> >>> >>
>>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>>>> >>> >> INFO: /dev/kvm exists
>>>> >>> >> KVM acceleration can be used
>>>> >>> >>
>>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
>>>> /proc/cpuinfo
>>>> >>> >> 1
>>>> >>> >>
>>>> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
>>>> shadowsor@gmail.com
>>>> >>> >wrote:
>>>> >>> >>
>>>> >>> >>> So you:
>>>> >>> >>>
>>>> >>> >>> 1. run that command
>>>> >>> >>> 2. get a brand new agent.properties as a result
>>>> >>> >>> 3. start the service
>>>> >>> >>>
>>>> >>> >>> but you don't see it in the process table?
>>>> >>> >>>
>>>> >>> >>> The agent's STDOUT doesn't go to the agent log, only log4j stuff.
>>>> So
>>>> >>> >>> if there were an error not printed via logger you'd not see it.
>>>>  I'm
>>>> >>> >>> not as familiar with the debian/ubuntu stuff off the top of my
>>>> head,
>>>> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>>>> >>> >>>
>>>> >>> >>> start() {
>>>> >>> >>>     echo -n $"Starting $PROGNAME: "
>>>> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
>>>> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>>>> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>>>> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
>>>> >>> >>>         RETVAL=$?
>>>> >>> >>>         echo
>>>> >>> >>>     else
>>>> >>> >>>
>>>> >>> >>>
>>>> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
>>>> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
>>>> >>> >>>
>>>> >>> >>> Out of curiosity, what do you get when you do 'lsmod | grep kvm'
>>>> ? I
>>>> >>> >>> know you didn't end up using it, but the devcloud-kvm
>>>> instructions for
>>>> >>> >>> vmware fusion tell you to ensure that your guest has hardware
>>>> >>> >>> virtualization passthrough enabled, I'm wondering if it isn't.
>>>> >>> >>>
>>>> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>>>> >>> >>> <mi...@solidfire.com> wrote:
>>>> >>> >>> > These results look good:
>>>> >>> >>> >
>>>> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
>>>> 192.168.233.1
>>>> >>> -z 1
>>>> >>> >>> -p 1
>>>> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a
>>>> --pubNic=cloudbr0
>>>> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>>>> >>> >>> > Starting to configure your system:
>>>> >>> >>> > Configure Apparmor ...        [OK]
>>>> >>> >>> > Configure Network ...         [OK]
>>>> >>> >>> > Configure Libvirt ...         [OK]
>>>> >>> >>> > Configure Firewall ...        [OK]
>>>> >>> >>> > Configure Nfs ...             [OK]
>>>> >>> >>> > Configure cloudAgent ...      [OK]
>>>> >>> >>> > CloudStack Agent setup is done!
>>>> >>> >>> >
>>>> >>> >>> > However, these results are the same:
>>>> >>> >>> >
>>>> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>>>> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep
>>>> --color=auto
>>>> >>> jsvc
>>>> >>> >>> >
>>>> >>> >>> >
>>>> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>>>> >>> >>> > mike.tutkowski@solidfire.com> wrote:
>>>> >>> >>> >
>>>> >>> >>> >> This appears to be the offending method:
>>>> >>> >>> >>
>>>> >>> >>> >>     public String parseCapabilitiesXML(String capXML) {
>>>> >>> >>> >>
>>>> >>> >>> >>         if (!_initialized) {
>>>> >>> >>> >>
>>>> >>> >>> >>             return null;
>>>> >>> >>> >>
>>>> >>> >>> >>         }
>>>> >>> >>> >>
>>>> >>> >>> >>         try {
>>>> >>> >>> >>
>>>> >>> >>> >>             _sp.parse(new InputSource(new
>>>> StringReader(capXML)),
>>>> >>> this);
>>>> >>> >>> >>
>>>> >>> >>> >>             return _capXML.toString();
>>>> >>> >>> >>
>>>> >>> >>> >>         } catch (SAXException se) {
>>>> >>> >>> >>
>>>> >>> >>> >>             s_logger.warn(se.getMessage());
>>>> >>> >>> >>
>>>> >>> >>> >>         } catch (IOException ie) {
>>>> >>> >>> >>
>>>> >>> >>> >>             s_logger.error(ie.getMessage());
>>>> >>> >>> >>
>>>> >>> >>> >>         }
>>>> >>> >>> >>
>>>> >>> >>> >>         return null;
>>>> >>> >>> >>
>>>> >>> >>> >>     }
>>>> >>> >>> >>
>>>> >>> >>> >>
>>>> >>> >>> >> The logging I do from this method (not shown above), however,
>>>> >>> doesn't
>>>> >>> >>> seem
>>>> >>> >>> >> to end up in agent.log. Not sure why that is.
>>>> >>> >>> >>
>>>> >>> >>> >> We invoke this method and I log we're in this method as the
>>>> first
>>>> >>> >>> thing I
>>>> >>> >>> >> do, but it doesn't show up in agent.log.
>>>> >>> >>> >>
>>>> >>> >>> >> The last message in agent.log is a line saying we are right
>>>> before
>>>> >>> the
>>>> >>> >>> >> call to this method.
>>>> >>> >>> >>
>>>> >>> >>> >>
>>>> >>> >>>
>>>> >>> >>
>>>> >>> >>
>>>> >>> >>
>>>> >>> >> --
>>>> >>> >> *Mike Tutkowski*
>>>> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>>> >>> >> e: mike.tutkowski@solidfire.com
>>>> >>> >> o: 303.746.7302
>>>> >>> >> Advancing the way the world uses the cloud<
>>>> >>> http://solidfire.com/solution/overview/?video=play>
>>>> >>> >> *™*
>>>> >>> >>
>>>> >>> >
>>>> >>> >
>>>> >>> >
>>>> >>> > --
>>>> >>> > *Mike Tutkowski*
>>>> >>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >>> > e: mike.tutkowski@solidfire.com
>>>> >>> > o: 303.746.7302
>>>> >>> > Advancing the way the world uses the
>>>> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> >>> > *™*
>>>> >>>
>>>> >>
>>>> >>
>>>> >>
>>>> >> --
>>>> >> *Mike Tutkowski*
>>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> e: mike.tutkowski@solidfire.com
>>>> >> o: 303.746.7302
>>>> >> Advancing the way the world uses the cloud<
>>>> http://solidfire.com/solution/overview/?video=play>
>>>> >> *™*
>>>> >>
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > *Mike Tutkowski*
>>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> > e: mike.tutkowski@solidfire.com
>>>> > o: 303.746.7302
>>>> > Advancing the way the world uses the
>>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> > *™*
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the
>> cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
ok, just a guess. I'm assuming it's still this:

Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
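
That method-signature mismatch is the classic sign that the libvirt Java bindings were built against a different JNA release than the one actually on the agent's classpath. A quick way to see which JNA jar is in play and what version it reports -- the paths and jar names below are assumptions, adjust them to your install:

# locate any JNA jars the agent could pick up
find /usr/share/java /usr/share/cloudstack* -name '*jna*.jar' 2>/dev/null
# print the version recorded in each jar's manifest
for jar in /usr/share/java/*jna*.jar; do
    [ -f "$jar" ] || continue
    echo "$jar"
    unzip -p "$jar" META-INF/MANIFEST.MF | grep -i version
done

If that version differs from the JNA the libvirt-java bindings were compiled against, upgrading or downgrading one of the two usually clears the NoSuchMethodError.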

On Wed, Sep 25, 2013 at 5:48 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> mtutkowski@ubuntu:~$ sudo apt-get install libjna-java
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> libjna-java is already the newest version.
> libjna-java set to manually installed.
> 0 upgraded, 0 newly installed, 0 to remove and 468 not upgraded.
>
>
> On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Was there a step in the docs I may have missed where I was to install
>> them? I don't recall installing them, but there are several steps and I
>> might have forgotten that I did install them, too.
>>
>> I can check.
>>
>>
>> On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> are you missing the jna packages?
>>>
>>> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
>>> <mi...@solidfire.com> wrote:
>>> > I basically just leveraged the code you provided to redirect the output
>>> on
>>> > Ubuntu.
>>> >
>>> > Here is the standard err:
>>> >
>>> > log4j:WARN No appenders could be found for logger
>>> > (org.apache.commons.httpclient.params.DefaultHttpParams).
>>> > log4j:WARN Please initialize the log4j system properly.
>>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
>>> > more info.
>>> > java.lang.reflect.InvocationTargetException
>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> > at
>>> >
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> > at
>>> >
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> > at java.lang.reflect.Method.invoke(Method.java:606)
>>> > at
>>> >
>>> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
>>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
>>> > at org.libvirt.Library.free(Unknown Source)
>>> > at org.libvirt.Connect.getCapabilities(Unknown Source)
>>> > at
>>> >
>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
>>> > at
>>> >
>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
>>> > at com.cloud.agent.Agent.<init>(Agent.java:168)
>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
>>> > at
>>> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
>>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
>>> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
>>> > ... 5 more
>>> > Cannot start daemon
>>> > Service exit with a return value of 5
>>> >
>>> >
>>> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
>>> > mike.tutkowski@solidfire.com> wrote:
>>> >
>>> >> Sounds good.
>>> >>
>>> >> Thanks, Marcus! :)
>>> >>
>>> >>
>>> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <shadowsor@gmail.com
>>> >wrote:
>>> >>
>>> >>> Ok, so the next step is to track that stdout and see if you can see
>>> >>> what jsvc complains about when it fails to start up the service.
>>> >>>
>>> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
>>> >>> <mi...@solidfire.com> wrote:
>>> >>> > These also look good:
>>> >>> >
>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>>> >>> > x86_64
>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system
>>> list
>>> >>> >  Id Name                 State
>>> >>> > ----------------------------------
>>> >>> >
>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>>> >>> > /var/run/libvirt/libvirt-sock
>>> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
>>> /var/run/libvirt/libvirt-sock
>>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>>> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>>> >>> >
>>> >>> >
>>> >>> >
>>> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>>> >>> > mike.tutkowski@solidfire.com> wrote:
>>> >>> >
>>> >>> >> This is my new agent.properties file (with comments removed...looks
>>> >>> >> decent):
>>> >>> >>
>>> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>>> >>> >> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>>> >>> >> workers=5
>>> >>> >> host=192.168.233.1
>>> >>> >> port=8250
>>> >>> >> cluster=1
>>> >>> >> pod=1
>>> >>> >> zone=1
>>> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>>> >>> >> private.network.device=cloudbr0
>>> >>> >> public.network.device=cloudbr0
>>> >>> >> guest.network.device=cloudbr0
>>> >>> >>
>>> >>> >> Yeah, I was always writing stuff out using the logger. I should
>>> look
>>> >>> into
>>> >>> >> redirecting stdout and stderr.
>>> >>> >>
>>> >>> >> Here were my steps to start and check the process status:
>>> >>> >>
>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>>> >>> >> cloudstack-agent start
>>> >>> >>  * Starting CloudStack Agent cloudstack-agent
>>> >>> >>                                                      [ OK ]
>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
>>> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto
>>> jsvc
>>> >>> >>
>>> >>> >> Also, this might be of interest:
>>> >>> >>
>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>>> >>> >> kvm_intel             137721  0
>>> >>> >> kvm                   415549  1 kvm_intel
>>> >>> >>
>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
>>> >>> >> /proc/cpuinfo
>>> >>> >> 1
>>> >>> >>
>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>>> >>> >> INFO: /dev/kvm exists
>>> >>> >> KVM acceleration can be used
>>> >>> >>
>>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
>>> /proc/cpuinfo
>>> >>> >> 1
>>> >>> >>
>>> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
>>> shadowsor@gmail.com
>>> >>> >wrote:
>>> >>> >>
>>> >>> >>> So you:
>>> >>> >>>
>>> >>> >>> 1. run that command
>>> >>> >>> 2. get a brand new agent.properties as a result
>>> >>> >>> 3. start the service
>>> >>> >>>
>>> >>> >>> but you don't see it in the process table?
>>> >>> >>>
>>> >>> >>> The agent's STDOUT doesn't go to the agent log, only log4j stuff.
>>> So
>>> >>> >>> if there were an error not printed via logger you'd not see it.
>>>  I'm
>>> >>> >>> not as familiar with the debian/ubuntu stuff off the top of my
>>> head,
>>> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>>> >>> >>>
>>> >>> >>> start() {
>>> >>> >>>     echo -n $"Starting $PROGNAME: "
>>> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
>>> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>>> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>>> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
>>> >>> >>>         RETVAL=$?
>>> >>> >>>         echo
>>> >>> >>>     else
>>> >>> >>>
>>> >>> >>>
>>> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
>>> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
>>> >>> >>>
>>> >>> >>> Out of curiosity, what do you get when you do 'lsmod | grep kvm'
>>> ? I
>>> >>> >>> know you didn't end up using it, but the devcloud-kvm
>>> instructions for
>>> >>> >>> vmware fusion tell you to ensure that your guest has hardware
>>> >>> >>> virtualization passthrough enabled, I'm wondering if it isn't.
>>> >>> >>>
>>> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>>> >>> >>> <mi...@solidfire.com> wrote:
>>> >>> >>> > These results look good:
>>> >>> >>> >
>>> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
>>> 192.168.233.1
>>> >>> -z 1
>>> >>> >>> -p 1
>>> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a
>>> --pubNic=cloudbr0
>>> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>>> >>> >>> > Starting to configure your system:
>>> >>> >>> > Configure Apparmor ...        [OK]
>>> >>> >>> > Configure Network ...         [OK]
>>> >>> >>> > Configure Libvirt ...         [OK]
>>> >>> >>> > Configure Firewall ...        [OK]
>>> >>> >>> > Configure Nfs ...             [OK]
>>> >>> >>> > Configure cloudAgent ...      [OK]
>>> >>> >>> > CloudStack Agent setup is done!
>>> >>> >>> >
>>> >>> >>> > However, these results are the same:
>>> >>> >>> >
>>> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>>> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep
>>> --color=auto
>>> >>> jsvc
>>> >>> >>> >
>>> >>> >>> >
>>> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>>> >>> >>> > mike.tutkowski@solidfire.com> wrote:
>>> >>> >>> >
>>> >>> >>> >> This appears to be the offending method:
>>> >>> >>> >>
>>> >>> >>> >>     public String parseCapabilitiesXML(String capXML) {
>>> >>> >>> >>
>>> >>> >>> >>         if (!_initialized) {
>>> >>> >>> >>
>>> >>> >>> >>             return null;
>>> >>> >>> >>
>>> >>> >>> >>         }
>>> >>> >>> >>
>>> >>> >>> >>         try {
>>> >>> >>> >>
>>> >>> >>> >>             _sp.parse(new InputSource(new
>>> StringReader(capXML)),
>>> >>> this);
>>> >>> >>> >>
>>> >>> >>> >>             return _capXML.toString();
>>> >>> >>> >>
>>> >>> >>> >>         } catch (SAXException se) {
>>> >>> >>> >>
>>> >>> >>> >>             s_logger.warn(se.getMessage());
>>> >>> >>> >>
>>> >>> >>> >>         } catch (IOException ie) {
>>> >>> >>> >>
>>> >>> >>> >>             s_logger.error(ie.getMessage());
>>> >>> >>> >>
>>> >>> >>> >>         }
>>> >>> >>> >>
>>> >>> >>> >>         return null;
>>> >>> >>> >>
>>> >>> >>> >>     }
>>> >>> >>> >>
>>> >>> >>> >>
>>> >>> >>> >> The logging I do from this method (not shown above), however,
>>> >>> doesn't
>>> >>> >>> seem
>>> >>> >>> >> to end up in agent.log. Not sure why that is.
>>> >>> >>> >>
>>> >>> >>> >> We invoke this method and I log we're in this method as the
>>> first
>>> >>> >>> thing I
>>> >>> >>> >> do, but it doesn't show up in agent.log.
>>> >>> >>> >>
>>> >>> >>> >> The last message in agent.log is a line saying we are right
>>> before
>>> >>> the
>>> >>> >>> >> call to this method.
>>> >>> >>> >>
>>> >>> >>> >>
>>> >>> >>>
>>> >>> >>
>>> >>> >>
>>> >>> >>
>>> >>> >> --
>>> >>> >> *Mike Tutkowski*
>>> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>> >> e: mike.tutkowski@solidfire.com
>>> >>> >> o: 303.746.7302
>>> >>> >> Advancing the way the world uses the cloud<
>>> >>> http://solidfire.com/solution/overview/?video=play>
>>> >>> >> *™*
>>> >>> >>
>>> >>> >
>>> >>> >
>>> >>> >
>>> >>> > --
>>> >>> > *Mike Tutkowski*
>>> >>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >>> > e: mike.tutkowski@solidfire.com
>>> >>> > o: 303.746.7302
>>> >>> > Advancing the way the world uses the
>>> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >>> > *™*
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> *Mike Tutkowski*
>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>> >> e: mike.tutkowski@solidfire.com
>>> >> o: 303.746.7302
>>> >> Advancing the way the world uses the cloud<
>>> http://solidfire.com/solution/overview/?video=play>
>>> >> *™*
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > *Mike Tutkowski*
>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> > e: mike.tutkowski@solidfire.com
>>> > o: 303.746.7302
>>> > Advancing the way the world uses the
>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> > *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
mtutkowski@ubuntu:~$ sudo apt-get install libjna-java
Reading package lists... Done
Building dependency tree
Reading state information... Done
libjna-java is already the newest version.
libjna-java set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 468 not upgraded.
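
Since the package is present, the follow-up question is which JNA version it actually ships versus what the libvirt bindings expect. A rough check -- the dpkg queries are standard, the libvirt jar path is an assumption:

dpkg -s libjna-java | grep -i '^Version'
dpkg -L libjna-java | grep '\.jar$'
ls -l /usr/share/java/libvirt*.jar 2>/dev/null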


On Wed, Sep 25, 2013 at 5:46 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Was there a step in the docs I may have missed where I was to install
> them? I don't recall installing them, but there are several steps and I
> might have forgotten that I did install them, too.
>
> I can check.
>
>
> On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> are you missing the jna packages?
>>
>> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > I basically just leveraged the code you provided to redirect the output
>> on
>> > Ubuntu.
>> >
>> > Here is the standard err:
>> >
>> > log4j:WARN No appenders could be found for logger
>> > (org.apache.commons.httpclient.params.DefaultHttpParams).
>> > log4j:WARN Please initialize the log4j system properly.
>> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
>> > more info.
>> > java.lang.reflect.InvocationTargetException
>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > at
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> > at
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> > at java.lang.reflect.Method.invoke(Method.java:606)
>> > at
>> >
>> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
>> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
>> > at org.libvirt.Library.free(Unknown Source)
>> > at org.libvirt.Connect.getCapabilities(Unknown Source)
>> > at
>> >
>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
>> > at
>> >
>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
>> > at com.cloud.agent.Agent.<init>(Agent.java:168)
>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
>> > at
>> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
>> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
>> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
>> > ... 5 more
>> > Cannot start daemon
>> > Service exit with a return value of 5
>> >
>> >
>> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
>> > mike.tutkowski@solidfire.com> wrote:
>> >
>> >> Sounds good.
>> >>
>> >> Thanks, Marcus! :)
>> >>
>> >>
>> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <shadowsor@gmail.com
>> >wrote:
>> >>
>> >>> Ok, so the next step is to track that stdout and see if you can see
>> >>> what jsvc complains about when it fails to start up the service.
>> >>>
>> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
>> >>> <mi...@solidfire.com> wrote:
>> >>> > These also look good:
>> >>> >
>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>> >>> > x86_64
>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system
>> list
>> >>> >  Id Name                 State
>> >>> > ----------------------------------
>> >>> >
>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>> >>> > /var/run/libvirt/libvirt-sock
>> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
>> /var/run/libvirt/libvirt-sock
>> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>> >>> >
>> >>> >
>> >>> >
>> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>> >>> > mike.tutkowski@solidfire.com> wrote:
>> >>> >
>> >>> >> This is my new agent.properties file (with comments removed...looks
>> >>> >> decent):
>> >>> >>
>> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>> >>> >> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>> >>> >> workers=5
>> >>> >> host=192.168.233.1
>> >>> >> port=8250
>> >>> >> cluster=1
>> >>> >> pod=1
>> >>> >> zone=1
>> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>> >>> >> private.network.device=cloudbr0
>> >>> >> public.network.device=cloudbr0
>> >>> >> guest.network.device=cloudbr0
>> >>> >>
>> >>> >> Yeah, I was always writing stuff out using the logger. I should
>> look
>> >>> into
>> >>> >> redirecting stdout and stderr.
>> >>> >>
>> >>> >> Here were my steps to start and check the process status:
>> >>> >>
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>> >>> >> cloudstack-agent start
>> >>> >>  * Starting CloudStack Agent cloudstack-agent
>> >>> >>                                                      [ OK ]
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
>> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto
>> jsvc
>> >>> >>
>> >>> >> Also, this might be of interest:
>> >>> >>
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>> >>> >> kvm_intel             137721  0
>> >>> >> kvm                   415549  1 kvm_intel
>> >>> >>
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
>> >>> >> /proc/cpuinfo
>> >>> >> 1
>> >>> >>
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>> >>> >> INFO: /dev/kvm exists
>> >>> >> KVM acceleration can be used
>> >>> >>
>> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
>> /proc/cpuinfo
>> >>> >> 1
>> >>> >>
>> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> >>> >wrote:
>> >>> >>
>> >>> >>> So you:
>> >>> >>>
>> >>> >>> 1. run that command
>> >>> >>> 2. get a brand new agent.properties as a result
>> >>> >>> 3. start the service
>> >>> >>>
>> >>> >>> but you don't see it in the process table?
>> >>> >>>
>> >>> >>> The agent's STDOUT doesn't go to the agent log, only log4j stuff.
>> So
>> >>> >>> if there were an error not printed via logger you'd not see it.
>>  I'm
>> >>> >>> not as familiar with the debian/ubuntu stuff off the top of my
>> head,
>> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>> >>> >>>
>> >>> >>> start() {
>> >>> >>>     echo -n $"Starting $PROGNAME: "
>> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
>> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
>> >>> >>>         RETVAL=$?
>> >>> >>>         echo
>> >>> >>>     else
>> >>> >>>
>> >>> >>>
>> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
>> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
>> >>> >>>
>> >>> >>> Out of curiosity, what do you get when you do 'lsmod | grep kvm'
>> ? I
>> >>> >>> know you didn't end up using it, but the devcloud-kvm
>> instructions for
>> >>> >>> vmware fusion tell you to ensure that your guest has hardware
>> >>> >>> virtualization passthrough enabled, I'm wondering if it isn't.
>> >>> >>>
>> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>> >>> >>> <mi...@solidfire.com> wrote:
>> >>> >>> > These results look good:
>> >>> >>> >
>> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
>> 192.168.233.1
>> >>> -z 1
>> >>> >>> -p 1
>> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a
>> --pubNic=cloudbr0
>> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>> >>> >>> > Starting to configure your system:
>> >>> >>> > Configure Apparmor ...        [OK]
>> >>> >>> > Configure Network ...         [OK]
>> >>> >>> > Configure Libvirt ...         [OK]
>> >>> >>> > Configure Firewall ...        [OK]
>> >>> >>> > Configure Nfs ...             [OK]
>> >>> >>> > Configure cloudAgent ...      [OK]
>> >>> >>> > CloudStack Agent setup is done!
>> >>> >>> >
>> >>> >>> > However, these results are the same:
>> >>> >>> >
>> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep
>> --color=auto
>> >>> jsvc
>> >>> >>> >
>> >>> >>> >
>> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>> >>> >>> > mike.tutkowski@solidfire.com> wrote:
>> >>> >>> >
>> >>> >>> >> This appears to be the offending method:
>> >>> >>> >>
>> >>> >>> >>     public String parseCapabilitiesXML(String capXML) {
>> >>> >>> >>
>> >>> >>> >>         if (!_initialized) {
>> >>> >>> >>
>> >>> >>> >>             return null;
>> >>> >>> >>
>> >>> >>> >>         }
>> >>> >>> >>
>> >>> >>> >>         try {
>> >>> >>> >>
>> >>> >>> >>             _sp.parse(new InputSource(new
>> StringReader(capXML)),
>> >>> this);
>> >>> >>> >>
>> >>> >>> >>             return _capXML.toString();
>> >>> >>> >>
>> >>> >>> >>         } catch (SAXException se) {
>> >>> >>> >>
>> >>> >>> >>             s_logger.warn(se.getMessage());
>> >>> >>> >>
>> >>> >>> >>         } catch (IOException ie) {
>> >>> >>> >>
>> >>> >>> >>             s_logger.error(ie.getMessage());
>> >>> >>> >>
>> >>> >>> >>         }
>> >>> >>> >>
>> >>> >>> >>         return null;
>> >>> >>> >>
>> >>> >>> >>     }
>> >>> >>> >>
>> >>> >>> >>
>> >>> >>> >> The logging I do from this method (not shown above), however,
>> >>> doesn't
>> >>> >>> seem
>> >>> >>> >> to end up in agent.log. Not sure why that is.
>> >>> >>> >>
>> >>> >>> >> We invoke this method and I log we're in this method as the
>> first
>> >>> >>> thing I
>> >>> >>> >> do, but it doesn't show up in agent.log.
>> >>> >>> >>
>> >>> >>> >> The last message in agent.log is a line saying we are right
>> before
>> >>> the
>> >>> >>> >> call to this method.
>> >>> >>> >>
>> >>> >>> >>
>> >>> >>>
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> --
>> >>> >> *Mike Tutkowski*
>> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
>> >>> >> e: mike.tutkowski@solidfire.com
>> >>> >> o: 303.746.7302
>> >>> >> Advancing the way the world uses the cloud<
>> >>> http://solidfire.com/solution/overview/?video=play>
>> >>> >> *™*
>> >>> >>
>> >>> >
>> >>> >
>> >>> >
>> >>> > --
>> >>> > *Mike Tutkowski*
>> >>> > *Senior CloudStack Developer, SolidFire Inc.*
>> >>> > e: mike.tutkowski@solidfire.com
>> >>> > o: 303.746.7302
>> >>> > Advancing the way the world uses the
>> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >>> > *™*
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> *Mike Tutkowski*
>> >> *Senior CloudStack Developer, SolidFire Inc.*
>> >> e: mike.tutkowski@solidfire.com
>> >> o: 303.746.7302
>> >> Advancing the way the world uses the cloud<
>> http://solidfire.com/solution/overview/?video=play>
>> >> *™*
>> >>
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Was there a step in the docs I may have missed where I was to install them?
I don't recall installing them, but there are several steps and I might
have forgotten that I did install them, too.

I can check.


On Wed, Sep 25, 2013 at 5:44 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> are you missing the jna packages?
>
> On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > I basically just leveraged the code you provided to redirect the output
> on
> > Ubuntu.
> >
> > Here is the standard err:
> >
> > log4j:WARN No appenders could be found for logger
> > (org.apache.commons.httpclient.params.DefaultHttpParams).
> > log4j:WARN Please initialize the log4j system properly.
> > log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> > more info.
> > java.lang.reflect.InvocationTargetException
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:606)
> > at
> >
> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> > Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
> > at org.libvirt.Library.free(Unknown Source)
> > at org.libvirt.Connect.getCapabilities(Unknown Source)
> > at
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
> > at
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
> > at com.cloud.agent.Agent.<init>(Agent.java:168)
> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
> > at
> com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
> > at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
> > at com.cloud.agent.AgentShell.start(AgentShell.java:473)
> > ... 5 more
> > Cannot start daemon
> > Service exit with a return value of 5
> >
> >
> > On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> >> Sounds good.
> >>
> >> Thanks, Marcus! :)
> >>
> >>
> >> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >>
> >>> Ok, so the next step is to track that stdout and see if you can see
> >>> what jsvc complains about when it fails to start up the service.
> >>>
> >>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
> >>> <mi...@solidfire.com> wrote:
> >>> > These also look good:
> >>> >
> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
> >>> > x86_64
> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system
> list
> >>> >  Id Name                 State
> >>> > ----------------------------------
> >>> >
> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
> >>> > /var/run/libvirt/libvirt-sock
> >>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05
> /var/run/libvirt/libvirt-sock
> >>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
> >>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
> >>> >
> >>> >
> >>> >
> >>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
> >>> > mike.tutkowski@solidfire.com> wrote:
> >>> >
> >>> >> This is my new agent.properties file (with comments removed...looks
> >>> >> decent):
> >>> >>
> >>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
> >>> >> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
> >>> >> workers=5
> >>> >> host=192.168.233.1
> >>> >> port=8250
> >>> >> cluster=1
> >>> >> pod=1
> >>> >> zone=1
> >>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
> >>> >> private.network.device=cloudbr0
> >>> >> public.network.device=cloudbr0
> >>> >> guest.network.device=cloudbr0
> >>> >>
> >>> >> Yeah, I was always writing stuff out using the logger. I should look
> >>> into
> >>> >> redirecting stdout and stderr.
> >>> >>
> >>> >> Here were my steps to start and check the process status:
> >>> >>
> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> >>> >> cloudstack-agent start
> >>> >>  * Starting CloudStack Agent cloudstack-agent
> >>> >>                                                      [ OK ]
> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
> >>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto
> jsvc
> >>> >>
> >>> >> Also, this might be of interest:
> >>> >>
> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
> >>> >> kvm_intel             137721  0
> >>> >> kvm                   415549  1 kvm_intel
> >>> >>
> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
> >>> >> /proc/cpuinfo
> >>> >> 1
> >>> >>
> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
> >>> >> INFO: /dev/kvm exists
> >>> >> KVM acceleration can be used
> >>> >>
> >>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm '
> /proc/cpuinfo
> >>> >> 1
> >>> >>
> >>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <
> shadowsor@gmail.com
> >>> >wrote:
> >>> >>
> >>> >>> So you:
> >>> >>>
> >>> >>> 1. run that command
> >>> >>> 2. get a brand new agent.properties as a result
> >>> >>> 3. start the service
> >>> >>>
> >>> >>> but you don't see it in the process table?
> >>> >>>
> >>> >>> The agent's STDOUT doesn't go to the agent log, only log4j stuff.
> So
> >>> >>> if there were an error not printed via logger you'd not see it.
>  I'm
> >>> >>> not as familiar with the debian/ubuntu stuff off the top of my
> head,
> >>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
> >>> >>>
> >>> >>> start() {
> >>> >>>     echo -n $"Starting $PROGNAME: "
> >>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
> >>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
> >>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
> >>> >>> $LOGDIR/cloudstack-agent.out $CLASS
> >>> >>>         RETVAL=$?
> >>> >>>         echo
> >>> >>>     else
> >>> >>>
> >>> >>>
> >>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
> >>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
> >>> >>>
> >>> >>> Out of curiosity, what do you get when you do 'lsmod | grep kvm' ?
> I
> >>> >>> know you didn't end up using it, but the devcloud-kvm instructions
> for
> >>> >>> vmware fusion tell you to ensure that your guest has hardware
> >>> >>> virtualization passthrough enabled, I'm wondering if it isn't.
> >>> >>>
> >>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
> >>> >>> <mi...@solidfire.com> wrote:
> >>> >>> > These results look good:
> >>> >>> >
> >>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m
> 192.168.233.1
> >>> -z 1
> >>> >>> -p 1
> >>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> >>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
> >>> >>> > Starting to configure your system:
> >>> >>> > Configure Apparmor ...        [OK]
> >>> >>> > Configure Network ...         [OK]
> >>> >>> > Configure Libvirt ...         [OK]
> >>> >>> > Configure Firewall ...        [OK]
> >>> >>> > Configure Nfs ...             [OK]
> >>> >>> > Configure cloudAgent ...      [OK]
> >>> >>> > CloudStack Agent setup is done!
> >>> >>> >
> >>> >>> > However, these results are the same:
> >>> >>> >
> >>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> >>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto
> >>> jsvc
> >>> >>> >
> >>> >>> >
> >>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
> >>> >>> > mike.tutkowski@solidfire.com> wrote:
> >>> >>> >
> >>> >>> >> This appears to be the offending method:
> >>> >>> >>
> >>> >>> >>     public String parseCapabilitiesXML(String capXML) {
> >>> >>> >>
> >>> >>> >>         if (!_initialized) {
> >>> >>> >>
> >>> >>> >>             return null;
> >>> >>> >>
> >>> >>> >>         }
> >>> >>> >>
> >>> >>> >>         try {
> >>> >>> >>
> >>> >>> >>             _sp.parse(new InputSource(new StringReader(capXML)),
> >>> this);
> >>> >>> >>
> >>> >>> >>             return _capXML.toString();
> >>> >>> >>
> >>> >>> >>         } catch (SAXException se) {
> >>> >>> >>
> >>> >>> >>             s_logger.warn(se.getMessage());
> >>> >>> >>
> >>> >>> >>         } catch (IOException ie) {
> >>> >>> >>
> >>> >>> >>             s_logger.error(ie.getMessage());
> >>> >>> >>
> >>> >>> >>         }
> >>> >>> >>
> >>> >>> >>         return null;
> >>> >>> >>
> >>> >>> >>     }
> >>> >>> >>
> >>> >>> >>
> >>> >>> >> The logging I do from this method (not shown above), however,
> >>> doesn't
> >>> >>> seem
> >>> >>> >> to end up in agent.log. Not sure why that is.
> >>> >>> >>
> >>> >>> >> We invoke this method and I log we're in this method as the
> first
> >>> >>> thing I
> >>> >>> >> do, but it doesn't show up in agent.log.
> >>> >>> >>
> >>> >>> >> The last message in agent.log is a line saying we are right
> before
> >>> the
> >>> >>> >> call to this method.
> >>> >>> >>
> >>> >>> >>
> >>> >>>
> >>> >>
> >>> >>
> >>> >>
> >>> >> --
> >>> >> *Mike Tutkowski*
> >>> >> *Senior CloudStack Developer, SolidFire Inc.*
> >>> >> e: mike.tutkowski@solidfire.com
> >>> >> o: 303.746.7302
> >>> >> Advancing the way the world uses the cloud<
> >>> http://solidfire.com/solution/overview/?video=play>
> >>> >> *™*
> >>> >>
> >>> >
> >>> >
> >>> >
> >>> > --
> >>> > *Mike Tutkowski*
> >>> > *Senior CloudStack Developer, SolidFire Inc.*
> >>> > e: mike.tutkowski@solidfire.com
> >>> > o: 303.746.7302
> >>> > Advancing the way the world uses the
> >>> > cloud<http://solidfire.com/solution/overview/?video=play>
> >>> > *™*
> >>>
> >>
> >>
> >>
> >> --
> >> *Mike Tutkowski*
> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> e: mike.tutkowski@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> >> *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
are you missing the jna packages?
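
One quick way to answer that on Ubuntu (standard dpkg queries; the exact package names to look for are an assumption):

dpkg -l | grep -i jna
dpkg -l | grep -i libvirt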

On Wed, Sep 25, 2013 at 5:40 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> I basically just leveraged the code you provided to redirect the output on
> Ubuntu.
>
> Here is the standard err:
>
> log4j:WARN No appenders could be found for logger
> (org.apache.commons.httpclient.params.DefaultHttpParams).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> more info.
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
> at org.libvirt.Library.free(Unknown Source)
> at org.libvirt.Connect.getCapabilities(Unknown Source)
> at
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
> at
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
> at com.cloud.agent.Agent.<init>(Agent.java:168)
> at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
> at com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
> at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
> at com.cloud.agent.AgentShell.start(AgentShell.java:473)
> ... 5 more
> Cannot start daemon
> Service exit with a return value of 5
>
>
> On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Sounds good.
>>
>> Thanks, Marcus! :)
>>
>>
>> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> Ok, so the next step is to track that stdout and see if you can see
>>> what jsvc complains about when it fails to start up the service.
>>>
>>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
>>> <mi...@solidfire.com> wrote:
>>> > These also look good:
>>> >
>>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>>> > x86_64
>>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system list
>>> >  Id Name                 State
>>> > ----------------------------------
>>> >
>>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>>> > /var/run/libvirt/libvirt-sock
>>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05 /var/run/libvirt/libvirt-sock
>>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>>> >
>>> >
>>> >
>>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>>> > mike.tutkowski@solidfire.com> wrote:
>>> >
>>> >> This is my new agent.properties file (with comments removed...looks
>>> >> decent):
>>> >>
>>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>>> >> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>>> >> workers=5
>>> >> host=192.168.233.1
>>> >> port=8250
>>> >> cluster=1
>>> >> pod=1
>>> >> zone=1
>>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>>> >> private.network.device=cloudbr0
>>> >> public.network.device=cloudbr0
>>> >> guest.network.device=cloudbr0
>>> >>
>>> >> Yeah, I was always writing stuff out using the logger. I should look
>>> into
>>> >> redirecting stdout and stderr.
>>> >>
>>> >> Here were my steps to start and check the process status:
>>> >>
>>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>>> >> cloudstack-agent start
>>> >>  * Starting CloudStack Agent cloudstack-agent
>>> >>                                                      [ OK ]
>>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
>>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto jsvc
>>> >>
>>> >> Also, this might be of interest:
>>> >>
>>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>>> >> kvm_intel             137721  0
>>> >> kvm                   415549  1 kvm_intel
>>> >>
>>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
>>> >> /proc/cpuinfo
>>> >> 1
>>> >>
>>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>>> >> INFO: /dev/kvm exists
>>> >> KVM acceleration can be used
>>> >>
>>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm ' /proc/cpuinfo
>>> >> 1
>>> >>
>>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <shadowsor@gmail.com
>>> >wrote:
>>> >>
>>> >>> So you:
>>> >>>
>>> >>> 1. run that command
>>> >>> 2. get a brand new agent.properties as a result
>>> >>> 3. start the service
>>> >>>
>>> >>> but you don't see it in the process table?
>>> >>>
>>> >>> The agent's STDOUT doesn't go to the agent log, only log4j stuff. So
>>> >>> if there were an error not printed via logger you'd not see it.  I'm
>>> >>> not as familiar with the debian/ubuntu stuff off the top of my head,
>>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>>> >>>
>>> >>> start() {
>>> >>>     echo -n $"Starting $PROGNAME: "
>>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
>>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>>> >>> $LOGDIR/cloudstack-agent.out $CLASS
>>> >>>         RETVAL=$?
>>> >>>         echo
>>> >>>     else
>>> >>>
>>> >>>
>>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
>>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
>>> >>>
>>> >>> Out of curiosity, what do you get when you do 'lsmod | grep kvm' ? I
>>> >>> know you didn't end up using it, but the devcloud-kvm instructions for
>>> >>> vmware fusion tell you to ensure that your guest has hardware
>>> >>> virtualization passthrough enabled, I'm wondering if it isn't.
>>> >>>
>>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>>> >>> <mi...@solidfire.com> wrote:
>>> >>> > These results look good:
>>> >>> >
>>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m 192.168.233.1
>>> -z 1
>>> >>> -p 1
>>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>>> >>> > Starting to configure your system:
>>> >>> > Configure Apparmor ...        [OK]
>>> >>> > Configure Network ...         [OK]
>>> >>> > Configure Libvirt ...         [OK]
>>> >>> > Configure Firewall ...        [OK]
>>> >>> > Configure Nfs ...             [OK]
>>> >>> > Configure cloudAgent ...      [OK]
>>> >>> > CloudStack Agent setup is done!
>>> >>> >
>>> >>> > However, these results are the same:
>>> >>> >
>>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto
>>> jsvc
>>> >>> >
>>> >>> >
>>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>>> >>> > mike.tutkowski@solidfire.com> wrote:
>>> >>> >
>>> >>> >> This appears to be the offending method:
>>> >>> >>
>>> >>> >>     public String parseCapabilitiesXML(String capXML) {
>>> >>> >>
>>> >>> >>         if (!_initialized) {
>>> >>> >>
>>> >>> >>             return null;
>>> >>> >>
>>> >>> >>         }
>>> >>> >>
>>> >>> >>         try {
>>> >>> >>
>>> >>> >>             _sp.parse(new InputSource(new StringReader(capXML)),
>>> this);
>>> >>> >>
>>> >>> >>             return _capXML.toString();
>>> >>> >>
>>> >>> >>         } catch (SAXException se) {
>>> >>> >>
>>> >>> >>             s_logger.warn(se.getMessage());
>>> >>> >>
>>> >>> >>         } catch (IOException ie) {
>>> >>> >>
>>> >>> >>             s_logger.error(ie.getMessage());
>>> >>> >>
>>> >>> >>         }
>>> >>> >>
>>> >>> >>         return null;
>>> >>> >>
>>> >>> >>     }
>>> >>> >>
>>> >>> >>
>>> >>> >> The logging I do from this method (not shown above), however,
>>> doesn't
>>> >>> seem
>>> >>> >> to end up in agent.log. Not sure why that is.
>>> >>> >>
>>> >>> >> We invoke this method and I log we're in this method as the first
>>> >>> thing I
>>> >>> >> do, but it doesn't show up in agent.log.
>>> >>> >>
>>> >>> >> The last message in agent.log is a line saying we are right before
>>> the
>>> >>> >> call to this method.
>>> >>> >>
>>> >>> >>
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> *Mike Tutkowski*
>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>> >> e: mike.tutkowski@solidfire.com
>>> >> o: 303.746.7302
>>> >> Advancing the way the world uses the cloud<
>>> http://solidfire.com/solution/overview/?video=play>
>>> >> *™*
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > *Mike Tutkowski*
>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> > e: mike.tutkowski@solidfire.com
>>> > o: 303.746.7302
>>> > Advancing the way the world uses the
>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> > *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
I basically just leveraged the code you provided to redirect the output on
Ubuntu.

Here is the standard err:

log4j:WARN No appenders could be found for logger
(org.apache.commons.httpclient.params.DefaultHttpParams).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
Caused by: java.lang.NoSuchMethodError: com.sun.jna.Native.free(J)V
at org.libvirt.Library.free(Unknown Source)
at org.libvirt.Connect.getCapabilities(Unknown Source)
at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.IsHVMEnabled(LibvirtComputingResource.java:4524)
at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.configure(LibvirtComputingResource.java:753)
at com.cloud.agent.Agent.<init>(Agent.java:168)
at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:439)
at com.cloud.agent.AgentShell.launchAgentFromClassInfo(AgentShell.java:386)
at com.cloud.agent.AgentShell.launchAgent(AgentShell.java:361)
at com.cloud.agent.AgentShell.start(AgentShell.java:473)
... 5 more
Cannot start daemon
Service exit with a return value of 5
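
For reference, the change described above boils down to pointing jsvc's -outfile and -errfile at readable files, mirroring the CentOS snippet quoted further down. A rough sketch of the equivalent in an Ubuntu init script -- the log directory and variable names are assumptions, only the jsvc flags come from the thread:

LOGDIR=/var/log/cloudstack/agent
$JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
    -errfile "$LOGDIR/cloudstack-agent.err" \
    -outfile "$LOGDIR/cloudstack-agent.out" $CLASS

With that in place, the stack trace above lands in cloudstack-agent.err instead of disappearing with the daemonized process.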


On Wed, Sep 25, 2013 at 5:07 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Sounds good.
>
> Thanks, Marcus! :)
>
>
> On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Ok, so the next step is to track that stdout and see if you can see
>> what jsvc complains about when it fails to start up the service.
>>
>> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > These also look good:
>> >
>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>> > x86_64
>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system list
>> >  Id Name                 State
>> > ----------------------------------
>> >
>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>> > /var/run/libvirt/libvirt-sock
>> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05 /var/run/libvirt/libvirt-sock
>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>> >
>> >
>> >
>> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>> > mike.tutkowski@solidfire.com> wrote:
>> >
>> >> This is my new agent.properties file (with comments removed...looks
>> >> decent):
>> >>
>> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>> >> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>> >> workers=5
>> >> host=192.168.233.1
>> >> port=8250
>> >> cluster=1
>> >> pod=1
>> >> zone=1
>> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>> >> private.network.device=cloudbr0
>> >> public.network.device=cloudbr0
>> >> guest.network.device=cloudbr0
>> >>
>> >> Yeah, I was always writing stuff out using the logger. I should look
>> into
>> >> redirecting stdout and stderr.
>> >>
>> >> Here were my steps to start and check the process status:
>> >>
>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>> >> cloudstack-agent start
>> >>  * Starting CloudStack Agent cloudstack-agent
>> >>                                                      [ OK ]
>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
>> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto jsvc
>> >>
>> >> Also, this might be of interest:
>> >>
>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>> >> kvm_intel             137721  0
>> >> kvm                   415549  1 kvm_intel
>> >>
>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
>> >> /proc/cpuinfo
>> >> 1
>> >>
>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>> >> INFO: /dev/kvm exists
>> >> KVM acceleration can be used
>> >>
>> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm ' /proc/cpuinfo
>> >> 1
>> >>
>> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <shadowsor@gmail.com
>> >wrote:
>> >>
>> >>> So you:
>> >>>
>> >>> 1. run that command
>> >>> 2. get a brand new agent.properties as a result
>> >>> 3. start the service
>> >>>
>> >>> but you don't see it in the process table?
>> >>>
>> >>> The agent's STDOUT doesn't go to the agent log, only log4j stuff. So
>> >>> if there were an error not printed via logger you'd not see it.  I'm
>> >>> not as familiar with the debian/ubuntu stuff off the top of my head,
>> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>> >>>
>> >>> start() {
>> >>>     echo -n $"Starting $PROGNAME: "
>> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
>> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>> >>> $LOGDIR/cloudstack-agent.out $CLASS
>> >>>         RETVAL=$?
>> >>>         echo
>> >>>     else
>> >>>
>> >>>
>> >>> Which sends STDOUT to cloudstack-agent.out and errors to
>> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
>> >>>
>> >>> Out of curiosity, what do you get when you do 'lsmod | grep kvm' ? I
>> >>> know you didn't end up using it, but the devcloud-kvm instructions for
>> >>> vmware fusion tell you to ensure that your guest has hardware
>> >>> virtualization passthrough enabled, I'm wondering if it isn't.
>> >>>
>> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>> >>> <mi...@solidfire.com> wrote:
>> >>> > These results look good:
>> >>> >
>> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m 192.168.233.1
>> -z 1
>> >>> -p 1
>> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>> >>> > Starting to configure your system:
>> >>> > Configure Apparmor ...        [OK]
>> >>> > Configure Network ...         [OK]
>> >>> > Configure Libvirt ...         [OK]
>> >>> > Configure Firewall ...        [OK]
>> >>> > Configure Nfs ...             [OK]
>> >>> > Configure cloudAgent ...      [OK]
>> >>> > CloudStack Agent setup is done!
>> >>> >
>> >>> > However, these results are the same:
>> >>> >
>> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto
>> jsvc
>> >>> >
>> >>> >
>> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>> >>> > mike.tutkowski@solidfire.com> wrote:
>> >>> >
>> >>> >> This appears to be the offending method:
>> >>> >>
>> >>> >>     public String parseCapabilitiesXML(String capXML) {
>> >>> >>
>> >>> >>         if (!_initialized) {
>> >>> >>
>> >>> >>             return null;
>> >>> >>
>> >>> >>         }
>> >>> >>
>> >>> >>         try {
>> >>> >>
>> >>> >>             _sp.parse(new InputSource(new StringReader(capXML)),
>> this);
>> >>> >>
>> >>> >>             return _capXML.toString();
>> >>> >>
>> >>> >>         } catch (SAXException se) {
>> >>> >>
>> >>> >>             s_logger.warn(se.getMessage());
>> >>> >>
>> >>> >>         } catch (IOException ie) {
>> >>> >>
>> >>> >>             s_logger.error(ie.getMessage());
>> >>> >>
>> >>> >>         }
>> >>> >>
>> >>> >>         return null;
>> >>> >>
>> >>> >>     }
>> >>> >>
>> >>> >>
>> >>> >> The logging I do from this method (not shown above), however,
>> doesn't
>> >>> seem
>> >>> >> to end up in agent.log. Not sure why that is.
>> >>> >>
>> >>> >> We invoke this method and I log we're in this method as the first
>> >>> thing I
>> >>> >> do, but it doesn't show up in agent.log.
>> >>> >>
>> >>> >> The last message in agent.log is a line saying we are right before
>> the
>> >>> >> call to this method.
>> >>> >>
>> >>> >>
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> *Mike Tutkowski*
>> >> *Senior CloudStack Developer, SolidFire Inc.*
>> >> e: mike.tutkowski@solidfire.com
>> >> o: 303.746.7302
>> >> Advancing the way the world uses the cloud<
>> http://solidfire.com/solution/overview/?video=play>
>> >> *™*
>> >>
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Sounds good.

Thanks, Marcus! :)


On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Ok, so the next step is to track that stdout and see if you can see
> what jsvc complains about when it fails to start up the service.
>
> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > These also look good:
> >
> > mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
> > x86_64
> > mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system list
> >  Id Name                 State
> > ----------------------------------
> >
> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
> > /var/run/libvirt/libvirt-sock
> > srwxrwx--- 1 root libvirtd 0 Sep 25 16:05 /var/run/libvirt/libvirt-sock
> > mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
> > crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
> >
> >
> >
> > On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> >> This is my new agent.properties file (with comments removed...looks
> >> decent):
> >>
> >> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
> >> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
> >> workers=5
> >> host=192.168.233.1
> >> port=8250
> >> cluster=1
> >> pod=1
> >> zone=1
> >> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
> >> private.network.device=cloudbr0
> >> public.network.device=cloudbr0
> >> guest.network.device=cloudbr0
> >>
> >> Yeah, I was always writing stuff out using the logger. I should look
> into
> >> redirecting stdout and stderr.
> >>
> >> Here were my steps to start and check the process status:
> >>
> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> >> cloudstack-agent start
> >>  * Starting CloudStack Agent cloudstack-agent
> >>                                                      [ OK ]
> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
> >> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto jsvc
> >>
> >> Also, this might be of interest:
> >>
> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
> >> kvm_intel             137721  0
> >> kvm                   415549  1 kvm_intel
> >>
> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
> >> /proc/cpuinfo
> >> 1
> >>
> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
> >> INFO: /dev/kvm exists
> >> KVM acceleration can be used
> >>
> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm ' /proc/cpuinfo
> >> 1
> >>
> >> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >>
> >>> So you:
> >>>
> >>> 1. run that command
> >>> 2. get a brand new agent.properties as a result
> >>> 3. start the service
> >>>
> >>> but you don't see it in the process table?
> >>>
> >>> The agent's STDOUT doesn't go to the agent log, only log4j stuff. So
> >>> if there were an error not printed via logger you'd not see it.  I'm
> >>> not as familiar with the debian/ubuntu stuff off the top of my head,
> >>> but in /etc/init.d/cloudstack-agent on CentOS we do:
> >>>
> >>> start() {
> >>>     echo -n $"Starting $PROGNAME: "
> >>>     if hostname --fqdn >/dev/null 2>&1 ; then
> >>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
> >>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
> >>> $LOGDIR/cloudstack-agent.out $CLASS
> >>>         RETVAL=$?
> >>>         echo
> >>>     else
> >>>
> >>>
> >>> Which sends STDOUT to cloudstack-agent.out and errors to
> >>> cloudstack-agent.err. You can look to see what Ubuntu does.
> >>>
> >>> Out of curiosity, what do you get when you do 'lsmod | grep kvm' ? I
> >>> know you didn't end up using it, but the devcloud-kvm instructions for
> >>> vmware fusion tell you to ensure that your guest has hardware
> >>> virtualization passthrough enabled, I'm wondering if it isn't.
> >>>
> >>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
> >>> <mi...@solidfire.com> wrote:
> >>> > These results look good:
> >>> >
> >>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m 192.168.233.1
> -z 1
> >>> -p 1
> >>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> >>> > --prvNic=cloudbr0 --guestNic=cloudbr0
> >>> > Starting to configure your system:
> >>> > Configure Apparmor ...        [OK]
> >>> > Configure Network ...         [OK]
> >>> > Configure Libvirt ...         [OK]
> >>> > Configure Firewall ...        [OK]
> >>> > Configure Nfs ...             [OK]
> >>> > Configure cloudAgent ...      [OK]
> >>> > CloudStack Agent setup is done!
> >>> >
> >>> > However, these results are the same:
> >>> >
> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> >>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto
> jsvc
> >>> >
> >>> >
> >>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
> >>> > mike.tutkowski@solidfire.com> wrote:
> >>> >
> >>> >> This appears to be the offending method:
> >>> >>
> >>> >>     public String parseCapabilitiesXML(String capXML) {
> >>> >>
> >>> >>         if (!_initialized) {
> >>> >>
> >>> >>             return null;
> >>> >>
> >>> >>         }
> >>> >>
> >>> >>         try {
> >>> >>
> >>> >>             _sp.parse(new InputSource(new StringReader(capXML)),
> this);
> >>> >>
> >>> >>             return _capXML.toString();
> >>> >>
> >>> >>         } catch (SAXException se) {
> >>> >>
> >>> >>             s_logger.warn(se.getMessage());
> >>> >>
> >>> >>         } catch (IOException ie) {
> >>> >>
> >>> >>             s_logger.error(ie.getMessage());
> >>> >>
> >>> >>         }
> >>> >>
> >>> >>         return null;
> >>> >>
> >>> >>     }
> >>> >>
> >>> >>
> >>> >> The logging I do from this method (not shown above), however,
> doesn't
> >>> seem
> >>> >> to end up in agent.log. Not sure why that is.
> >>> >>
> >>> >> We invoke this method and I log we're in this method as the first
> >>> thing I
> >>> >> do, but it doesn't show up in agent.log.
> >>> >>
> >>> >> The last message in agent.log is a line saying we are right before
> the
> >>> >> call to this method.
> >>> >>
> >>> >>
> >>>
> >>
> >>
> >>
> >> --
> >> *Mike Tutkowski*
> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> e: mike.tutkowski@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> >> *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
In the past, prior to the addition to the CentOS init script that I
mentioned, I'd modify the init script to echo out the jsvc command it
was going to run, then I'd run that manually instead of the init. Then
I could see where it died.
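
Roughly, something like this (from memory, so treat the class name and
paths below as placeholders and double-check them against what the script
actually echoes):

sudo sh -x /etc/init.d/cloudstack-agent start 2>&1 | grep -A2 jsvc
# then take the jsvc command it printed and re-run it in the foreground,
# adding -nodetach and -debug and pointing the output somewhere tail-able:
sudo jsvc -nodetach -debug -cp "<classpath from the script>" \
    -pidfile /var/run/cloudstack-agent.pid \
    -outfile /tmp/agent.out -errfile /tmp/agent.err \
    com.cloud.agent.AgentShell
tail -f /tmp/agent.out /tmp/agent.err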

On Wed, Sep 25, 2013 at 5:04 PM, Marcus Sorensen <sh...@gmail.com> wrote:
> Ok, so the next step is to track that stdout and see if you can see
> what jsvc complains about when it fails to start up the service.
>
> On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
>> These also look good:
>>
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
>> x86_64
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system list
>>  Id Name                 State
>> ----------------------------------
>>
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
>> /var/run/libvirt/libvirt-sock
>> srwxrwx--- 1 root libvirtd 0 Sep 25 16:05 /var/run/libvirt/libvirt-sock
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
>> crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>>
>>
>>
>> On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> This is my new agent.properties file (with comments removed...looks
>>> decent):
>>>
>>> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>>> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>>> workers=5
>>> host=192.168.233.1
>>> port=8250
>>> cluster=1
>>> pod=1
>>> zone=1
>>> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>>> private.network.device=cloudbr0
>>> public.network.device=cloudbr0
>>> guest.network.device=cloudbr0
>>>
>>> Yeah, I was always writing stuff out using the logger. I should look into
>>> redirecting stdout and stderr.
>>>
>>> Here were my steps to start and check the process status:
>>>
>>> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>>> cloudstack-agent start
>>>  * Starting CloudStack Agent cloudstack-agent
>>>                                                      [ OK ]
>>> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
>>> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto jsvc
>>>
>>> Also, this might be of interest:
>>>
>>> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>>> kvm_intel             137721  0
>>> kvm                   415549  1 kvm_intel
>>>
>>> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
>>> /proc/cpuinfo
>>> 1
>>>
>>> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>>> INFO: /dev/kvm exists
>>> KVM acceleration can be used
>>>
>>> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm ' /proc/cpuinfo
>>> 1
>>>
>>> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>
>>>> So you:
>>>>
>>>> 1. run that command
>>>> 2. get a brand new agent.properties as a result
>>>> 3. start the service
>>>>
>>>> but you don't see it in the process table?
>>>>
>>>> The agent's STDOUT doesn't go to the agent log, only log4j stuff. So
>>>> if there were an error not printed via logger you'd not see it.  I'm
>>>> not as familiar with the debian/ubuntu stuff off the top of my head,
>>>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>>>>
>>>> start() {
>>>>     echo -n $"Starting $PROGNAME: "
>>>>     if hostname --fqdn >/dev/null 2>&1 ; then
>>>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>>>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>>>> $LOGDIR/cloudstack-agent.out $CLASS
>>>>         RETVAL=$?
>>>>         echo
>>>>     else
>>>>
>>>>
>>>> Which sends STDOUT to cloudstack-agent.out and errors to
>>>> cloudstack-agent.err. You can look to see what Ubuntu does.
>>>>
>>>> Out of curiosity, what do you get when you do 'lsmod | grep kvm' ? I
>>>> know you didn't end up using it, but the devcloud-kvm instructions for
>>>> vmware fusion tell you to ensure that your guest has hardware
>>>> virtualization passthrough enabled, I'm wondering if it isn't.
>>>>
>>>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>>>> <mi...@solidfire.com> wrote:
>>>> > These results look good:
>>>> >
>>>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m 192.168.233.1 -z 1
>>>> -p 1
>>>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>>>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>>>> > Starting to configure your system:
>>>> > Configure Apparmor ...        [OK]
>>>> > Configure Network ...         [OK]
>>>> > Configure Libvirt ...         [OK]
>>>> > Configure Firewall ...        [OK]
>>>> > Configure Nfs ...             [OK]
>>>> > Configure cloudAgent ...      [OK]
>>>> > CloudStack Agent setup is done!
>>>> >
>>>> > However, these results are the same:
>>>> >
>>>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>>>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto jsvc
>>>> >
>>>> >
>>>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>>>> > mike.tutkowski@solidfire.com> wrote:
>>>> >
>>>> >> This appears to be the offending method:
>>>> >>
>>>> >>     public String parseCapabilitiesXML(String capXML) {
>>>> >>
>>>> >>         if (!_initialized) {
>>>> >>
>>>> >>             return null;
>>>> >>
>>>> >>         }
>>>> >>
>>>> >>         try {
>>>> >>
>>>> >>             _sp.parse(new InputSource(new StringReader(capXML)), this);
>>>> >>
>>>> >>             return _capXML.toString();
>>>> >>
>>>> >>         } catch (SAXException se) {
>>>> >>
>>>> >>             s_logger.warn(se.getMessage());
>>>> >>
>>>> >>         } catch (IOException ie) {
>>>> >>
>>>> >>             s_logger.error(ie.getMessage());
>>>> >>
>>>> >>         }
>>>> >>
>>>> >>         return null;
>>>> >>
>>>> >>     }
>>>> >>
>>>> >>
>>>> >> The logging I do from this method (not shown above), however, doesn't
>>>> seem
>>>> >> to end up in agent.log. Not sure why that is.
>>>> >>
>>>> >> We invoke this method and I log we're in this method as the first
>>>> thing I
>>>> >> do, but it doesn't show up in agent.log.
>>>> >>
>>>> >> The last message in agent.log is a line saying we are right before the
>>>> >> call to this method.
>>>> >>
>>>> >>
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the
>> cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Ok, so the next step is to track that stdout and see if you can see
what jsvc complains about when it fails to start up the service.
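
On Ubuntu I'd expect the out/err files to land next to the other agent
logs; this is a guess at the paths, so adjust to whatever the init script
actually uses:

sudo /usr/sbin/service cloudstack-agent start
ls -l /var/log/cloudstack/agent/
tail -n 50 /var/log/cloudstack/agent/cloudstack-agent.out \
           /var/log/cloudstack/agent/cloudstack-agent.err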

On Wed, Sep 25, 2013 at 4:56 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> These also look good:
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
> x86_64
> mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system list
>  Id Name                 State
> ----------------------------------
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
> /var/run/libvirt/libvirt-sock
> srwxrwx--- 1 root libvirtd 0 Sep 25 16:05 /var/run/libvirt/libvirt-sock
> mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
> crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm
>
>
>
> On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> This is my new agent.properties file (with comments removed...looks
>> decent):
>>
>> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
>> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>> workers=5
>> host=192.168.233.1
>> port=8250
>> cluster=1
>> pod=1
>> zone=1
>> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
>> private.network.device=cloudbr0
>> public.network.device=cloudbr0
>> guest.network.device=cloudbr0
>>
>> Yeah, I was always writing stuff out using the logger. I should look into
>> redirecting stdout and stderr.
>>
>> Here were my steps to start and check the process status:
>>
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>> cloudstack-agent start
>>  * Starting CloudStack Agent cloudstack-agent
>>                                                      [ OK ]
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
>> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto jsvc
>>
>> Also, this might be of interest:
>>
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
>> kvm_intel             137721  0
>> kvm                   415549  1 kvm_intel
>>
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
>> /proc/cpuinfo
>> 1
>>
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
>> INFO: /dev/kvm exists
>> KVM acceleration can be used
>>
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm ' /proc/cpuinfo
>> 1
>>
>> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> So you:
>>>
>>> 1. run that command
>>> 2. get a brand new agent.properties as a result
>>> 3. start the service
>>>
>>> but you don't see it in the process table?
>>>
>>> The agent's STDOUT doesn't go to the agent log, only log4j stuff. So
>>> if there were an error not printed via logger you'd not see it.  I'm
>>> not as familiar with the debian/ubuntu stuff off the top of my head,
>>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>>>
>>> start() {
>>>     echo -n $"Starting $PROGNAME: "
>>>     if hostname --fqdn >/dev/null 2>&1 ; then
>>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>>> $LOGDIR/cloudstack-agent.out $CLASS
>>>         RETVAL=$?
>>>         echo
>>>     else
>>>
>>>
>>> Which sends STDOUT to cloudstack-agent.out and errors to
>>> cloudstack-agent.err. You can look to see what Ubuntu does.
>>>
>>> Out of curiosity, what do you get when you do 'lsmod | grep kvm' ? I
>>> know you didn't end up using it, but the devcloud-kvm instructions for
>>> vmware fusion tell you to ensure that your guest has hardware
>>> virtualization passthrough enabled, I'm wondering if it isn't.
>>>
>>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>>> <mi...@solidfire.com> wrote:
>>> > These results look good:
>>> >
>>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m 192.168.233.1 -z 1
>>> -p 1
>>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>>> > Starting to configure your system:
>>> > Configure Apparmor ...        [OK]
>>> > Configure Network ...         [OK]
>>> > Configure Libvirt ...         [OK]
>>> > Configure Firewall ...        [OK]
>>> > Configure Nfs ...             [OK]
>>> > Configure cloudAgent ...      [OK]
>>> > CloudStack Agent setup is done!
>>> >
>>> > However, these results are the same:
>>> >
>>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto jsvc
>>> >
>>> >
>>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>>> > mike.tutkowski@solidfire.com> wrote:
>>> >
>>> >> This appears to be the offending method:
>>> >>
>>> >>     public String parseCapabilitiesXML(String capXML) {
>>> >>
>>> >>         if (!_initialized) {
>>> >>
>>> >>             return null;
>>> >>
>>> >>         }
>>> >>
>>> >>         try {
>>> >>
>>> >>             _sp.parse(new InputSource(new StringReader(capXML)), this);
>>> >>
>>> >>             return _capXML.toString();
>>> >>
>>> >>         } catch (SAXException se) {
>>> >>
>>> >>             s_logger.warn(se.getMessage());
>>> >>
>>> >>         } catch (IOException ie) {
>>> >>
>>> >>             s_logger.error(ie.getMessage());
>>> >>
>>> >>         }
>>> >>
>>> >>         return null;
>>> >>
>>> >>     }
>>> >>
>>> >>
>>> >> The logging I do from this method (not shown above), however, doesn't
>>> seem
>>> >> to end up in agent.log. Not sure why that is.
>>> >>
>>> >> We invoke this method and I log we're in this method as the first
>>> thing I
>>> >> do, but it doesn't show up in agent.log.
>>> >>
>>> >> The last message in agent.log is a line saying we are right before the
>>> >> call to this method.
>>> >>
>>> >>
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
These also look good:

mtutkowski@ubuntu:/etc/cloudstack/agent$ uname -m
x86_64
mtutkowski@ubuntu:/etc/cloudstack/agent$ virsh -c qemu:///system list
 Id Name                 State
----------------------------------

mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ls -la
/var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 Sep 25 16:05 /var/run/libvirt/libvirt-sock
mtutkowski@ubuntu:/etc/cloudstack/agent$ ls -l /dev/kvm
crw-rw----+ 1 root kvm 10, 232 Sep 25 15:22 /dev/kvm



On Wed, Sep 25, 2013 at 4:53 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> This is my new agent.properties file (with comments removed...looks
> decent):
>
> guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
> workers=5
> host=192.168.233.1
> port=8250
> cluster=1
> pod=1
> zone=1
> local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
> private.network.device=cloudbr0
> public.network.device=cloudbr0
> guest.network.device=cloudbr0
>
> Yeah, I was always writing stuff out using the logger. I should look into
> redirecting stdout and stderr.
>
> Here were my steps to start and check the process status:
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> cloudstack-agent start
>  * Starting CloudStack Agent cloudstack-agent
>                                                      [ OK ]
> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
> 1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto jsvc
>
> Also, this might be of interest:
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
> kvm_intel             137721  0
> kvm                   415549  1 kvm_intel
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)'
> /proc/cpuinfo
> 1
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
> INFO: /dev/kvm exists
> KVM acceleration can be used
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm ' /proc/cpuinfo
> 1
>
> On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> So you:
>>
>> 1. run that command
>> 2. get a brand new agent.properties as a result
>> 3. start the service
>>
>> but you don't see it in the process table?
>>
>> The agent's STDOUT doesn't go to the agent log, only log4j stuff. So
>> if there were an error not printed via logger you'd not see it.  I'm
>> not as familiar with the debian/ubuntu stuff off the top of my head,
>> but in /etc/init.d/cloudstack-agent on CentOS we do:
>>
>> start() {
>>     echo -n $"Starting $PROGNAME: "
>>     if hostname --fqdn >/dev/null 2>&1 ; then
>>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>>             -errfile $LOGDIR/cloudstack-agent.err -outfile
>> $LOGDIR/cloudstack-agent.out $CLASS
>>         RETVAL=$?
>>         echo
>>     else
>>
>>
>> Which sends STDOUT to cloudstack-agent.out and errors to
>> cloudstack-agent.err. You can look to see what Ubuntu does.
>>
>> Out of curiosity, what do you get when you do 'lsmod | grep kvm' ? I
>> know you didn't end up using it, but the devcloud-kvm instructions for
>> vmware fusion tell you to ensure that your guest has hardware
>> virtualization passthrough enabled, I'm wondering if it isn't.
>>
>> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > These results look good:
>> >
>> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m 192.168.233.1 -z 1
>> -p 1
>> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>> > --prvNic=cloudbr0 --guestNic=cloudbr0
>> > Starting to configure your system:
>> > Configure Apparmor ...        [OK]
>> > Configure Network ...         [OK]
>> > Configure Libvirt ...         [OK]
>> > Configure Firewall ...        [OK]
>> > Configure Nfs ...             [OK]
>> > Configure cloudAgent ...      [OK]
>> > CloudStack Agent setup is done!
>> >
>> > However, these results are the same:
>> >
>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto jsvc
>> >
>> >
>> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
>> > mike.tutkowski@solidfire.com> wrote:
>> >
>> >> This appears to be the offending method:
>> >>
>> >>     public String parseCapabilitiesXML(String capXML) {
>> >>
>> >>         if (!_initialized) {
>> >>
>> >>             return null;
>> >>
>> >>         }
>> >>
>> >>         try {
>> >>
>> >>             _sp.parse(new InputSource(new StringReader(capXML)), this);
>> >>
>> >>             return _capXML.toString();
>> >>
>> >>         } catch (SAXException se) {
>> >>
>> >>             s_logger.warn(se.getMessage());
>> >>
>> >>         } catch (IOException ie) {
>> >>
>> >>             s_logger.error(ie.getMessage());
>> >>
>> >>         }
>> >>
>> >>         return null;
>> >>
>> >>     }
>> >>
>> >>
>> >> The logging I do from this method (not shown above), however, doesn't
>> seem
>> >> to end up in agent.log. Not sure why that is.
>> >>
>> >> We invoke this method and I log we're in this method as the first
>> thing I
>> >> do, but it doesn't show up in agent.log.
>> >>
>> >> The last message in agent.log is a line saying we are right before the
>> >> call to this method.
>> >>
>> >>
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
This is my new agent.properties file (with comments removed...looks decent):

guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
workers=5
host=192.168.233.1
port=8250
cluster=1
pod=1
zone=1
local.storage.uuid=aced86a2-2dd6-450a-93e5-1bc0ec3c73be
private.network.device=cloudbr0
public.network.device=cloudbr0
guest.network.device=cloudbr0

Yeah, I was always writing stuff out using the logger. I should look into
redirecting stdout and stderr.
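
Something like this at the very top of the agent's main() would probably do
as a debug-only hack (the file paths are just made up here):

try {
    System.setOut(new java.io.PrintStream(new java.io.File("/tmp/agent-stdout.log")));
    System.setErr(new java.io.PrintStream(new java.io.File("/tmp/agent-stderr.log")));
} catch (java.io.FileNotFoundException e) {
    // fall back to whatever stdout/stderr already point at
}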

Here were my steps to start and check the process status:

mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
cloudstack-agent start
 * Starting CloudStack Agent cloudstack-agent
                                                   [ OK ]
mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo ps -ef | grep jsvc
1000      4605  3725  0 16:47 pts/1    00:00:00 grep --color=auto jsvc

Also, this might be of interest:

mtutkowski@ubuntu:/etc/cloudstack/agent$ lsmod | grep kvm
kvm_intel             137721  0
kvm                   415549  1 kvm_intel

mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c '(vmx|svm)' /proc/cpuinfo
1

mtutkowski@ubuntu:/etc/cloudstack/agent$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

mtutkowski@ubuntu:/etc/cloudstack/agent$ egrep -c ' lm ' /proc/cpuinfo
1

On Wed, Sep 25, 2013 at 4:39 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> So you:
>
> 1. run that command
> 2. get a brand new agent.properties as a result
> 3. start the service
>
> but you don't see it in the process table?
>
> The agent's STDOUT doesn't go to the agent log, only log4j stuff. So
> if there were an error not printed via logger you'd not see it.  I'm
> not as familiar with the debian/ubuntu stuff off the top of my head,
> but in /etc/init.d/cloudstack-agent on CentOS we do:
>
> start() {
>     echo -n $"Starting $PROGNAME: "
>     if hostname --fqdn >/dev/null 2>&1 ; then
>         $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
>             -errfile $LOGDIR/cloudstack-agent.err -outfile
> $LOGDIR/cloudstack-agent.out $CLASS
>         RETVAL=$?
>         echo
>     else
>
>
> Which sends STDOUT to cloudstack-agent.out and errors to
> cloudstack-agent.err. You can look to see what Ubuntu does.
>
> Out of curiosity, what do you get when you do 'lsmod | grep kvm' ? I
> know you didn't end up using it, but the devcloud-kvm instructions for
> vmware fusion tell you to ensure that your guest has hardware
> virtualization passthrough enabled, I'm wondering if it isn't.
>
> On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > These results look good:
> >
> > mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m 192.168.233.1 -z 1
> -p 1
> > -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> > --prvNic=cloudbr0 --guestNic=cloudbr0
> > Starting to configure your system:
> > Configure Apparmor ...        [OK]
> > Configure Network ...         [OK]
> > Configure Libvirt ...         [OK]
> > Configure Firewall ...        [OK]
> > Configure Nfs ...             [OK]
> > Configure cloudAgent ...      [OK]
> > CloudStack Agent setup is done!
> >
> > However, these results are the same:
> >
> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> > 1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto jsvc
> >
> >
> > On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> >> This appears to be the offending method:
> >>
> >>     public String parseCapabilitiesXML(String capXML) {
> >>
> >>         if (!_initialized) {
> >>
> >>             return null;
> >>
> >>         }
> >>
> >>         try {
> >>
> >>             _sp.parse(new InputSource(new StringReader(capXML)), this);
> >>
> >>             return _capXML.toString();
> >>
> >>         } catch (SAXException se) {
> >>
> >>             s_logger.warn(se.getMessage());
> >>
> >>         } catch (IOException ie) {
> >>
> >>             s_logger.error(ie.getMessage());
> >>
> >>         }
> >>
> >>         return null;
> >>
> >>     }
> >>
> >>
> >> The logging I do from this method (not shown above), however, doesn't
> seem
> >> to end up in agent.log. Not sure why that is.
> >>
> >> We invoke this method and I log we're in this method as the first thing
> I
> >> do, but it doesn't show up in agent.log.
> >>
> >> The last message in agent.log is a line saying we are right before the
> >> call to this method.
> >>
> >>
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
So you:

1. run that command
2. get a brand new agent.properties as a result
3. start the service

but you don't see it in the process table?

The agent's STDOUT doesn't go to the agent log, only log4j stuff. So
if there were an error not printed via logger you'd not see it.  I'm
not as familiar with the debian/ubuntu stuff off the top of my head,
but in /etc/init.d/cloudstack-agent on CentOS we do:

start() {
    echo -n $"Starting $PROGNAME: "
    if hostname --fqdn >/dev/null 2>&1 ; then
        $JSVC -cp "$CLASSPATH" -pidfile "$PIDFILE" \
            -errfile $LOGDIR/cloudstack-agent.err -outfile $LOGDIR/cloudstack-agent.out $CLASS
        RETVAL=$?
        echo
    else


Which sends STDOUT to cloudstack-agent.out and errors to
cloudstack-agent.err. You can look to see what Ubuntu does.
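
A quick way to check (just guessing at the Ubuntu layout, adjust as needed):

grep -nE 'jsvc|outfile|errfile' /etc/init.d/cloudstack-agent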

Out of curiosity, what do you get when you do 'lsmod | grep kvm'? I
know you didn't end up using it, but the devcloud-kvm instructions for
VMware Fusion tell you to ensure that your guest has hardware
virtualization passthrough enabled; I'm wondering if it isn't.

On Wed, Sep 25, 2013 at 4:11 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> These results look good:
>
> mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m 192.168.233.1 -z 1 -p 1
> -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> --prvNic=cloudbr0 --guestNic=cloudbr0
> Starting to configure your system:
> Configure Apparmor ...        [OK]
> Configure Network ...         [OK]
> Configure Libvirt ...         [OK]
> Configure Firewall ...        [OK]
> Configure Nfs ...             [OK]
> Configure cloudAgent ...      [OK]
> CloudStack Agent setup is done!
>
> However, these results are the same:
>
> mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> 1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto jsvc
>
>
> On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> This appears to be the offending method:
>>
>>     public String parseCapabilitiesXML(String capXML) {
>>
>>         if (!_initialized) {
>>
>>             return null;
>>
>>         }
>>
>>         try {
>>
>>             _sp.parse(new InputSource(new StringReader(capXML)), this);
>>
>>             return _capXML.toString();
>>
>>         } catch (SAXException se) {
>>
>>             s_logger.warn(se.getMessage());
>>
>>         } catch (IOException ie) {
>>
>>             s_logger.error(ie.getMessage());
>>
>>         }
>>
>>         return null;
>>
>>     }
>>
>>
>> The logging I do from this method (not shown above), however, doesn't seem
>> to end up in agent.log. Not sure why that is.
>>
>> We invoke this method and I log we're in this method as the first thing I
>> do, but it doesn't show up in agent.log.
>>
>> The last message in agent.log is a line saying we are right before the
>> call to this method.
>>
>>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
These results look good:

mtutkowski@ubuntu:~$ sudo cloudstack-setup-agent -m 192.168.233.1 -z 1 -p 1
-c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
--prvNic=cloudbr0 --guestNic=cloudbr0
Starting to configure your system:
Configure Apparmor ...        [OK]
Configure Network ...         [OK]
Configure Libvirt ...         [OK]
Configure Firewall ...        [OK]
Configure Nfs ...             [OK]
Configure cloudAgent ...      [OK]
CloudStack Agent setup is done!

However, these results are the same:

mtutkowski@ubuntu:~$ ps -ef | grep jsvc
1000      4314  3725  0 16:10 pts/1    00:00:00 grep --color=auto jsvc


On Wed, Sep 25, 2013 at 3:48 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> This appears to be the offending method:
>
>     public String parseCapabilitiesXML(String capXML) {
>
>         if (!_initialized) {
>
>             return null;
>
>         }
>
>         try {
>
>             _sp.parse(new InputSource(new StringReader(capXML)), this);
>
>             return _capXML.toString();
>
>         } catch (SAXException se) {
>
>             s_logger.warn(se.getMessage());
>
>         } catch (IOException ie) {
>
>             s_logger.error(ie.getMessage());
>
>         }
>
>         return null;
>
>     }
>
>
> The logging I do from this method (not shown above), however, doesn't seem
> to end up in agent.log. Not sure why that is.
>
> We invoke this method and I log we're in this method as the first thing I
> do, but it doesn't show up in agent.log.
>
> The last message in agent.log is a line saying we are right before the
> call to this method.
>
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
This appears to be the offending method:

    public String parseCapabilitiesXML(String capXML) {
        if (!_initialized) {
            return null;
        }
        try {
            _sp.parse(new InputSource(new StringReader(capXML)), this);
            return _capXML.toString();
        } catch (SAXException se) {
            s_logger.warn(se.getMessage());
        } catch (IOException ie) {
            s_logger.error(ie.getMessage());
        }
        return null;
    }


The logging I do from this method (not shown above), however, doesn't seem
to end up in agent.log. Not sure why that is.

We invoke this method and I log we're in this method as the first thing I
do, but it doesn't show up in agent.log.

The last message in agent.log is a line saying we are right before the call
to this method.
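
One thing I may try, since stdout never reaches agent.log: temporarily wrap
the call site in a catch-everything block that writes straight to a file, so
even an Error that kills the thread before log4j flushes still leaves a
trace, in case it's dying on an Error rather than truly hanging (rough
sketch only; the file path and the surrounding call are placeholders):

try {
    String caps = parseCapabilitiesXML(capXML);   // the call that never returns
} catch (Throwable t) {                           // Throwable catches Errors too
    try {
        java.io.PrintWriter pw = new java.io.PrintWriter(
                new java.io.FileWriter("/tmp/agent-debug.log", true));
        t.printStackTrace(pw);
        pw.close();
    } catch (java.io.IOException ignored) {
        // nothing more we can do here
    }
}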


On Tue, Sep 24, 2013 at 10:53 AM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Right...I've seen it rewritten before. I just have to get a little time
> today to resume my testing and I can re-add the host to CS and see how the
> file looks then.
>
>
> On Tue, Sep 24, 2013 at 10:05 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> If that's what your agent.properties looks like after you try to add
>> the host then that's your problem. It should be completely rewritten
>> after cloudstack-setup-agent, with all of the parameters passed to it.
>> you should run it manually and see what is failing, there's probably a
>> step missed in the ubuntu setup instructions.
>>
>> On Tue, Sep 24, 2013 at 10:00 AM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > This is what a fresh agent.properties file looks like on my system.
>> >
>> > I expect if I try to add it to a cluster, the empty, localhost, and
>> default
>> > values below should be filled in.
>> >
>> > I plan to try to add it to a cluster in a bit.
>> >
>> > # The GUID to identify the agent with, this is mandatory!
>> > # Generate with "uuidgen"
>> > guid=
>> >
>> > #resource= the java class, which agent load to execute
>> > resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>> >
>> > #workers= number of threads running in agent
>> > workers=5
>> >
>> > #host= The IP address of management server
>> > host=localhost
>> >
>> > #port = The port management server listening on, default is 8250
>> > port=8250
>> >
>> > #cluster= The cluster which the agent belongs to
>> > cluster=default
>> >
>> > #pod= The pod which the agent belongs to
>> > pod=default
>> >
>> > #zone= The zone which the agent belongs to
>> > zone=default
>> >
>> >
>> >
>> > On Tue, Sep 24, 2013 at 8:55 AM, Marcus Sorensen <shadowsor@gmail.com
>> >wrote:
>> >
>> >> I stull haven't seen your agent.properties. This would tell me if your
>> >> setup succeeded.  At this point my best guess is that
>> >> "cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
>> >> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>> >> --prvNic=cloudbr0 --guestNic=cloudbr0" failed in some fashion. You can
>> >> run it manually at any time to see. Once that is run, then the agent
>> >> should come up. The resource name in your code is pulled from
>> >> agent.properties (I believe) and is usually
>> >> "resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource".
>> >>
>> >> On Tue, Sep 24, 2013 at 12:12 AM, Mike Tutkowski
>> >> <mi...@solidfire.com> wrote:
>> >> > I've been narrowing it down by putting in a million print-to-log
>> >> statements.
>> >> >
>> >> > Do you know if it is a problem that value ends up null (in a
>> constructor
>> >> > for Agent)?
>> >> >
>> >> > String value = _shell.getPersistentProperty(getResourceName(), "id");
>> >> >
>> >> > In that same constructor, this line never finishes:
>> >> >
>> >> > if (!_resource.configure(getResourceName(), params)) {
>> >> >
>> >> > I need to dig into the configure method to see what's going on there.
>> >> >
>> >> >
>> >> > On Mon, Sep 23, 2013 at 5:45 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> >> >wrote:
>> >> >
>> >> >> It might be a centos specific thing. These are created by the init
>> >> scripts.
>> >> >> Check your agent init script on Ubuntu and see I'd you can decipher
>> >> where
>> >> >> it sends stdout.
>> >> >> On Sep 23, 2013 5:21 PM, "Mike Tutkowski" <
>> mike.tutkowski@solidfire.com
>> >> >
>> >> >> wrote:
>> >> >>
>> >> >> > Weird...no such file exists.
>> >> >> >
>> >> >> >
>> >> >> > On Mon, Sep 23, 2013 at 4:54 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> >> >> > >wrote:
>> >> >> >
>> >> >> > > maybe cloudstack-agent.out
>> >> >> > >
>> >> >> > > On Mon, Sep 23, 2013 at 4:44 PM, Mike Tutkowski
>> >> >> > > <mi...@solidfire.com> wrote:
>> >> >> > > > OK, so, nothing is screaming out in the logs. I did notice the
>> >> >> > following:
>> >> >> > > >
>> >> >> > > > From setup.log:
>> >> >> > > >
>> >> >> > > > DEBUG:root:execute:apparmor_status |grep libvirt
>> >> >> > > >
>> >> >> > > > DEBUG:root:Failed to execute:
>> >> >> > > >
>> >> >> > > >
>> >> >> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
>> status
>> >> >> > > >
>> >> >> > > > DEBUG:root:Failed to execute: * could not access PID file for
>> >> >> > > > cloudstack-agent
>> >> >> > > >
>> >> >> > > >
>> >> >> > > > This is the final line in this log file:
>> >> >> > > >
>> >> >> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
>> start
>> >> >> > > >
>> >> >> > > >
>> >> >> > > > This is from agent.log:
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:55,549 DEBUG [cloud.agent.AgentShell]
>> (main:null)
>> >> >> > > Checking
>> >> >> > > > to see if agent.pid exists.
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:55,655 DEBUG [cloud.utils.ProcessUtil]
>> >> (main:null)
>> >> >> > > > Executing: bash -c echo $PPID
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:55,742 DEBUG [cloud.utils.ProcessUtil]
>> >> (main:null)
>> >> >> > > > Execution is successful.
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:56,000 INFO  [cloud.agent.Agent] (main:null)
>> id
>> >> is
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:56,000 DEBUG
>> [cloud.resource.ServerResourceBase]
>> >> >> > > > (main:null) Retrieving network interface: cloudbr0
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:56,016 DEBUG
>> [cloud.resource.ServerResourceBase]
>> >> >> > > > (main:null) Retrieving network interface: cloudbr0
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:56,016 DEBUG
>> [cloud.resource.ServerResourceBase]
>> >> >> > > > (main:null) Retrieving network interface: null
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:56,017 DEBUG
>> [cloud.resource.ServerResourceBase]
>> >> >> > > > (main:null) Retrieving network interface: null
>> >> >> > > >
>> >> >> > > >
>> >> >> > > > The following kinds of lines are repeated for a bunch of
>> different
>> >> >> .sh
>> >> >> > > > files. I think they often end up being found here:
>> >> >> > > > /usr/share/cloudstack-common/scripts/network/domr, so this is
>> >> >> probably
>> >> >> > > not
>> >> >> > > > an issue.
>> >> >> > > >
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:56,111 DEBUG [utils.script.Script]
>> (main:null)
>> >> >> Looking
>> >> >> > > for
>> >> >> > > > call_firewall.sh in the classpath
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:56,112 DEBUG [utils.script.Script]
>> (main:null)
>> >> >> System
>> >> >> > > > resource: null
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:56,113 DEBUG [utils.script.Script]
>> (main:null)
>> >> >> > Classpath
>> >> >> > > > resource: null
>> >> >> > > >
>> >> >> > > > 2013-09-23 15:30:56,123 DEBUG [utils.script.Script]
>> (main:null)
>> >> >> Looking
>> >> >> > > for
>> >> >> > > > call_firewall.sh
>> >> >> > > >
>> >> >> > > >
>> >> >> > > > Is there a log file for the Java code that I could write stuff
>> >> out to
>> >> >> > and
>> >> >> > > > see how far we get?
>> >> >> > > >
>> >> >> > > >
>> >> >> > > > On Mon, Sep 23, 2013 at 3:17 PM, Mike Tutkowski <
>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>> >> >> > > >
>> >> >> > > >> Thanks, Marcus
>> >> >> > > >>
>> >> >> > > >> I've been developing on Windows for most of my time, so a
>> bunch
>> >> of
>> >> >> > these
>> >> >> > > >> Linux-type commands are new to me and I don't always
>> interpret
>> >> the
>> >> >> > > output
>> >> >> > > >> correctly. Getting there. :)
>> >> >> > > >>
>> >> >> > > >>
>> >> >> > > >> On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <
>> >> >> shadowsor@gmail.com
>> >> >> > > >wrote:
>> >> >> > > >>
>> >> >> > > >>> Nope, not running. That's just your grep process. It would
>> look
>> >> >> like:
>> >> >> > > >>>
>> >> >> > > >>> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec
>> -cp
>> >> >> > > >>>
>> >> >> > > >>>
>> >> >> > >
>> >> >> >
>> >> >>
>> >>
>> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
>> >> >> > > >>>
>> >> >> > > >>> Your agent log should tell you why it failed to start if you
>> >> set it
>> >> >> > in
>> >> >> > > >>> debug and try to start... or maybe cloudstack-agent.out if
>> it
>> >> >> doesn't
>> >> >> > > >>> get far enough (say it's missing a class or something and
>> can't
>> >> >> > > >>> start).
>> >> >> > > >>>
>> >> >> > > >>> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
>> >> >> > > >>> <mi...@solidfire.com> wrote:
>> >> >> > > >>> > Looks like it's running, though:
>> >> >> > > >>> >
>> >> >> > > >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>> >> >> > > >>> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep
>> >> --color=auto
>> >> >> > > jsvc
>> >> >> > > >>> >
>> >> >> > > >>> >
>> >> >> > > >>> >
>> >> >> > > >>> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
>> >> >> > > >>> > mike.tutkowski@solidfire.com> wrote:
>> >> >> > > >>> >
>> >> >> > > >>> >> Hey Marcus,
>> >> >> > > >>> >>
>> >> >> > > >>> >> Maybe you could give me a better idea of what the "flow"
>> is
>> >> when
>> >> >> > > >>> adding a
>> >> >> > > >>> >> KVM host.
>> >> >> > > >>> >>
>> >> >> > > >>> >> It looks like we SSH into the potential KVM host and
>> execute
>> >> a
>> >> >> > > startup
>> >> >> > > >>> >> script (giving it necessary info about the cloud and the
>> >> >> > management
>> >> >> > > >>> server
>> >> >> > > >>> >> it should talk to).
>> >> >> > > >>> >>
>> >> >> > > >>> >> After this, is the Java VM started?
>> >> >> > > >>> >>
>> >> >> > > >>> >> After a reboot, I assume the JVM is started
>> automatically?
>> >> >> > > >>> >>
>> >> >> > > >>> >> How do you debug your KVM-side Java code?
>> >> >> > > >>> >>
>> >> >> > > >>> >> Been looking through the logs and nothing obvious sticks
>> >> out. I
>> >> >> > will
>> >> >> > > >>> have
>> >> >> > > >>> >> another look.
>> >> >> > > >>> >>
>> >> >> > > >>> >> Thanks
>> >> >> > > >>> >>
>> >> >> > > >>> >>
>> >> >> > > >>> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
>> >> >> > > >>> >> mike.tutkowski@solidfire.com> wrote:
>> >> >> > > >>> >>
>> >> >> > > >>> >>> Hey Marcus,
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> I've been investigating my issue with not being able to
>> add
>> >> a
>> >> >> KVM
>> >> >> > > >>> host to
>> >> >> > > >>> >>> CS.
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> For what it's worth, this comes back successful:
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> SSHCmdHelper.sshExecuteCmd(sshConnection,
>> >> >> "cloudstack-setup-agent
>> >> >> > > " +
>> >> >> > > >>> >>> parameters, 3);
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> This is what the command looks like:
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1
>> -g
>> >> >> > > >>> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a
>> --pubNic=cloudbr0
>> >> >> > > >>> --prvNic=cloudbr0
>> >> >> > > >>> >>> --guestNic=cloudbr0
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> The problem is this method in LibvirtServerDiscoverer
>> never
>> >> >> > finds a
>> >> >> > > >>> >>> matching host in the DB:
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> waitForHostConnect(long dcId, long podId, long
>> clusterId,
>> >> >> String
>> >> >> > > guid)
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> I assume once the KVM host is up and running that it's
>> >> supposed
>> >> >> > to
>> >> >> > > >>> call
>> >> >> > > >>> >>> into the CS MS so the DB can be updated as such?
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> If so, the problem must be on the KVM side.
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> I did run this again (from the KVM host) to see if the
>> >> >> connection
>> >> >> > > was
>> >> >> > > >>> in
>> >> >> > > >>> >>> place:
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> Trying 192.168.233.1...
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> Connected to 192.168.233.1.
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> Escape character is '^]'.
>> >> >> > > >>> >>> So that looks good.
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> I turned on more info in the debug log, but nothing
>> obvious
>> >> >> jumps
>> >> >> > > out
>> >> >> > > >>> as
>> >> >> > > >>> >>> of yet.
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> If you have any thoughts on this, please shoot them my
>> way.
>> >> :)
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> Thanks!
>> >> >> > > >>> >>>
>> >> >> > > >>> >>>
>> >> >> > > >>> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
>> >> >> > > >>> >>> mike.tutkowski@solidfire.com> wrote:
>> >> >> > > >>> >>>
>> >> >> > > >>> >>>> First step is for me to get this working for KVM,
>> though.
>> >> :)
>> >> >> > > >>> >>>>
>> >> >> > > >>> >>>> Once I do that, I can perhaps make modifications to the
>> >> >> storage
>> >> >> > > >>> >>>> framework and hypervisor plug-ins to refactor the
>> logic and
>> >> >> > such.
>> >> >> > > >>> >>>>
>> >> >> > > >>> >>>>
>> >> >> > > >>> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
>> >> >> > > >>> >>>> mike.tutkowski@solidfire.com> wrote:
>> >> >> > > >>> >>>>
>> >> >> > > >>> >>>>> Same would work for KVM.
>> >> >> > > >>> >>>>>
>> >> >> > > >>> >>>>> If CreateCommand and DestroyCommand were called at the
>> >> >> > > appropriate
>> >> >> > > >>> >>>>> times by the storage framework, I could move my
>> connect
>> >> and
>> >> >> > > >>> disconnect
>> >> >> > > >>> >>>>> logic out of the attach/detach logic.
>> >> >> > > >>> >>>>>
>> >> >> > > >>> >>>>>
>> >> >> > > >>> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
>> >> >> > > >>> >>>>> mike.tutkowski@solidfire.com> wrote:
>> >> >> > > >>> >>>>>
>> >> >> > > >>> >>>>>> Conversely, if the storage framework called the
>> >> >> DestroyCommand
>> >> >> > > for
>> >> >> > > >>> >>>>>> managed storage after the DetachCommand, then I could
>> >> have
>> >> >> had
>> >> >> > > my
>> >> >> > > >>> remove
>> >> >> > > >>> >>>>>> SR/datastore logic placed in the DestroyCommand
>> handling
>> >> >> > rather
>> >> >> > > >>> than in the
>> >> >> > > >>> >>>>>> DetachCommand handling.
>> >> >> > > >>> >>>>>>
>> >> >> > > >>> >>>>>>
>> >> >> > > >>> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>> >> >> > > >>> >>>>>> mike.tutkowski@solidfire.com> wrote:
>> >> >> > > >>> >>>>>>
>> >> >> > > >>> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does
>> not.
>> >> >> > > >>> >>>>>>>
>> >> >> > > >>> >>>>>>> The initial approach that was discussed during 4.2
>> was
>> >> for
>> >> >> me
>> >> >> > > to
>> >> >> > > >>> >>>>>>> modify the attach/detach logic only in the
>> XenServer and
>> >> >> > VMware
>> >> >> > > >>> hypervisor
>> >> >> > > >>> >>>>>>> plug-ins.
>> >> >> > > >>> >>>>>>>
>> >> >> > > >>> >>>>>>> Now that I think about it more, though, I kind of
>> would
>> >> >> have
>> >> >> > > >>> liked to
>> >> >> > > >>> >>>>>>> have the storage framework send a CreateCommand to
>> the
>> >> >> > > hypervisor
>> >> >> > > >>> before
>> >> >> > > >>> >>>>>>> sending the AttachCommand if the storage in
>> question was
>> >> >> > > managed.
>> >> >> > > >>> >>>>>>>
>> >> >> > > >>> >>>>>>> Then I could have created my SR/datastore in the
>> >> >> > CreateCommand
>> >> >> > > and
>> >> >> > > >>> >>>>>>> the AttachCommand would have had the SR/datastore
>> that
>> >> it
>> >> >> was
>> >> >> > > >>> always
>> >> >> > > >>> >>>>>>> expecting (and I wouldn't have had to create the
>> >> >> SR/datastore
>> >> >> > > in
>> >> >> > > >>> the
>> >> >> > > >>> >>>>>>> AttachCommand).
>> >> >> > > >>> >>>>>>>
>> >> >> > > >>> >>>>>>>
>> >> >> > > >>> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
>> >> >> > > >>> >>>>>>> shadowsor@gmail.com> wrote:
>> >> >> > > >>> >>>>>>>
>> >> >> > > >>> >>>>>>>> Yeah, I think it probably is as well, but I figured
>> >> you'd
>> >> >> be
>> >> >> > > in a
>> >> >> > > >>> >>>>>>>> better position to tell.
>> >> >> > > >>> >>>>>>>>
>> >> >> > > >>> >>>>>>>> I see that copyAsync is unsupported in your
>> current 4.2
>> >> >> > > driver,
>> >> >> > > >>> does
>> >> >> > > >>> >>>>>>>> that mean that there's no template support? Or is
>> it
>> >> some
>> >> >> > > other
>> >> >> > > >>> call
>> >> >> > > >>> >>>>>>>> that does templating now? I'm still getting up to
>> >> speed on
>> >> >> > all
>> >> >> > > >>> of the
>> >> >> > > >>> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
>> >> >> > > >>> >>>>>>>> LibvirtComputingResource, since that's the only
>> place
>> >> >> > > >>> >>>>>>>> createPhysicalDisk is called, and it occurred to me
>> >> that
>> >> >> > > >>> >>>>>>>> CreateCommand
>> >> >> > > >>> >>>>>>>> might be skipped altogether when utilizing storage
>> >> >> plugins.
>> >> >> > > >>> >>>>>>>>
>> >> >> > > >>> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>> >> >> > > >>> >>>>>>>> <mi...@solidfire.com> wrote:
>> >> >> > > >>> >>>>>>>> > That's an interesting comment, Marcus.
>> >> >> > > >>> >>>>>>>> >
>> >> >> > > >>> >>>>>>>> > It was my intent that it should work with any
>> >> CloudStack
>> >> >> > > >>> "managed"
>> >> >> > > >>> >>>>>>>> storage
>> >> >> > > >>> >>>>>>>> > that uses an iSCSI target. Even though I'm using
>> >> CHAP, I
>> >> >> > > wrote
>> >> >> > > >>> the
>> >> >> > > >>> >>>>>>>> code so
>> >> >> > > >>> >>>>>>>> > CHAP didn't have to be used.
>> >> >> > > >>> >>>>>>>> >
>> >> >> > > >>> >>>>>>>> > As I'm doing my testing, I can try to think about
>> >> >> whether
>> >> >> > > it is
>> >> >> > > >>> >>>>>>>> generic
>> >> >> > > >>> >>>>>>>> > enough to keep those names or not.
>> >> >> > > >>> >>>>>>>> >
>> >> >> > > >>> >>>>>>>> > My expectation is that it is generic enough.
>> >> >> > > >>> >>>>>>>> >
>> >> >> > > >>> >>>>>>>> >
>> >> >> > > >>> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus
>> Sorensen <
> shadowsor@gmail.com> wrote:
>
> I added a comment to your diff. In general I think it looks good, though I obviously can't vouch for whether or not it will work. One thing I do have reservations about is the adaptor/pool naming. If you think the code is generic enough that it will work for anyone who does an iscsi LUN-per-volume plugin, then it's OK, but if there's anything about it that's specific to YOUR iscsi target or how it likes to be treated then I'd say that they should be named something less generic than iScsiAdmStorage.
>
> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski <mi...@solidfire.com> wrote:
> Great - thanks!
>
> Just to give you an overview of what my code does (for when you get a chance to review it):
>
> SolidFireHostListener is registered in SolidfirePrimaryDataStoreProvider. Its hostConnect method is invoked when a host connects with the CS MS. If the host is running KVM, the listener sends a ModifyStoragePoolCommand to the host. This logic was based off of DefaultHostListener.
>
> The handling of ModifyStoragePoolCommand is unchanged. It invokes createStoragePool on the KVMStoragePoolManager. The KVMStoragePoolManager asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor (which was registered in the constructor for KVMStoragePoolManager under the key of StoragePoolType.Iscsi.toString()).
>
> iScsiAdmStorageAdaptor.createStoragePool just makes an instance of iScsiAdmStoragePool, adds it to a map, and returns the pointer to the iScsiAdmStoragePool object. The key of the map is the UUID of the storage pool.
>
> When a volume is attached, createPhysicalDisk is invoked for managed storage rather than getPhysicalDisk. createPhysicalDisk uses iscsiadm to establish the iSCSI connection to the volume on the SAN and a KVMPhysicalDisk is returned to be used in the attach logic that follows.
>
> When a volume is detached, getPhysicalDisk is invoked with the IQN of the volume if the storage pool in question is managed storage. Otherwise, the normal vol.getPath() is used. iScsiAdmStorageAdaptor.getPhysicalDisk just returns a new instance of KVMPhysicalDisk to be used in the detach logic.
>
> Once the volume has been detached, iScsiAdmStoragePool.deletePhysicalDisk is invoked if the storage pool is managed. deletePhysicalDisk removes the iSCSI connection to the volume using iscsiadm.
>
> On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> Its the log4j properties file in /etc/cloudstack/agent change all INFO to DEBUG. I imagine the agent just isn't starting, you can tail the log when you try to start the service, or maybe it will spit something out into one of the other files in /var/log/cloudstack/agent
> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:
> This is how I've been trying to query for the status of the service (I assume it could be started this way, as well, by changing "status" to "start" or "restart"?):
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service cloudstack-agent status
>
> I get this back:
>
> Failed to execute: * could not access PID file for cloudstack-agent
>
> I've made a bunch of code changes recently, though, so I think I'm going to rebuild and redeploy everything.
>
> The debug info sounds helpful. Where can I set enable.debug?
>
> Thanks, Marcus!
>
> On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> OK, will check it out in the next few days. As mentioned, you can set up your Ubuntu vm as the management server as well if all else fails. If you can get to the mgmt server on 8250 from the KVM host, then you need to enable.debug on the agent. It won't run without complaining loudly if it can't get to the mgmt server, and I didn't see that in your agent log, so perhaps its not running. I assume you know how to stop/start the agent on KVM via 'service cloud stacks agent'.
> On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:
> Hey Marcus,
>
> I haven't yet been able to test my new code, but I thought you would be a good person to ask to review it:
>
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>
> All it is supposed to do is attach and detach a data disk (that has guaranteed IOPS) with KVM as the hypervisor. The data disk happens to be from SolidFire-backed storage - where we have a 1:1 mapping between a CloudStack volume and a data disk.
>
> There is no support for hypervisor snapshots or stuff like that (likely a future release)...just attaching and detaching a data disk in 4.3.
>
> Thanks!
>
> On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
> When I re-deployed the DEBs, I didn't remove cloudstack-agent first. Would that be a problem? I just did a sudo apt-get install cloudstack-agent.
>
> On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
> I get the same error running the command manually:
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service cloudstack-agent status
>  * could not access PID file for cloudstack-agent
>
> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
> agent.log looks OK to me:
>
> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell] (main:null) Agent started
> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell] (main:null) Implementation Version is 4.3.0-SNAPSHOT
> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell] (main:null) agent.properties found at /etc/cloudstack/agent/agent.properties
> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell] (main:null) Defaulting to using properties file for storage
> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell] (main:null) Defaulting to the constant time backoff algorithm
> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils] (main:null) log4j configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null) id is 3
> 2013-09-20 19:35:39,197 INFO  [resource.virtualnetwork.VirtualRoutingResource] (main:null) VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm
>
> However, I wasn't aware that setup.log was important. This seems to be a problem, but I'm not sure what it might indicate:
>
> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
> DEBUG:root:Failed to execute: * could not access PID file for cloudstack-agent
> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
>
> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> Sorry, I saw that in the log, I thought it was the agent log for some reason. Is the agent started? That might be the place to look. There is an agent log for the agent and one for the setup when it adds the host, both in /var/log
> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:
> Is it saying that the MS is at the IP address or the KVM host?
>
> The KVM host is at 192.168.233.10.
> The MS host is at 192.168.233.1.
>
> I see this for my host Global Settings parameter:
> hostThe ip address of management server192.168.233.1
>
> /etc/cloudstack/agent/agent.properties has a host=192.168.233.1 value.
>
> On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> The log says your mgmt server is 192.168.233.10? But you tried to telnet to 192.168.233.1? It might be enough to change that in /etc/cloudstack/agent/agent.properties, but you may want to edit the config as well to tell it the real ms IP.
> On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:
> Here's what my /etc/network/interfaces file looks like, if that is of interest (the 192.168.233.0 network is the NAT network VMware Fusion set up):
>
> auto lo
> iface lo inet loopback
>
> auto eth0
> iface eth0 inet manual
>
> auto cloudbr0
> iface cloudbr0 inet static
>     address 192.168.233.10
>     netmask 255.255.255.0
>     network 192.168.233.0
>     broadcast 192.168.233.255
>     dns-nameservers 8.8.8.8
>     bridge_ports eth0
>     bridge_fd 5
>     bridge_stp off
>     bridge_maxwait 1
>     post-up route add default gw 192.168.233.2 metric 1
>     pre-down route del default gw 192.168.233.2
>
> On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <mike.tutkowski@solidfire.com> wrote:
> You appear to be correct. This is from the MS log (below). Discovery timed out.
>
> I'm not sure why this would be. My network settings shouldn't have changed since the last time I tried this.
>
> I am able to ping the KVM host from the MS host and vice versa.
>
> I'm even able to manually kick off a VM on the KVM host and ping from it to the MS host.
>
> I am using NAT from my Mac OS X host (also running the CS MS) to the VM running KVM (VMware Fusion).
>
> 2013-09-20 19:40:40,141 DEBUG [c.c.h.k.d.LibvirtServerDiscoverer] (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait for the host connecting to mgt svr, assuming it is failed
> 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl] (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the server resources at http://192.168.233.10
> 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode] (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find exception: com.cloud.exception.DiscoveryException in error code list for exceptions
> 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd] (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception: com.cloud.exception.DiscoveryException: Unable to add the host
> at com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>
> I do seem to be able to telnet in from my KVM host to the MS host's 8250 port:
>
> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> Trying 192.168.233.1...
> Connected to 192.168.233.1.
> Escape character is '^]'.
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Right...I've seen it rewritten before. I just have to get a little time
today to resume my testing and I can re-add the host to CS and see how the
file looks then.


On Tue, Sep 24, 2013 at 10:05 AM, Marcus Sorensen <sh...@gmail.com>wrote:

> If that's what your agent.properties looks like after you try to add
> the host then that's your problem. It should be completely rewritten
> after cloudstack-setup-agent, with all of the parameters passed to it.
> you should run it manually and see what is failing, there's probably a
> step missed in the ubuntu setup instructions.
>
> On Tue, Sep 24, 2013 at 10:00 AM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > This is what a fresh agent.properties file looks like on my system.
> >
> > I expect if I try to add it to a cluster, the empty, localhost, and
> default
> > values below should be filled in.
> >
> > I plan to try to add it to a cluster in a bit.
> >
> > # The GUID to identify the agent with, this is mandatory!
> > # Generate with "uuidgen"
> > guid=
> >
> > #resource= the java class, which agent load to execute
> > resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
> >
> > #workers= number of threads running in agent
> > workers=5
> >
> > #host= The IP address of management server
> > host=localhost
> >
> > #port = The port management server listening on, default is 8250
> > port=8250
> >
> > #cluster= The cluster which the agent belongs to
> > cluster=default
> >
> > #pod= The pod which the agent belongs to
> > pod=default
> >
> > #zone= The zone which the agent belongs to
> > zone=default
> >
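For comparison, once cloudstack-setup-agent runs successfully with the parameters quoted later in this thread (-m 192.168.233.1 -z 1 -p 1 -c 1 -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684), the rewritten file should look roughly like the sketch below. This is only an illustration built from the default template above and those parameters; the exact set of keys written can vary by version.

guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
workers=5
host=192.168.233.1
port=8250
cluster=1
pod=1
zone=1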
> >
> >
> > On Tue, Sep 24, 2013 at 8:55 AM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >
> >> I still haven't seen your agent.properties. This would tell me if your
> >> setup succeeded.  At this point my best guess is that
> >> "cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
> >> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> >> --prvNic=cloudbr0 --guestNic=cloudbr0" failed in some fashion. You can
> >> run it manually at any time to see. Once that is run, then the agent
> >> should come up. The resource name in your code is pulled from
> >> agent.properties (I believe) and is usually
> >> "resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource".
> >>
> >> On Tue, Sep 24, 2013 at 12:12 AM, Mike Tutkowski
> >> <mi...@solidfire.com> wrote:
> >> > I've been narrowing it down by putting in a million print-to-log
> >> statements.
> >> >
> >> > Do you know if it is a problem that value ends up null (in a
> constructor
> >> > for Agent)?
> >> >
> >> > String value = _shell.getPersistentProperty(getResourceName(), "id");
> >> >
> >> > In that same constructor, this line never finishes:
> >> >
> >> > if (!_resource.configure(getResourceName(), params)) {
> >> >
> >> > I need to dig into the configure method to see what's going on there.
> >> >
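Not something suggested in the thread, but if configure() really never returns, a thread dump of the agent JVM is one way to see where it is blocked. This assumes the agent runs under jsvc, as shown elsewhere in the thread:

ps -ef | grep jsvc     # find the agent's JVM process id
sudo jstack <pid>      # the dump shows which call inside configure() is stuck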
> >> >
> >> > On Mon, Sep 23, 2013 at 5:45 PM, Marcus Sorensen <shadowsor@gmail.com
> >> >wrote:
> >> >
> >> >> It might be a centos specific thing. These are created by the init
> >> scripts.
> >> >> Check your agent init script on Ubuntu and see if you can decipher
> >> where
> >> >> it sends stdout.
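A rough way to follow that suggestion and find where the Ubuntu init script redirects output (the .out path is a guess based on the log directory mentioned earlier in the thread, not confirmed here):

grep -n "\.out\|\.log" /etc/init.d/cloudstack-agent
tail -n 50 /var/log/cloudstack/agent/cloudstack-agent.out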
> >> >> On Sep 23, 2013 5:21 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com
> >> >
> >> >> wrote:
> >> >>
> >> >> > Weird...no such file exists.
> >> >> >
> >> >> >
> >> >> > On Mon, Sep 23, 2013 at 4:54 PM, Marcus Sorensen <
> shadowsor@gmail.com
> >> >> > >wrote:
> >> >> >
> >> >> > > maybe cloudstack-agent.out
> >> >> > >
> >> >> > > On Mon, Sep 23, 2013 at 4:44 PM, Mike Tutkowski
> >> >> > > <mi...@solidfire.com> wrote:
> >> >> > > > OK, so, nothing is screaming out in the logs. I did notice the
> >> >> > following:
> >> >> > > >
> >> >> > > > From setup.log:
> >> >> > > >
> >> >> > > > DEBUG:root:execute:apparmor_status |grep libvirt
> >> >> > > >
> >> >> > > > DEBUG:root:Failed to execute:
> >> >> > > >
> >> >> > > >
> >> >> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
> status
> >> >> > > >
> >> >> > > > DEBUG:root:Failed to execute: * could not access PID file for
> >> >> > > > cloudstack-agent
> >> >> > > >
> >> >> > > >
> >> >> > > > This is the final line in this log file:
> >> >> > > >
> >> >> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
> start
> >> >> > > >
> >> >> > > >
> >> >> > > > This is from agent.log:
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:55,549 DEBUG [cloud.agent.AgentShell]
> (main:null)
> >> >> > > Checking
> >> >> > > > to see if agent.pid exists.
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:55,655 DEBUG [cloud.utils.ProcessUtil]
> >> (main:null)
> >> >> > > > Executing: bash -c echo $PPID
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:55,742 DEBUG [cloud.utils.ProcessUtil]
> >> (main:null)
> >> >> > > > Execution is successful.
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:56,000 INFO  [cloud.agent.Agent] (main:null)
> id
> >> is
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:56,000 DEBUG
> [cloud.resource.ServerResourceBase]
> >> >> > > > (main:null) Retrieving network interface: cloudbr0
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:56,016 DEBUG
> [cloud.resource.ServerResourceBase]
> >> >> > > > (main:null) Retrieving network interface: cloudbr0
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:56,016 DEBUG
> [cloud.resource.ServerResourceBase]
> >> >> > > > (main:null) Retrieving network interface: null
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:56,017 DEBUG
> [cloud.resource.ServerResourceBase]
> >> >> > > > (main:null) Retrieving network interface: null
> >> >> > > >
> >> >> > > >
> >> >> > > > The following kinds of lines are repeated for a bunch of
> different
> >> >> .sh
> >> >> > > > files. I think they often end up being found here:
> >> >> > > > /usr/share/cloudstack-common/scripts/network/domr, so this is
> >> >> probably
> >> >> > > not
> >> >> > > > an issue.
> >> >> > > >
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:56,111 DEBUG [utils.script.Script] (main:null)
> >> >> Looking
> >> >> > > for
> >> >> > > > call_firewall.sh in the classpath
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:56,112 DEBUG [utils.script.Script] (main:null)
> >> >> System
> >> >> > > > resource: null
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:56,113 DEBUG [utils.script.Script] (main:null)
> >> >> > Classpath
> >> >> > > > resource: null
> >> >> > > >
> >> >> > > > 2013-09-23 15:30:56,123 DEBUG [utils.script.Script] (main:null)
> >> >> Looking
> >> >> > > for
> >> >> > > > call_firewall.sh
> >> >> > > >
> >> >> > > >
> >> >> > > > Is there a log file for the Java code that I could write stuff
> >> out to
> >> >> > and
> >> >> > > > see how far we get?
> >> >> > > >
> >> >> > > >
> >> >> > > > On Mon, Sep 23, 2013 at 3:17 PM, Mike Tutkowski <
> >> >> > > > mike.tutkowski@solidfire.com> wrote:
> >> >> > > >
> >> >> > > >> Thanks, Marcus
> >> >> > > >>
> >> >> > > >> I've been developing on Windows for most of my time, so a
> bunch
> >> of
> >> >> > these
> >> >> > > >> Linux-type commands are new to me and I don't always interpret
> >> the
> >> >> > > output
> >> >> > > >> correctly. Getting there. :)
> >> >> > > >>
> >> >> > > >>
> >> >> > > >> On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <
> >> >> shadowsor@gmail.com
> >> >> > > >wrote:
> >> >> > > >>
> >> >> > > >>> Nope, not running. That's just your grep process. It would
> look
> >> >> like:
> >> >> > > >>>
> >> >> > > >>> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
> >> >> > > >>>
> >> >> > > >>>
> >> >> > >
> >> >> >
> >> >>
> >>
> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
> >> >> > > >>>
> >> >> > > >>> Your agent log should tell you why it failed to start if you
> >> set it
> >> >> > in
> >> >> > > >>> debug and try to start... or maybe cloudstack-agent.out if it
> >> >> doesn't
> >> >> > > >>> get far enough (say it's missing a class or something and
> can't
> >> >> > > >>> start).
> >> >> > > >>>
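A quick, if blunt, way to apply the earlier "change all INFO to DEBUG" suggestion to the agent's log4j config; back up the file first, since the exact layout of log4j-cloud.xml may differ by version:

sudo sed -i.bak 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
sudo service cloudstack-agent restart
tail -f /var/log/cloudstack/agent/agent.log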
> >> >> > > >>> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
> >> >> > > >>> <mi...@solidfire.com> wrote:
> >> >> > > >>> > Looks like it's running, though:
> >> >> > > >>> >
> >> >> > > >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> >> >> > > >>> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep
> >> --color=auto
> >> >> > > jsvc
> >> >> > > >>> >
> >> >> > > >>> >
> >> >> > > >>> >
> >> >> > > >>> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
> >> >> > > >>> > mike.tutkowski@solidfire.com> wrote:
> >> >> > > >>> >
> >> >> > > >>> >> Hey Marcus,
> >> >> > > >>> >>
> >> >> > > >>> >> Maybe you could give me a better idea of what the "flow"
> is
> >> when
> >> >> > > >>> adding a
> >> >> > > >>> >> KVM host.
> >> >> > > >>> >>
> >> >> > > >>> >> It looks like we SSH into the potential KVM host and
> execute
> >> a
> >> >> > > startup
> >> >> > > >>> >> script (giving it necessary info about the cloud and the
> >> >> > management
> >> >> > > >>> server
> >> >> > > >>> >> it should talk to).
> >> >> > > >>> >>
> >> >> > > >>> >> After this, is the Java VM started?
> >> >> > > >>> >>
> >> >> > > >>> >> After a reboot, I assume the JVM is started automatically?
> >> >> > > >>> >>
> >> >> > > >>> >> How do you debug your KVM-side Java code?
> >> >> > > >>> >>
> >> >> > > >>> >> Been looking through the logs and nothing obvious sticks
> >> out. I
> >> >> > will
> >> >> > > >>> have
> >> >> > > >>> >> another look.
> >> >> > > >>> >>
> >> >> > > >>> >> Thanks
> >> >> > > >>> >>
> >> >> > > >>> >>
> >> >> > > >>> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
> >> >> > > >>> >> mike.tutkowski@solidfire.com> wrote:
> >> >> > > >>> >>
> >> >> > > >>> >>> Hey Marcus,
> >> >> > > >>> >>>
> >> >> > > >>> >>> I've been investigating my issue with not being able to
> add
> >> a
> >> >> KVM
> >> >> > > >>> host to
> >> >> > > >>> >>> CS.
> >> >> > > >>> >>>
> >> >> > > >>> >>> For what it's worth, this comes back successful:
> >> >> > > >>> >>>
> >> >> > > >>> >>> SSHCmdHelper.sshExecuteCmd(sshConnection,
> >> >> "cloudstack-setup-agent
> >> >> > > " +
> >> >> > > >>> >>> parameters, 3);
> >> >> > > >>> >>>
> >> >> > > >>> >>> This is what the command looks like:
> >> >> > > >>> >>>
> >> >> > > >>> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1
> -g
> >> >> > > >>> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> >> >> > > >>> --prvNic=cloudbr0
> >> >> > > >>> >>> --guestNic=cloudbr0
> >> >> > > >>> >>>
> >> >> > > >>> >>> The problem is this method in LibvirtServerDiscoverer
> never
> >> >> > finds a
> >> >> > > >>> >>> matching host in the DB:
> >> >> > > >>> >>>
> >> >> > > >>> >>> waitForHostConnect(long dcId, long podId, long clusterId,
> >> >> String
> >> >> > > guid)
> >> >> > > >>> >>>
> >> >> > > >>> >>> I assume once the KVM host is up and running that it's
> >> supposed
> >> >> > to
> >> >> > > >>> call
> >> >> > > >>> >>> into the CS MS so the DB can be updated as such?
> >> >> > > >>> >>>
> >> >> > > >>> >>> If so, the problem must be on the KVM side.
> >> >> > > >>> >>>
> >> >> > > >>> >>> I did run this again (from the KVM host) to see if the
> >> >> connection
> >> >> > > was
> >> >> > > >>> in
> >> >> > > >>> >>> place:
> >> >> > > >>> >>>
> >> >> > > >>> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> >> >> > > >>> >>>
> >> >> > > >>> >>> Trying 192.168.233.1...
> >> >> > > >>> >>>
> >> >> > > >>> >>> Connected to 192.168.233.1.
> >> >> > > >>> >>>
> >> >> > > >>> >>> Escape character is '^]'.
> >> >> > > >>> >>> So that looks good.
> >> >> > > >>> >>>
> >> >> > > >>> >>> I turned on more info in the debug log, but nothing
> obvious
> >> >> jumps
> >> >> > > out
> >> >> > > >>> as
> >> >> > > >>> >>> of yet.
> >> >> > > >>> >>>
> >> >> > > >>> >>> If you have any thoughts on this, please shoot them my
> way.
> >> :)
> >> >> > > >>> >>>
> >> >> > > >>> >>> Thanks!
> >> >> > > >>> >>>
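One way (not mentioned in the thread) to confirm whether the agent ever registered with the management server is to look at the host table in the cloud database; the table and column names here are from memory and may differ slightly by version:

mysql -u cloud -p cloud -e "SELECT id, name, status, mgmt_server_id FROM host;"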
> >> >> > > >>> >>>
> >> >> > > >>> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
> >> >> > > >>> >>> mike.tutkowski@solidfire.com> wrote:
> >> >> > > >>> >>>
> >> >> > > >>> >>>> First step is for me to get this working for KVM,
> though.
> >> :)
> >> >> > > >>> >>>>
> >> >> > > >>> >>>> Once I do that, I can perhaps make modifications to the
> >> >> storage
> >> >> > > >>> >>>> framework and hypervisor plug-ins to refactor the logic
> and
> >> >> > such.
> >> >> > > >>> >>>>
> >> >> > > >>> >>>>
> >> >> > > >>> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
> >> >> > > >>> >>>> mike.tutkowski@solidfire.com> wrote:
> >> >> > > >>> >>>>
> >> >> > > >>> >>>>> Same would work for KVM.
> >> >> > > >>> >>>>>
> >> >> > > >>> >>>>> If CreateCommand and DestroyCommand were called at the
> >> >> > > appropriate
> >> >> > > >>> >>>>> times by the storage framework, I could move my connect
> >> and
> >> >> > > >>> disconnect
> >> >> > > >>> >>>>> logic out of the attach/detach logic.
> >> >> > > >>> >>>>>
> >> >> > > >>> >>>>>
> >> >> > > >>> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
> >> >> > > >>> >>>>> mike.tutkowski@solidfire.com> wrote:
> >> >> > > >>> >>>>>
> >> >> > > >>> >>>>>> Conversely, if the storage framework called the
> >> >> DestroyCommand
> >> >> > > for
> >> >> > > >>> >>>>>> managed storage after the DetachCommand, then I could
> >> have
> >> >> had
> >> >> > > my
> >> >> > > >>> remove
> >> >> > > >>> >>>>>> SR/datastore logic placed in the DestroyCommand
> handling
> >> >> > rather
> >> >> > > >>> than in the
> >> >> > > >>> >>>>>> DetachCommand handling.
> >> >> > > >>> >>>>>>
> >> >> > > >>> >>>>>>
> >> >> > > >>> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
> >> >> > > >>> >>>>>> mike.tutkowski@solidfire.com> wrote:
> >> >> > > >>> >>>>>>
> >> >> > > >>> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does
> not.
> >> >> > > >>> >>>>>>>
> >> >> > > >>> >>>>>>> The initial approach that was discussed during 4.2
> was
> >> for
> >> >> me
> >> >> > > to
> >> >> > > >>> >>>>>>> modify the attach/detach logic only in the XenServer
> and
> >> >> > VMware
> >> >> > > >>> hypervisor
> >> >> > > >>> >>>>>>> plug-ins.
> >> >> > > >>> >>>>>>>
> >> >> > > >>> >>>>>>> Now that I think about it more, though, I kind of
> would
> >> >> have
> >> >> > > >>> liked to
> >> >> > > >>> >>>>>>> have the storage framework send a CreateCommand to
> the
> >> >> > > hypervisor
> >> >> > > >>> before
> >> >> > > >>> >>>>>>> sending the AttachCommand if the storage in question
> was
> >> >> > > managed.
> >> >> > > >>> >>>>>>>
> >> >> > > >>> >>>>>>> Then I could have created my SR/datastore in the
> >> >> > CreateCommand
> >> >> > > and
> >> >> > > >>> >>>>>>> the AttachCommand would have had the SR/datastore
> that
> >> it
> >> >> was
> >> >> > > >>> always
> >> >> > > >>> >>>>>>> expecting (and I wouldn't have had to create the
> >> >> SR/datastore
> >> >> > > in
> >> >> > > >>> the
> >> >> > > >>> >>>>>>> AttachCommand).
> >> >> > > >>> >>>>>>>
> >> >> > > >>> >>>>>>>
> >> >> > > >>> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
> >> >> > > >>> >>>>>>> shadowsor@gmail.com> wrote:
> >> >> > > >>> >>>>>>>
> >> >> > > >>> >>>>>>>> Yeah, I think it probably is as well, but I figured
> >> you'd
> >> >> be
> >> >> > > in a
> >> >> > > >>> >>>>>>>> better position to tell.
> >> >> > > >>> >>>>>>>>
> >> >> > > >>> >>>>>>>> I see that copyAsync is unsupported in your current
> 4.2
> >> >> > > driver,
> >> >> > > >>> does
> >> >> > > >>> >>>>>>>> that mean that there's no template support? Or is it
> >> some
> >> >> > > other
> >> >> > > >>> call
> >> >> > > >>> >>>>>>>> that does templating now? I'm still getting up to
> >> speed on
> >> >> > all
> >> >> > > >>> of the
> >> >> > > >>> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
> >> >> > > >>> >>>>>>>> LibvirtComputingResource, since that's the only
> place
> >> >> > > >>> >>>>>>>> createPhysicalDisk is called, and it occurred to me
> >> that
> >> >> > > >>> >>>>>>>> CreateCommand
> >> >> > > >>> >>>>>>>> might be skipped altogether when utilizing storage
> >> >> plugins.
> >> >> > > >>> >>>>>>>>
> >> >> > > >>> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
> >> >> > > >>> >>>>>>>> <mi...@solidfire.com> wrote:
> >> >> > > >>> >>>>>>>> > That's an interesting comment, Marcus.
> >> >> > > >>> >>>>>>>> >
> >> >> > > >>> >>>>>>>> > It was my intent that it should work with any
> >> CloudStack
> >> >> > > >>> "managed"
> >> >> > > >>> >>>>>>>> storage
> >> >> > > >>> >>>>>>>> > that uses an iSCSI target. Even though I'm using
> >> CHAP, I
> >> >> > > wrote
> >> >> > > >>> the
> >> >> > > >>> >>>>>>>> code so
> >> >> > > >>> >>>>>>>> > CHAP didn't have to be used.
> >> >> > > >>> >>>>>>>> >
> >> >> > > >>> >>>>>>>> > As I'm doing my testing, I can try to think about
> >> >> whether
> >> >> > > it is
> >> >> > > >>> >>>>>>>> generic
> >> >> > > >>> >>>>>>>> > enough to keep those names or not.
> >> >> > > >>> >>>>>>>> >
> >> >> > > >>> >>>>>>>> > My expectation is that it is generic enough.
> >> >> > > >>> >>>>>>>> >
> >> >> > > >>> >>>>>>>> >
> >> >> > > >>> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen
> <
> >> >> > > >>> >>>>>>>> shadowsor@gmail.com>wrote:
> >> >> > > >>> >>>>>>>> >
> >> >> > > >>> >>>>>>>> >> I added a comment to your diff. In general I
> think
> >> it
> >> >> > looks
> >> >> > > >>> good,
> >> >> > > >>> >>>>>>>> >> though I obviously can't vouch for whether or
> not it
> >> >> will
> >> >> > > >>> work.
> >> >> > > >>> >>>>>>>> One
> >> >> > > >>> >>>>>>>> >> thing I do have reservations about is the
> >> adaptor/pool
> >> >> > > >>> naming. If
> >> >> > > >>> >>>>>>>> you
> >> >> > > >>> >>>>>>>> >> think the code is generic enough that it will
> work
> >> for
> >> >> > > anyone
> >> >> > > >>> who
> >> >> > > >>> >>>>>>>> does
> >> >> > > >>> >>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK,
> but if
> >> >> > > there's
> >> >> > > >>> >>>>>>>> anything
> >> >> > > >>> >>>>>>>> >> about it that's specific to YOUR iscsi target or
> >> how it
> >> >> > > likes
> >> >> > > >>> to
> >> >> > > >>> >>>>>>>> be
> >> >> > > >>> >>>>>>>> >> treated then I'd say that they should be named
> >> >> something
> >> >> > > less
> >> >> > > >>> >>>>>>>> generic
> >> >> > > >>> >>>>>>>> >> than iScsiAdmStorage.
> >> >> > > >>> >>>>>>>> >>
> >> >> > > >>> >>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
> >> >> > > >>> >>>>>>>> >> <mi...@solidfire.com> wrote:
> >> >> > > >>> >>>>>>>> >> > Great - thanks!
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> > Just to give you an overview of what my code
> does
> >> >> (for
> >> >> > > when
> >> >> > > >>> you
> >> >> > > >>> >>>>>>>> get a
> >> >> > > >>> >>>>>>>> >> > chance to review it):
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> > SolidFireHostListener is registered in
> >> >> > > >>> >>>>>>>> SolidfirePrimaryDataStoreProvider.
> >> >> > > >>> >>>>>>>> >> > Its hostConnect method is invoked when a host
> >> >> connects
> >> >> > > with
> >> >> > > >>> the
> >> >> > > >>> >>>>>>>> CS MS. If
> >> >> > > >>> >>>>>>>> >> > the host is running KVM, the listener sends a
> >> >> > > >>> >>>>>>>> ModifyStoragePoolCommand to
> >> >> > > >>> >>>>>>>> >> > the host. This logic was based off of
> >> >> > > DefaultHostListener.
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> > The handling of ModifyStoragePoolCommand is
> >> >> unchanged.
> >> >> > It
> >> >> > > >>> >>>>>>>> invokes
> >> >> > > >>> >>>>>>>> >> > createStoragePool on the KVMStoragePoolManager.
> >> The
> >> >> > > >>> >>>>>>>> KVMStoragePoolManager
> >> >> > > >>> >>>>>>>> >> > asks for an adaptor and finds my new one:
> >> >> > > >>> >>>>>>>> iScsiAdmStorageAdaptor (which
> >> >> > > >>> >>>>>>>> >> was
> >> >> > > >>> >>>>>>>> >> > registered in the constructor for
> >> >> KVMStoragePoolManager
> >> >> > > >>> under
> >> >> > > >>> >>>>>>>> the key of
> >> >> > > >>> >>>>>>>> >> > StoragePoolType.Iscsi.toString()).
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just
> >> makes
> >> >> an
> >> >> > > >>> instance
> >> >> > > >>> >>>>>>>> of
> >> >> > > >>> >>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and
> returns
> >> >> the
> >> >> > > >>> pointer
> >> >> > > >>> >>>>>>>> to the
> >> >> > > >>> >>>>>>>> >> > iScsiAdmStoragePool object. The key of the map
> is
> >> the
> >> >> > > UUID
> >> >> > > >>> of
> >> >> > > >>> >>>>>>>> the storage
> >> >> > > >>> >>>>>>>> >> > pool.
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> > When a volume is attached, createPhysicalDisk
> is
> >> >> > invoked
> >> >> > > for
> >> >> > > >>> >>>>>>>> managed
> >> >> > > >>> >>>>>>>> >> > storage rather than getPhysicalDisk.
> >> >> createPhysicalDisk
> >> >> > > uses
> >> >> > > >>> >>>>>>>> iscsiadm to
> >> >> > > >>> >>>>>>>> >> > establish the iSCSI connection to the volume on
> >> the
> >> >> SAN
> >> >> > > and
> >> >> > > >>> a
> >> >> > > >>> >>>>>>>> >> > KVMPhysicalDisk is returned to be used in the
> >> attach
> >> >> > > logic
> >> >> > > >>> that
> >> >> > > >>> >>>>>>>> follows.
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> > When a volume is detached, getPhysicalDisk is
> >> invoked
> >> >> > > with
> >> >> > > >>> the
> >> >> > > >>> >>>>>>>> IQN of the
> >> >> > > >>> >>>>>>>> >> > volume if the storage pool in question is
> managed
> >> >> > > storage.
> >> >> > > >>> >>>>>>>> Otherwise, the
> >> >> > > >>> >>>>>>>> >> > normal vol.getPath() is used.
> >> >> > > >>> >>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
> >> >> > > >>> >>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be
> >> used
> >> >> in
> >> >> > > the
> >> >> > > >>> >>>>>>>> detach logic.
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> > Once the volume has been detached,
> >> >> > > >>> >>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
> >> >> > > >>> >>>>>>>> >> > is invoked if the storage pool is managed.
> >> >> > > >>> deletePhysicalDisk
> >> >> > > >>> >>>>>>>> removes the
> >> >> > > >>> >>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> >
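A minimal sketch of what the attach/detach path described above implies on the
KVM host: createPhysicalDisk logs the host in to the LUN with iscsiadm, and
deletePhysicalDisk logs it back out. This is illustrative only, not the actual
plug-in code; the class and helper names and the example portal/IQN are made up,
and the real adaptor implements CloudStack's StorageAdaptor interface rather
than standing alone like this.

    import java.io.IOException;
    import java.util.Arrays;

    // Sketch of the Open iSCSI calls an iSCSI LUN-per-volume adaptor drives.
    public class IscsiAdmSketch {

        // portal e.g. "192.168.233.20:3260", iqn e.g. "iqn.2010-01.com.solidfire:vol-7"
        // (both values are placeholders)
        static void login(String portal, String iqn) throws IOException, InterruptedException {
            // register the target with Open iSCSI, then log in; the LUN then typically
            // appears as /dev/disk/by-path/ip-<portal>-iscsi-<iqn>-lun-0
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "-o", "new");
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
        }

        static void logout(String portal, String iqn) throws IOException, InterruptedException {
            // reverse of login(), for the detach path
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--logout");
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "-o", "delete");
        }

        static void run(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + Arrays.toString(cmd));
            }
        }
    }

In the flow described above, the device path produced by the login is what would be
handed back as the KVMPhysicalDisk for the attach logic, and logout() corresponds to
the deletePhysicalDisk step run after the detach.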
> >> >> > > >>> >>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus
> Sorensen <
> >> >> > > >>> >>>>>>>> shadowsor@gmail.com
> >> >> > > >>> >>>>>>>> >> >wrote:
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> >> Its the log4j properties file in
> >> >> /etc/cloudstack/agent
> >> >> > > >>> change
> >> >> > > >>> >>>>>>>> all INFO
> >> >> > > >>> >>>>>>>> >> to
> >> >> > > >>> >>>>>>>> >> >> DEBUG.  I imagine the agent just isn't
> starting,
> >> you
> >> >> > can
> >> >> > > >>> tail
> >> >> > > >>> >>>>>>>> the log
> >> >> > > >>> >>>>>>>> >> when
> >> >> > > >>> >>>>>>>> >> >> you try to start the service, or maybe it will
> >> spit
> >> >> > > >>> something
> >> >> > > >>> >>>>>>>> out into
> >> >> > > >>> >>>>>>>> >> one
> >> >> > > >>> >>>>>>>> >> >> of the other files in
> /var/log/cloudstack/agent
> >> >> > > >>> >>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
> >> >> > > >>> >>>>>>>> mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> >> wrote:
> >> >> > > >>> >>>>>>>> >> >>
> >> >> > > >>> >>>>>>>> >> >> > This is how I've been trying to query for
> the
> >> >> status
> >> >> > > of
> >> >> > > >>> the
> >> >> > > >>> >>>>>>>> service (I
> >> >> > > >>> >>>>>>>> >> >> > assume it could be started this way, as
> well,
> >> by
> >> >> > > changing
> >> >> > > >>> >>>>>>>> "status" to
> >> >> > > >>> >>>>>>>> >> >> > "start" or "restart"?):
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$
> sudo
> >> >> > > >>> >>>>>>>> /usr/sbin/service
> >> >> > > >>> >>>>>>>> >> >> > cloudstack-agent status
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> > I get this back:
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> > Failed to execute: * could not access PID
> file
> >> for
> >> >> > > >>> >>>>>>>> cloudstack-agent
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> > I've made a bunch of code changes recently,
> >> >> though,
> >> >> > > so I
> >> >> > > >>> >>>>>>>> think I'm
> >> >> > > >>> >>>>>>>> >> going
> >> >> > > >>> >>>>>>>> >> >> to
> >> >> > > >>> >>>>>>>> >> >> > rebuild and redeploy everything.
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> > The debug info sounds helpful. Where can I
> set
> >> >> > > >>> enable.debug?
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> > Thanks, Marcus!
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus
> >> Sorensen <
> >> >> > > >>> >>>>>>>> shadowsor@gmail.com
> >> >> > > >>> >>>>>>>> >> >> > >wrote:
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> > > OK, will check it out in the next few
> days.
> >> As
> >> >> > > >>> mentioned,
> >> >> > > >>> >>>>>>>> you can
> >> >> > > >>> >>>>>>>> >> set
> >> >> > > >>> >>>>>>>> >> >> up
> >> >> > > >>> >>>>>>>> >> >> > > your Ubuntu vm as the management server as
> >> well
> >> >> if
> >> >> > > all
> >> >> > > >>> >>>>>>>> else fails.
> >> >> > > >>> >>>>>>>> >>  If
> >> >> > > >>> >>>>>>>> >> >> > you
> >> >> > > >>> >>>>>>>> >> >> > > can get to the mgmt server on 8250 from
> the
> >> KVM
> >> >> > > host,
> >> >> > > >>> then
> >> >> > > >>> >>>>>>>> you need
> >> >> > > >>> >>>>>>>> >> to
> >> >> > > >>> >>>>>>>> >> >> > > enable.debug on the agent. It won't run
> >> without
> >> >> > > >>> >>>>>>>> complaining loudly
> >> >> > > >>> >>>>>>>> >> if
> >> >> > > >>> >>>>>>>> >> >> it
> >> >> > > >>> >>>>>>>> >> >> > > can't get to the mgmt server, and I didn't
> >> see
> >> >> > that
> >> >> > > in
> >> >> > > >>> >>>>>>>> your agent
> >> >> > > >>> >>>>>>>> >> log,
> >> >> > > >>> >>>>>>>> >> >> so
> >> >> > > >>> >>>>>>>> >> >> > > perhaps its not running. I assume you know
> >> how
> >> >> to
> >> >> > > >>> >>>>>>>> stop/start the
> >> >> > > >>> >>>>>>>> >> agent
> >> >> > > >>> >>>>>>>> >> >> on
> >> >> > > >>> >>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
> >> >> > > >>> >>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski"
> <
> >> >> > > >>> >>>>>>>> >> >> mike.tutkowski@solidfire.com>
> >> >> > > >>> >>>>>>>> >> >> > > wrote:
> >> >> > > >>> >>>>>>>> >> >> > >
> >> >> > > >>> >>>>>>>> >> >> > > > Hey Marcus,
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > I haven't yet been able to test my new
> >> code,
> >> >> > but I
> >> >> > > >>> >>>>>>>> thought you
> >> >> > > >>> >>>>>>>> >> would
> >> >> > > >>> >>>>>>>> >> >> > be a
> >> >> > > >>> >>>>>>>> >> >> > > > good person to ask to review it:
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > >
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >>
> >> >> > > >>> >>>>>>>> >>
> >> >> > > >>> >>>>>>>>
> >> >> > > >>>
> >> >> > >
> >> >> >
> >> >>
> >>
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > All it is supposed to do is attach and
> >> detach
> >> >> a
> >> >> > > data
> >> >> > > >>> >>>>>>>> disk (that
> >> >> > > >>> >>>>>>>> >> has
> >> >> > > >>> >>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the
> >> hypervisor.
> >> >> The
> >> >> > > data
> >> >> > > >>> >>>>>>>> disk
> >> >> > > >>> >>>>>>>> >> happens to
> >> >> > > >>> >>>>>>>> >> >> > be
> >> >> > > >>> >>>>>>>> >> >> > > > from SolidFire-backed storage - where we
> >> have
> >> >> a
> >> >> > > 1:1
> >> >> > > >>> >>>>>>>> mapping
> >> >> > > >>> >>>>>>>> >> between a
> >> >> > > >>> >>>>>>>> >> >> > > > CloudStack volume and a data disk.
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > There is no support for hypervisor
> >> snapshots
> >> >> or
> >> >> > > stuff
> >> >> > > >>> >>>>>>>> like that
> >> >> > > >>> >>>>>>>> >> >> > (likely a
> >> >> > > >>> >>>>>>>> >> >> > > > future release)...just attaching and
> >> >> detaching a
> >> >> > > data
> >> >> > > >>> >>>>>>>> disk in 4.3.
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > Thanks!
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike
> >> >> > Tutkowski <
> >> >> > > >>> >>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't
> >> remove
> >> >> > > >>> >>>>>>>> cloudstack-agent
> >> >> > > >>> >>>>>>>> >> >> first.
> >> >> > > >>> >>>>>>>> >> >> > > > Would
> >> >> > > >>> >>>>>>>> >> >> > > > > that be a problem? I just did a sudo
> >> apt-get
> >> >> > > >>> install
> >> >> > > >>> >>>>>>>> >> >> > cloudstack-agent.
> >> >> > > >>> >>>>>>>> >> >> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike
> >> >> > > Tutkowski <
> >> >> > > >>> >>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
> >> >> > > >>> >>>>>>>> >> >> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >> I get the same error running the
> command
> >> >> > > manually:
> >> >> > > >>> >>>>>>>> >> >> > > > >>
> >> >> > > >>> >>>>>>>> >> >> > > > >> mtutkowski@ubuntu
> >> :/etc/cloudstack/agent$
> >> >> > sudo
> >> >> > > >>> >>>>>>>> >> /usr/sbin/service
> >> >> > > >>> >>>>>>>> >> >> > > > >> cloudstack-agent status
> >> >> > > >>> >>>>>>>> >> >> > > > >>  * could not access PID file for
> >> >> > > cloudstack-agent
> >> >> > > >>> >>>>>>>> >> >> > > > >>
> >> >> > > >>> >>>>>>>> >> >> > > > >>
> >> >> > > >>> >>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM,
> Mike
> >> >> > > Tutkowski <
> >> >> > > >>> >>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
> >> >> > > >>> >>>>>>>> >> >> > > > >>
> >> >> > > >>> >>>>>>>> >> >> > > > >>> agent.log looks OK to me:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
> >> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> >> >> > > >>> >>>>>>>> >> >> (main:null)
> >> >> > > >>> >>>>>>>> >> >> > > > Agent
> >> >> > > >>> >>>>>>>> >> >> > > > >>> started
> >> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
> >> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> >> >> > > >>> >>>>>>>> >> >> (main:null)
> >> >> > > >>> >>>>>>>> >> >> > > > >>> Implementation Version is
> >> 4.3.0-SNAPSHOT
> >> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
> >> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> >> >> > > >>> >>>>>>>> >> >> (main:null)
> >> >> > > >>> >>>>>>>> >> >> > > > >>> agent.properties found at
> >> >> > > >>> >>>>>>>> >> /etc/cloudstack/agent/agent.properties
> >> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
> >> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> >> >> > > >>> >>>>>>>> >> >> (main:null)
> >> >> > > >>> >>>>>>>> >> >> > > > >>> Defaulting to using properties file
> for
> >> >> > > storage
> >> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
> >> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> >> >> > > >>> >>>>>>>> >> >> (main:null)
> >> >> > > >>> >>>>>>>> >> >> > > > >>> Defaulting to the constant time
> backoff
> >> >> > > algorithm
> >> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO
> >> >> > > >>>  [cloud.utils.LogUtils]
> >> >> > > >>> >>>>>>>> >> (main:null)
> >> >> > > >>> >>>>>>>> >> >> > > log4j
> >> >> > > >>> >>>>>>>> >> >> > > > >>> configuration found at
> >> >> > > >>> >>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
> >> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO
> >> >> > >  [cloud.agent.Agent]
> >> >> > > >>> >>>>>>>> (main:null)
> >> >> > > >>> >>>>>>>> >> id
> >> >> > > >>> >>>>>>>> >> >> > is 3
> >> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > >  [resource.virtualnetwork.VirtualRoutingResource]
> >> >> > > >>> >>>>>>>> (main:null)
> >> >> > > >>> >>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to
> >> use:
> >> >> > > >>> >>>>>>>> >> >> scripts/network/domr/kvm
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>> However, I wasn't aware that
> setup.log
> >> was
> >> >> > > >>> >>>>>>>> important. This
> >> >> > > >>> >>>>>>>> >> seems
> >> >> > > >>> >>>>>>>> >> >> to
> >> >> > > >>> >>>>>>>> >> >> > > be
> >> >> > > >>> >>>>>>>> >> >> > > > a
> >> >> > > >>> >>>>>>>> >> >> > > > >>> problem, but I'm not sure what it
> might
> >> >> > > indicate:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo
> >> /usr/sbin/service
> >> >> > > >>> >>>>>>>> cloudstack-agent
> >> >> > > >>> >>>>>>>> >> status
> >> >> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: *
> could
> >> not
> >> >> > > access
> >> >> > > >>> PID
> >> >> > > >>> >>>>>>>> file for
> >> >> > > >>> >>>>>>>> >> >> > > > >>> cloudstack-agent
> >> >> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo
> >> /usr/sbin/service
> >> >> > > >>> >>>>>>>> cloudstack-agent
> >> >> > > >>> >>>>>>>> >> start
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM,
> >> Marcus
> >> >> > > >>> Sorensen <
> >> >> > > >>> >>>>>>>> >> >> > > shadowsor@gmail.com
> >> >> > > >>> >>>>>>>> >> >> > > > >wrote:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I
> >> thought
> >> >> it
> >> >> > > was
> >> >> > > >>> the
> >> >> > > >>> >>>>>>>> agent log
> >> >> > > >>> >>>>>>>> >> for
> >> >> > > >>> >>>>>>>> >> >> > > some
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> reason. Is the agent started? That
> >> might
> >> >> be
> >> >> > > the
> >> >> > > >>> >>>>>>>> place to
> >> >> > > >>> >>>>>>>> >> look.
> >> >> > > >>> >>>>>>>> >> >> > There
> >> >> > > >>> >>>>>>>> >> >> > > > is
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> an
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> agent log for the agent and one for
> >> the
> >> >> > setup
> >> >> > > >>> when
> >> >> > > >>> >>>>>>>> it adds
> >> >> > > >>> >>>>>>>> >> the
> >> >> > > >>> >>>>>>>> >> >> > host,
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> both
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> in /var/log
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike
> >> >> Tutkowski"
> >> >> > <
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> wrote:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at
> the
> >> IP
> >> >> > > address
> >> >> > > >>> or
> >> >> > > >>> >>>>>>>> the KVM
> >> >> > > >>> >>>>>>>> >> host?
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > The KVM host is at
> 192.168.233.10.
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > I see this for my host Global
> >> Settings
> >> >> > > >>> parameter:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > hostThe ip address of management
> >> >> > > >>> >>>>>>>> server192.168.233.1
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> /etc/cloudstack/agent/agent.properties
> >> >> > has
> >> >> > > a
> >> >> > > >>> >>>>>>>> >> >> host=192.168.233.1
> >> >> > > >>> >>>>>>>> >> >> > > > value.
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM,
> >> >> Marcus
> >> >> > > >>> Sorensen
> >> >> > > >>> >>>>>>>> <
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > >wrote:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > The log says your mgmt server
> is
> >> >> > > >>> >>>>>>>> 192.168.233.10? But you
> >> >> > > >>> >>>>>>>> >> >> tried
> >> >> > > >>> >>>>>>>> >> >> > > to
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> telnet
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > to
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be
> enough
> >> to
> >> >> > > change
> >> >> > > >>> >>>>>>>> that in
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
> >> >> /etc/cloudstack/agent/agent.properties,
> >> >> > > but
> >> >> > > >>> you
> >> >> > > >>> >>>>>>>> may want
> >> >> > > >>> >>>>>>>> >> to
> >> >> > > >>> >>>>>>>> >> >> > edit
> >> >> > > >>> >>>>>>>> >> >> > > > the
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > config
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms
> IP.
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike
> >> >> > > Tutkowski" <
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > wrote:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > Here's what my
> >> >> > /etc/network/interfaces
> >> >> > > >>> file
> >> >> > > >>> >>>>>>>> looks
> >> >> > > >>> >>>>>>>> >> like, if
> >> >> > > >>> >>>>>>>> >> >> > > that
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> is of
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0
> >> network
> >> >> > is
> >> >> > > the
> >> >> > > >>> >>>>>>>> NAT network
> >> >> > > >>> >>>>>>>> >> >> > VMware
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> Fusion
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > set
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > up):
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto lo
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto eth0
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     post-up route add
> default gw
> >> >> > > >>> >>>>>>>> 192.168.233.2 metric 1
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     pre-down route del
> default
> >> gw
> >> >> > > >>> >>>>>>>> 192.168.233.2
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08
> >> PM,
> >> >> > Mike
> >> >> > > >>> >>>>>>>> Tutkowski <
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com
> >
> >> >> wrote:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > You appear to be correct.
> >> This is
> >> >> > > from
> >> >> > > >>> the
> >> >> > > >>> >>>>>>>> MS log
> >> >> > > >>> >>>>>>>> >> >> (below).
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> Discovery
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > timed
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > out.
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would
> >> be.
> >> >> My
> >> >> > > >>> network
> >> >> > > >>> >>>>>>>> settings
> >> >> > > >>> >>>>>>>> >> >> > > shouldn't
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> have
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > changed
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > since the last time I tried
> >> this.
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM
> host
> >> >> from
> >> >> > > the
> >> >> > > >>> MS
> >> >> > > >>> >>>>>>>> host and
> >> >> > > >>> >>>>>>>> >> vice
> >> >> > > >>> >>>>>>>> >> >> > > > versa.
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually
> kick
> >> >> off
> >> >> > a
> >> >> > > VM
> >> >> > > >>> on
> >> >> > > >>> >>>>>>>> the KVM
> >> >> > > >>> >>>>>>>> >> host
> >> >> > > >>> >>>>>>>> >> >> > and
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> ping from
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > it
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > to the MS host.
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac
> OS
> >> X
> >> >> > host
> >> >> > > >>> (also
> >> >> > > >>> >>>>>>>> running
> >> >> > > >>> >>>>>>>> >> the
> >> >> > > >>> >>>>>>>> >> >> CS
> >> >> > > >>> >>>>>>>> >> >> > > MS)
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> to the
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > VM
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware
> Fusion).
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141
> DEBUG
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >> >> > > >>> :ctx-6b28dc48)
> >> >> > > >>> >>>>>>>> Timeout,
> >> >> > > >>> >>>>>>>> >> to
> >> >> > > >>> >>>>>>>> >> >> > wait
> >> >> > > >>> >>>>>>>> >> >> > > > for
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> the
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > host
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr,
> >> assuming
> >> >> it
> >> >> > is
> >> >> > > >>> failed
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144
> WARN
> >> >> > > >>> >>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >> >> > > >>> :ctx-6b28dc48)
> >> >> > > >>> >>>>>>>> Unable to
> >> >> > > >>> >>>>>>>> >> >> find
> >> >> > > >>> >>>>>>>> >> >> > > the
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> server
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > resources at
> >> >> http://192.168.233.10
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145
> INFO
> >> >> > > >>> >>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >> >> > > >>> :ctx-6b28dc48)
> >> >> > > >>> >>>>>>>> Could not
> >> >> > > >>> >>>>>>>> >> >> find
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> exception:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > com.cloud.exception.DiscoveryException
> >> >> > > >>> in
> >> >> > > >>> >>>>>>>> error code
> >> >> > > >>> >>>>>>>> >> >> list
> >> >> > > >>> >>>>>>>> >> >> > > for
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > exceptions
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147
> WARN
> >> >> > > >>> >>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >> >> > > >>> :ctx-6b28dc48)
> >> >> > > >>> >>>>>>>> Exception:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > com.cloud.exception.DiscoveryException:
> >> >> > > >>> >>>>>>>> Unable to add
> >> >> > > >>> >>>>>>>> >> >> the
> >> >> > > >>> >>>>>>>> >> >> > > host
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > at
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>>
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > >
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >>
> >> >> > > >>> >>>>>>>> >>
> >> >> > > >>> >>>>>>>>
> >> >> > > >>>
> >> >> > >
> >> >> >
> >> >>
> >>
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to
> >> telnet in
> >> >> > > from
> >> >> > > >>> my
> >> >> > > >>> >>>>>>>> KVM host to
> >> >> > > >>> >>>>>>>> >> >> the
> >> >> > > >>> >>>>>>>> >> >> > MS
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> host's
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > 8250
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > port:
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$
> telnet
> >> >> > > >>> 192.168.233.1
> >> >> > > >>> >>>>>>>> 8250
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > --
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer,
> >> >> > SolidFire
> >> >> > > >>> Inc.*
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > e:
> mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > o: 303.746.7302
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > Advancing the way the world
> uses
> >> >> the
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > cloud<
> >> >> > > >>> >>>>>>>> >>
> http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > *™*
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > --
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > *Mike Tutkowski*
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > *Senior CloudStack Developer,
> >> SolidFire
> >> >> > > Inc.*
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > o: 303.746.7302
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > Advancing the way the world uses
> the
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > cloud<
> >> >> > > >>> >>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> > *™*
> >> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > >>>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>> --
> >> >> > > >>> >>>>>>>> >> >> > > > >>> *Mike Tutkowski*
> >> >> > > >>> >>>>>>>> >> >> > > > >>> *Senior CloudStack Developer,
> SolidFire
> >> >> > Inc.*
> >> >> > > >>> >>>>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> >> >> > > > >>> o: 303.746.7302
> >> >> > > >>> >>>>>>>> >> >> > > > >>> Advancing the way the world uses the
> >> >> cloud<
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>>>>>>> >> >> > > > >>> *™*
> >> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> >> > > >>> >>>>>>>> >> >> > > > >>
> >> >> > > >>> >>>>>>>> >> >> > > > >>
> >> >> > > >>> >>>>>>>> >> >> > > > >>
> >> >> > > >>> >>>>>>>> >> >> > > > >> --
> >> >> > > >>> >>>>>>>> >> >> > > > >> *Mike Tutkowski*
> >> >> > > >>> >>>>>>>> >> >> > > > >> *Senior CloudStack Developer,
> SolidFire
> >> >> Inc.*
> >> >> > > >>> >>>>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> >> >> > > > >> o: 303.746.7302
> >> >> > > >>> >>>>>>>> >> >> > > > >> Advancing the way the world uses the
> >> cloud<
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>>>>>>> >> >> > > > >> *™*
> >> >> > > >>> >>>>>>>> >> >> > > > >>
> >> >> > > >>> >>>>>>>> >> >> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > > > --
> >> >> > > >>> >>>>>>>> >> >> > > > > *Mike Tutkowski*
> >> >> > > >>> >>>>>>>> >> >> > > > > *Senior CloudStack Developer,
> SolidFire
> >> >> Inc.*
> >> >> > > >>> >>>>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> >> >> > > > > o: 303.746.7302
> >> >> > > >>> >>>>>>>> >> >> > > > > Advancing the way the world uses the
> >> cloud<
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>>>>>>> >> >> > > > > *™*
> >> >> > > >>> >>>>>>>> >> >> > > > >
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > > > --
> >> >> > > >>> >>>>>>>> >> >> > > > *Mike Tutkowski*
> >> >> > > >>> >>>>>>>> >> >> > > > *Senior CloudStack Developer, SolidFire
> >> Inc.*
> >> >> > > >>> >>>>>>>> >> >> > > > e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> >> >> > > > o: 303.746.7302
> >> >> > > >>> >>>>>>>> >> >> > > > Advancing the way the world uses the
> >> >> > > >>> >>>>>>>> >> >> > > > cloud<
> >> >> > > >>> http://solidfire.com/solution/overview/?video=play
> >> >> > > >>> >>>>>>>> >
> >> >> > > >>> >>>>>>>> >> >> > > > *™*
> >> >> > > >>> >>>>>>>> >> >> > > >
> >> >> > > >>> >>>>>>>> >> >> > >
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >> > --
> >> >> > > >>> >>>>>>>> >> >> > *Mike Tutkowski*
> >> >> > > >>> >>>>>>>> >> >> > *Senior CloudStack Developer, SolidFire
> Inc.*
> >> >> > > >>> >>>>>>>> >> >> > e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> >> >> > o: 303.746.7302
> >> >> > > >>> >>>>>>>> >> >> > Advancing the way the world uses the
> >> >> > > >>> >>>>>>>> >> >> > cloud<
> >> >> > > http://solidfire.com/solution/overview/?video=play
> >> >> > > >>> >
> >> >> > > >>> >>>>>>>> >> >> > *™*
> >> >> > > >>> >>>>>>>> >> >> >
> >> >> > > >>> >>>>>>>> >> >>
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> >
> >> >> > > >>> >>>>>>>> >> > --
> >> >> > > >>> >>>>>>>> >> > *Mike Tutkowski*
> >> >> > > >>> >>>>>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > >>> >>>>>>>> >> > e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> >> > o: 303.746.7302
> >> >> > > >>> >>>>>>>> >> > Advancing the way the world uses the
> >> >> > > >>> >>>>>>>> >> > cloud<
> >> >> > http://solidfire.com/solution/overview/?video=play
> >> >> > > >
> >> >> > > >>> >>>>>>>> >> > *™*
> >> >> > > >>> >>>>>>>> >>
> >> >> > > >>> >>>>>>>> >
> >> >> > > >>> >>>>>>>> >
> >> >> > > >>> >>>>>>>> >
> >> >> > > >>> >>>>>>>> > --
> >> >> > > >>> >>>>>>>> > *Mike Tutkowski*
> >> >> > > >>> >>>>>>>> > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > >>> >>>>>>>> > e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>>> > o: 303.746.7302
> >> >> > > >>> >>>>>>>> > Advancing the way the world uses the
> >> >> > > >>> >>>>>>>> > cloud<
> >> >> http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>>>>>>> > *™*
> >> >> > > >>> >>>>>>>>
> >> >> > > >>> >>>>>>>
> >> >> > > >>> >>>>>>>
> >> >> > > >>> >>>>>>>
> >> >> > > >>> >>>>>>> --
> >> >> > > >>> >>>>>>> *Mike Tutkowski*
> >> >> > > >>> >>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > >>> >>>>>>> e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>>> o: 303.746.7302
> >> >> > > >>> >>>>>>> Advancing the way the world uses the cloud<
> >> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>>>>>> *™*
> >> >> > > >>> >>>>>>>
> >> >> > > >>> >>>>>>
> >> >> > > >>> >>>>>>
> >> >> > > >>> >>>>>>
> >> >> > > >>> >>>>>> --
> >> >> > > >>> >>>>>> *Mike Tutkowski*
> >> >> > > >>> >>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > >>> >>>>>> e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>>> o: 303.746.7302
> >> >> > > >>> >>>>>> Advancing the way the world uses the cloud<
> >> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>>>>> *™*
> >> >> > > >>> >>>>>>
> >> >> > > >>> >>>>>
> >> >> > > >>> >>>>>
> >> >> > > >>> >>>>>
> >> >> > > >>> >>>>> --
> >> >> > > >>> >>>>> *Mike Tutkowski*
> >> >> > > >>> >>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > >>> >>>>> e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>>> o: 303.746.7302
> >> >> > > >>> >>>>> Advancing the way the world uses the cloud<
> >> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>>>> *™*
> >> >> > > >>> >>>>>
> >> >> > > >>> >>>>
> >> >> > > >>> >>>>
> >> >> > > >>> >>>>
> >> >> > > >>> >>>> --
> >> >> > > >>> >>>> *Mike Tutkowski*
> >> >> > > >>> >>>> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > >>> >>>> e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>>> o: 303.746.7302
> >> >> > > >>> >>>> Advancing the way the world uses the cloud<
> >> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>>> *™*
> >> >> > > >>> >>>>
> >> >> > > >>> >>>
> >> >> > > >>> >>>
> >> >> > > >>> >>>
> >> >> > > >>> >>> --
> >> >> > > >>> >>> *Mike Tutkowski*
> >> >> > > >>> >>> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > >>> >>> e: mike.tutkowski@solidfire.com
> >> >> > > >>> >>> o: 303.746.7302
> >> >> > > >>> >>> Advancing the way the world uses the cloud<
> >> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >>> *™*
> >> >> > > >>> >>>
> >> >> > > >>> >>
> >> >> > > >>> >>
> >> >> > > >>> >>
> >> >> > > >>> >> --
> >> >> > > >>> >> *Mike Tutkowski*
> >> >> > > >>> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > >>> >> e: mike.tutkowski@solidfire.com
> >> >> > > >>> >> o: 303.746.7302
> >> >> > > >>> >> Advancing the way the world uses the cloud<
> >> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> >> *™*
> >> >> > > >>> >>
> >> >> > > >>> >
> >> >> > > >>> >
> >> >> > > >>> >
> >> >> > > >>> > --
> >> >> > > >>> > *Mike Tutkowski*
> >> >> > > >>> > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > >>> > e: mike.tutkowski@solidfire.com
> >> >> > > >>> > o: 303.746.7302
> >> >> > > >>> > Advancing the way the world uses the
> >> >> > > >>> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> >> > > >>> > *™*
> >> >> > > >>>
> >> >> > > >>
> >> >> > > >>
> >> >> > > >>
> >> >> > > >> --
> >> >> > > >> *Mike Tutkowski*
> >> >> > > >> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > >> e: mike.tutkowski@solidfire.com
> >> >> > > >> o: 303.746.7302
> >> >> > > >> Advancing the way the world uses the cloud<
> >> >> > > http://solidfire.com/solution/overview/?video=play>
> >> >> > > >> *™*
> >> >> > > >>
> >> >> > > >
> >> >> > > >
> >> >> > > >
> >> >> > > > --
> >> >> > > > *Mike Tutkowski*
> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > e: mike.tutkowski@solidfire.com
> >> >> > > > o: 303.746.7302
> >> >> > > > Advancing the way the world uses the
> >> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> >> >> > > > *™*
> >> >> > >
> >> >> >
> >> >> >
> >> >> >
> >> >> > --
> >> >> > *Mike Tutkowski*
> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > e: mike.tutkowski@solidfire.com
> >> >> > o: 303.746.7302
> >> >> > Advancing the way the world uses the
> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> >> > *™*
> >> >> >
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> > *Mike Tutkowski*
> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > e: mike.tutkowski@solidfire.com
> >> > o: 303.746.7302
> >> > Advancing the way the world uses the
> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
If that's what your agent.properties looks like after you try to add
the host, then that's your problem. It should be completely rewritten
after cloudstack-setup-agent runs, with all of the parameters passed to it.
You should run it manually and see what is failing; there's probably a
step missed in the Ubuntu setup instructions.

On Tue, Sep 24, 2013 at 10:00 AM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> This is what a fresh agent.properties file looks like on my system.
>
> I expect if I try to add it to a cluster, the empty, localhost, and default
> values below should be filled in.
>
> I plan to try to add it to a cluster in a bit.
>
> # The GUID to identify the agent with, this is mandatory!
> # Generate with "uuidgen"
> guid=
>
> #resource= the java class, which agent load to execute
> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>
> #workers= number of threads running in agent
> workers=5
>
> #host= The IP address of management server
> host=localhost
>
> #port = The port management server listening on, default is 8250
> port=8250
>
> #cluster= The cluster which the agent belongs to
> cluster=default
>
> #pod= The pod which the agent belongs to
> pod=default
>
> #zone= The zone which the agent belongs to
> zone=default
>
>
>
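For comparison, and only as an illustration stitched together from values quoted
elsewhere in this thread (the cloudstack-setup-agent command below and the
192.168.233.1 management server address), a successfully rewritten
agent.properties would be expected to look roughly like this:

    guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
    resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
    workers=5
    host=192.168.233.1
    port=8250
    cluster=1
    pod=1
    zone=1

The setup script may also record the NIC settings passed via
--pubNic/--prvNic/--guestNic; the exact set of keys depends on the agent version.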
> On Tue, Sep 24, 2013 at 8:55 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> I still haven't seen your agent.properties. This would tell me if your
>> setup succeeded.  At this point my best guess is that
>> "cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>> --prvNic=cloudbr0 --guestNic=cloudbr0" failed in some fashion. You can
>> run it manually at any time to see. Once that is run, then the agent
>> should come up. The resource name in your code is pulled from
>> agent.properties (I believe) and is usually
>> "resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource".
>>
>> On Tue, Sep 24, 2013 at 12:12 AM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > I've been narrowing it down by putting in a million print-to-log
>> statements.
>> >
>> > Do you know if it is a problem that value ends up null (in a constructor
>> > for Agent)?
>> >
>> > String value = _shell.getPersistentProperty(getResourceName(), "id");
>> >
>> > In that same constructor, this line never finishes:
>> >
>> > if (!_resource.configure(getResourceName(), params)) {
>> >
>> > I need to dig into the configure method to see what's going on there.
>> >
>> >
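One low-tech way to narrow that down (a sketch only; apart from the two lines
quoted above, nothing here is the actual Agent constructor, and s_logger is
assumed to be the class's usual log4j logger):

    s_logger.debug("resource name = " + getResourceName());
    // null here is not necessarily a problem: the "id" persistent property is
    // normally only filled in after the host has connected to the MS at least once
    s_logger.debug("persistent id = " + _shell.getPersistentProperty(getResourceName(), "id"));
    s_logger.debug("calling configure() with params: " + params);
    long start = System.currentTimeMillis();
    boolean ok = _resource.configure(getResourceName(), params);
    s_logger.debug("configure() returned " + ok + " after " + (System.currentTimeMillis() - start) + " ms");

If configure() really never returns, a thread dump of the agent JVM while it is
hung (jstack <pid>) will show exactly which call inside
LibvirtComputingResource.configure() it is stuck in.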
>> > On Mon, Sep 23, 2013 at 5:45 PM, Marcus Sorensen <shadowsor@gmail.com
>> >wrote:
>> >
>> >> It might be a CentOS-specific thing. These are created by the init
>> scripts.
>> >> Check your agent init script on Ubuntu and see if you can decipher
>> where
>> >> it sends stdout.
>> >> On Sep 23, 2013 5:21 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com
>> >
>> >> wrote:
>> >>
>> >> > Weird...no such file exists.
>> >> >
>> >> >
>> >> > On Mon, Sep 23, 2013 at 4:54 PM, Marcus Sorensen <shadowsor@gmail.com
>> >> > >wrote:
>> >> >
>> >> > > maybe cloudstack-agent.out
>> >> > >
>> >> > > On Mon, Sep 23, 2013 at 4:44 PM, Mike Tutkowski
>> >> > > <mi...@solidfire.com> wrote:
>> >> > > > OK, so, nothing is screaming out in the logs. I did notice the
>> >> > following:
>> >> > > >
>> >> > > > From setup.log:
>> >> > > >
>> >> > > > DEBUG:root:execute:apparmor_status |grep libvirt
>> >> > > >
>> >> > > > DEBUG:root:Failed to execute:
>> >> > > >
>> >> > > >
>> >> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
>> >> > > >
>> >> > > > DEBUG:root:Failed to execute: * could not access PID file for
>> >> > > > cloudstack-agent
>> >> > > >
>> >> > > >
>> >> > > > This is the final line in this log file:
>> >> > > >
>> >> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
>> >> > > >
>> >> > > >
>> >> > > > This is from agent.log:
>> >> > > >
>> >> > > > 2013-09-23 15:30:55,549 DEBUG [cloud.agent.AgentShell] (main:null)
>> >> > > Checking
>> >> > > > to see if agent.pid exists.
>> >> > > >
>> >> > > > 2013-09-23 15:30:55,655 DEBUG [cloud.utils.ProcessUtil]
>> (main:null)
>> >> > > > Executing: bash -c echo $PPID
>> >> > > >
>> >> > > > 2013-09-23 15:30:55,742 DEBUG [cloud.utils.ProcessUtil]
>> (main:null)
>> >> > > > Execution is successful.
>> >> > > >
>> >> > > > 2013-09-23 15:30:56,000 INFO  [cloud.agent.Agent] (main:null) id
>> is
>> >> > > >
>> >> > > > 2013-09-23 15:30:56,000 DEBUG [cloud.resource.ServerResourceBase]
>> >> > > > (main:null) Retrieving network interface: cloudbr0
>> >> > > >
>> >> > > > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
>> >> > > > (main:null) Retrieving network interface: cloudbr0
>> >> > > >
>> >> > > > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
>> >> > > > (main:null) Retrieving network interface: null
>> >> > > >
>> >> > > > 2013-09-23 15:30:56,017 DEBUG [cloud.resource.ServerResourceBase]
>> >> > > > (main:null) Retrieving network interface: null
>> >> > > >
>> >> > > >
>> >> > > > The following kinds of lines are repeated for a bunch of different
>> >> .sh
>> >> > > > files. I think they often end up being found here:
>> >> > > > /usr/share/cloudstack-common/scripts/network/domr, so this is
>> >> probably
>> >> > > not
>> >> > > > an issue.
>> >> > > >
>> >> > > >
>> >> > > > 2013-09-23 15:30:56,111 DEBUG [utils.script.Script] (main:null)
>> >> Looking
>> >> > > for
>> >> > > > call_firewall.sh in the classpath
>> >> > > >
>> >> > > > 2013-09-23 15:30:56,112 DEBUG [utils.script.Script] (main:null)
>> >> System
>> >> > > > resource: null
>> >> > > >
>> >> > > > 2013-09-23 15:30:56,113 DEBUG [utils.script.Script] (main:null)
>> >> > Classpath
>> >> > > > resource: null
>> >> > > >
>> >> > > > 2013-09-23 15:30:56,123 DEBUG [utils.script.Script] (main:null)
>> >> Looking
>> >> > > for
>> >> > > > call_firewall.sh
>> >> > > >
>> >> > > >
>> >> > > > Is there a log file for the Java code that I could write stuff
>> out to
>> >> > and
>> >> > > > see how far we get?
>> >> > > >
>> >> > > >
>> >> > > > On Mon, Sep 23, 2013 at 3:17 PM, Mike Tutkowski <
>> >> > > > mike.tutkowski@solidfire.com> wrote:
>> >> > > >
>> >> > > >> Thanks, Marcus
>> >> > > >>
>> >> > > >> I've been developing on Windows for most of my time, so a bunch
>> of
>> >> > these
>> >> > > >> Linux-type commands are new to me and I don't always interpret
>> the
>> >> > > output
>> >> > > >> correctly. Getting there. :)
>> >> > > >>
>> >> > > >>
>> >> > > >> On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <
>> >> shadowsor@gmail.com
>> >> > > >wrote:
>> >> > > >>
>> >> > > >>> Nope, not running. That's just your grep process. It would look
>> >> like:
>> >> > > >>>
>> >> > > >>> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
>> >> > > >>>
>> >> > > >>>
>> >> > >
>> >> >
>> >>
>> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
>> >> > > >>>
>> >> > > >>> Your agent log should tell you why it failed to start if you
>> set it
>> >> > in
>> >> > > >>> debug and try to start... or maybe cloudstack-agent.out if it
>> >> doesn't
>> >> > > >>> get far enough (say it's missing a class or something and can't
>> >> > > >>> start).
>> >> > > >>>
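A small aside on that check: a grep pattern that cannot match its own process
avoids the false positive entirely, for example

    ps -ef | grep [j]svc

(or: pgrep -fl jsvc). Either prints nothing at all when the agent's jsvc process
really isn't running.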
>> >> > > >>> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
>> >> > > >>> <mi...@solidfire.com> wrote:
>> >> > > >>> > Looks like it's running, though:
>> >> > > >>> >
>> >> > > >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>> >> > > >>> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep
>> --color=auto
>> >> > > jsvc
>> >> > > >>> >
>> >> > > >>> >
>> >> > > >>> >
>> >> > > >>> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
>> >> > > >>> > mike.tutkowski@solidfire.com> wrote:
>> >> > > >>> >
>> >> > > >>> >> Hey Marcus,
>> >> > > >>> >>
>> >> > > >>> >> Maybe you could give me a better idea of what the "flow" is
>> when
>> >> > > >>> adding a
>> >> > > >>> >> KVM host.
>> >> > > >>> >>
>> >> > > >>> >> It looks like we SSH into the potential KVM host and execute
>> a
>> >> > > startup
>> >> > > >>> >> script (giving it necessary info about the cloud and the
>> >> > management
>> >> > > >>> server
>> >> > > >>> >> it should talk to).
>> >> > > >>> >>
>> >> > > >>> >> After this, is the Java VM started?
>> >> > > >>> >>
>> >> > > >>> >> After a reboot, I assume the JVM is started automatically?
>> >> > > >>> >>
>> >> > > >>> >> How do you debug your KVM-side Java code?
>> >> > > >>> >>
>> >> > > >>> >> Been looking through the logs and nothing obvious sticks
>> out. I
>> >> > will
>> >> > > >>> have
>> >> > > >>> >> another look.
>> >> > > >>> >>
>> >> > > >>> >> Thanks
>> >> > > >>> >>
>> >> > > >>> >>
>> >> > > >>> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
>> >> > > >>> >> mike.tutkowski@solidfire.com> wrote:
>> >> > > >>> >>
>> >> > > >>> >>> Hey Marcus,
>> >> > > >>> >>>
>> >> > > >>> >>> I've been investigating my issue with not being able to add
>> a
>> >> KVM
>> >> > > >>> host to
>> >> > > >>> >>> CS.
>> >> > > >>> >>>
>> >> > > >>> >>> For what it's worth, this comes back successful:
>> >> > > >>> >>>
>> >> > > >>> >>> SSHCmdHelper.sshExecuteCmd(sshConnection,
>> >> "cloudstack-setup-agent
>> >> > > " +
>> >> > > >>> >>> parameters, 3);
>> >> > > >>> >>>
>> >> > > >>> >>> This is what the command looks like:
>> >> > > >>> >>>
>> >> > > >>> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
>> >> > > >>> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>> >> > > >>> --prvNic=cloudbr0
>> >> > > >>> >>> --guestNic=cloudbr0
>> >> > > >>> >>>
>> >> > > >>> >>> The problem is this method in LibvirtServerDiscoverer never
>> >> > finds a
>> >> > > >>> >>> matching host in the DB:
>> >> > > >>> >>>
>> >> > > >>> >>> waitForHostConnect(long dcId, long podId, long clusterId,
>> >> String
>> >> > > guid)
>> >> > > >>> >>>
>> >> > > >>> >>> I assume once the KVM host is up and running that it's
>> supposed
>> >> > to
>> >> > > >>> call
>> >> > > >>> >>> into the CS MS so the DB can be updated as such?
>> >> > > >>> >>>
>> >> > > >>> >>> If so, the problem must be on the KVM side.
>> >> > > >>> >>>
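In rough terms, waitForHostConnect just polls the database until an agent with
that GUID has connected back, which is why the management server log quoted
earlier in the thread ends with "Timeout, to wait for the host connecting to mgt
svr, assuming it is failed" when the agent never starts. A sketch of the idea
only; the DAO and method names here are guesses, not the actual
LibvirtServerDiscoverer code:

    // illustrative only
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
        HostVO host = _hostDao.findByGuid(guid); // row appears once the agent registers
        if (host != null) {
            return host;                         // agent connected; discovery can proceed
        }
        Thread.sleep(5000);
    }
    return null;                                 // -> discovery gives up and AddHostCmd fails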
>> >> > > >>> >>> I did run this again (from the KVM host) to see if the
>> >> connection
>> >> > > was
>> >> > > >>> in
>> >> > > >>> >>> place:
>> >> > > >>> >>>
>> >> > > >>> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>> >> > > >>> >>>
>> >> > > >>> >>> Trying 192.168.233.1...
>> >> > > >>> >>>
>> >> > > >>> >>> Connected to 192.168.233.1.
>> >> > > >>> >>>
>> >> > > >>> >>> Escape character is '^]'.
>> >> > > >>> >>> So that looks good.
>> >> > > >>> >>>
>> >> > > >>> >>> I turned on more info in the debug log, but nothing obvious
>> >> jumps
>> >> > > out
>> >> > > >>> as
>> >> > > >>> >>> of yet.
>> >> > > >>> >>>
>> >> > > >>> >>> If you have any thoughts on this, please shoot them my way.
>> :)
>> >> > > >>> >>>
>> >> > > >>> >>> Thanks!
>> >> > > >>> >>>
>> >> > > >>> >>>
>> >> > > >>> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
>> >> > > >>> >>> mike.tutkowski@solidfire.com> wrote:
>> >> > > >>> >>>
>> >> > > >>> >>>> First step is for me to get this working for KVM, though.
>> :)
>> >> > > >>> >>>>
>> >> > > >>> >>>> Once I do that, I can perhaps make modifications to the
>> >> storage
>> >> > > >>> >>>> framework and hypervisor plug-ins to refactor the logic and
>> >> > such.
>> >> > > >>> >>>>
>> >> > > >>> >>>>
>> >> > > >>> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
>> >> > > >>> >>>> mike.tutkowski@solidfire.com> wrote:
>> >> > > >>> >>>>
>> >> > > >>> >>>>> Same would work for KVM.
>> >> > > >>> >>>>>
>> >> > > >>> >>>>> If CreateCommand and DestroyCommand were called at the
>> >> > > appropriate
>> >> > > >>> >>>>> times by the storage framework, I could move my connect
>> and
>> >> > > >>> disconnect
>> >> > > >>> >>>>> logic out of the attach/detach logic.
>> >> > > >>> >>>>>
>> >> > > >>> >>>>>
>> >> > > >>> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
>> >> > > >>> >>>>> mike.tutkowski@solidfire.com> wrote:
>> >> > > >>> >>>>>
>> >> > > >>> >>>>>> Conversely, if the storage framework called the
>> >> DestroyCommand
>> >> > > for
>> >> > > >>> >>>>>> managed storage after the DetachCommand, then I could
>> have
>> >> had
>> >> > > my
>> >> > > >>> remove
>> >> > > >>> >>>>>> SR/datastore logic placed in the DestroyCommand handling
>> >> > rather
>> >> > > >>> than in the
>> >> > > >>> >>>>>> DetachCommand handling.
>> >> > > >>> >>>>>>
>> >> > > >>> >>>>>>
>> >> > > >>> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>> >> > > >>> >>>>>> mike.tutkowski@solidfire.com> wrote:
>> >> > > >>> >>>>>>
>> >> > > >>> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
>> >> > > >>> >>>>>>>
>> >> > > >>> >>>>>>> The initial approach that was discussed during 4.2 was
>> for
>> >> me
>> >> > > to
>> >> > > >>> >>>>>>> modify the attach/detach logic only in the XenServer and
>> >> > VMware
>> >> > > >>> hypervisor
>> >> > > >>> >>>>>>> plug-ins.
>> >> > > >>> >>>>>>>
>> >> > > >>> >>>>>>> Now that I think about it more, though, I kind of would
>> >> have
>> >> > > >>> liked to
>> >> > > >>> >>>>>>> have the storage framework send a CreateCommand to the
>> >> > > hypervisor
>> >> > > >>> before
>> >> > > >>> >>>>>>> sending the AttachCommand if the storage in question was
>> >> > > managed.
>> >> > > >>> >>>>>>>
>> >> > > >>> >>>>>>> Then I could have created my SR/datastore in the
>> >> > CreateCommand
>> >> > > and
>> >> > > >>> >>>>>>> the AttachCommand would have had the SR/datastore that
>> it
>> >> was
>> >> > > >>> always
>> >> > > >>> >>>>>>> expecting (and I wouldn't have had to create the
>> >> SR/datastore
>> >> > > in
>> >> > > >>> the
>> >> > > >>> >>>>>>> AttachCommand).
>> >> > > >>> >>>>>>>
>> >> > > >>> >>>>>>>
>> >> > > >>> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
>> >> > > >>> >>>>>>> shadowsor@gmail.com> wrote:
>> >> > > >>> >>>>>>>
>> >> > > >>> >>>>>>>> Yeah, I think it probably is as well, but I figured
>> you'd
>> >> be
>> >> > > in a
>> >> > > >>> >>>>>>>> better position to tell.
>> >> > > >>> >>>>>>>>
>> >> > > >>> >>>>>>>> I see that copyAsync is unsupported in your current 4.2
>> >> > > driver,
>> >> > > >>> does
>> >> > > >>> >>>>>>>> that mean that there's no template support? Or is it
>> some
>> >> > > other
>> >> > > >>> call
>> >> > > >>> >>>>>>>> that does templating now? I'm still getting up to
>> speed on
>> >> > all
>> >> > > >>> of the
>> >> > > >>> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
>> >> > > >>> >>>>>>>> LibvirtComputingResource, since that's the only place
>> >> > > >>> >>>>>>>> createPhysicalDisk is called, and it occurred to me
>> that
>> >> > > >>> >>>>>>>> CreateCommand
>> >> > > >>> >>>>>>>> might be skipped altogether when utilizing storage
>> >> plugins.
>> >> > > >>> >>>>>>>>
>> >> > > >>> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>> >> > > >>> >>>>>>>> <mi...@solidfire.com> wrote:
>> >> > > >>> >>>>>>>> > That's an interesting comment, Marcus.
>> >> > > >>> >>>>>>>> >
>> >> > > >>> >>>>>>>> > It was my intent that it should work with any
>> CloudStack
>> >> > > >>> "managed"
>> >> > > >>> >>>>>>>> storage
>> >> > > >>> >>>>>>>> > that uses an iSCSI target. Even though I'm using
>> CHAP, I
>> >> > > wrote
>> >> > > >>> the
>> >> > > >>> >>>>>>>> code so
>> >> > > >>> >>>>>>>> > CHAP didn't have to be used.
>> >> > > >>> >>>>>>>> >
>> >> > > >>> >>>>>>>> > As I'm doing my testing, I can try to think about
>> >> whether
>> >> > > it is
>> >> > > >>> >>>>>>>> generic
>> >> > > >>> >>>>>>>> > enough to keep those names or not.
>> >> > > >>> >>>>>>>> >
>> >> > > >>> >>>>>>>> > My expectation is that it is generic enough.
>> >> > > >>> >>>>>>>> >
>> >> > > >>> >>>>>>>> >
>> >> > > >>> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
>> >> > > >>> >>>>>>>> shadowsor@gmail.com>wrote:
>> >> > > >>> >>>>>>>> >
>> >> > > >>> >>>>>>>> >> I added a comment to your diff. In general I think
>> it
>> >> > looks
>> >> > > >>> good,
>> >> > > >>> >>>>>>>> >> though I obviously can't vouch for whether or not it
>> >> will
>> >> > > >>> work.
>> >> > > >>> >>>>>>>> One
>> >> > > >>> >>>>>>>> >> thing I do have reservations about is the
>> adaptor/pool
>> >> > > >>> naming. If
>> >> > > >>> >>>>>>>> you
>> >> > > >>> >>>>>>>> >> think the code is generic enough that it will work
>> for
>> >> > > anyone
>> >> > > >>> who
>> >> > > >>> >>>>>>>> does
>> >> > > >>> >>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if
>> >> > > there's
>> >> > > >>> >>>>>>>> anything
>> >> > > >>> >>>>>>>> >> about it that's specific to YOUR iscsi target or
>> how it
>> >> > > likes
>> >> > > >>> to
>> >> > > >>> >>>>>>>> be
>> >> > > >>> >>>>>>>> >> treated then I'd say that they should be named
>> >> something
>> >> > > less
>> >> > > >>> >>>>>>>> generic
>> >> > > >>> >>>>>>>> >> than iScsiAdmStorage.
>> >> > > >>> >>>>>>>> >>
>> >> > > >>> >>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>> >> > > >>> >>>>>>>> >> <mi...@solidfire.com> wrote:
>> >> > > >>> >>>>>>>> >> > Great - thanks!
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> > Just to give you an overview of what my code does
>> >> (for
>> >> > > when
>> >> > > >>> you
>> >> > > >>> >>>>>>>> get a
>> >> > > >>> >>>>>>>> >> > chance to review it):
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> > SolidFireHostListener is registered in
>> >> > > >>> >>>>>>>> SolidfirePrimaryDataStoreProvider.
>> >> > > >>> >>>>>>>> >> > Its hostConnect method is invoked when a host
>> >> connects
>> >> > > with
>> >> > > >>> the
>> >> > > >>> >>>>>>>> CS MS. If
>> >> > > >>> >>>>>>>> >> > the host is running KVM, the listener sends a
>> >> > > >>> >>>>>>>> ModifyStoragePoolCommand to
>> >> > > >>> >>>>>>>> >> > the host. This logic was based off of
>> >> > > DefaultHostListener.
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> > The handling of ModifyStoragePoolCommand is
>> >> unchanged.
>> >> > It
>> >> > > >>> >>>>>>>> invokes
>> >> > > >>> >>>>>>>> >> > createStoragePool on the KVMStoragePoolManager.
>> The
>> >> > > >>> >>>>>>>> KVMStoragePoolManager
>> >> > > >>> >>>>>>>> >> > asks for an adaptor and finds my new one:
>> >> > > >>> >>>>>>>> iScsiAdmStorageAdaptor (which
>> >> > > >>> >>>>>>>> >> was
>> >> > > >>> >>>>>>>> >> > registered in the constructor for
>> >> KVMStoragePoolManager
>> >> > > >>> under
>> >> > > >>> >>>>>>>> the key of
>> >> > > >>> >>>>>>>> >> > StoragePoolType.Iscsi.toString()).
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just
>> makes
>> >> an
>> >> > > >>> instance
>> >> > > >>> >>>>>>>> of
>> >> > > >>> >>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns
>> >> the
>> >> > > >>> pointer
>> >> > > >>> >>>>>>>> to the
>> >> > > >>> >>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is
>> the
>> >> > > UUID
>> >> > > >>> of
>> >> > > >>> >>>>>>>> the storage
>> >> > > >>> >>>>>>>> >> > pool.
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> > When a volume is attached, createPhysicalDisk is
>> >> > invoked
>> >> > > for
>> >> > > >>> >>>>>>>> managed
>> >> > > >>> >>>>>>>> >> > storage rather than getPhysicalDisk.
>> >> createPhysicalDisk
>> >> > > uses
>> >> > > >>> >>>>>>>> iscsiadm to
>> >> > > >>> >>>>>>>> >> > establish the iSCSI connection to the volume on
>> the
>> >> SAN
>> >> > > and
>> >> > > >>> a
>> >> > > >>> >>>>>>>> >> > KVMPhysicalDisk is returned to be used in the
>> attach
>> >> > > logic
>> >> > > >>> that
>> >> > > >>> >>>>>>>> follows.
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> > When a volume is detached, getPhysicalDisk is
>> invoked
>> >> > > with
>> >> > > >>> the
>> >> > > >>> >>>>>>>> IQN of the
>> >> > > >>> >>>>>>>> >> > volume if the storage pool in question is managed
>> >> > > storage.
>> >> > > >>> >>>>>>>> Otherwise, the
>> >> > > >>> >>>>>>>> >> > normal vol.getPath() is used.
>> >> > > >>> >>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>> >> > > >>> >>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be
>> used
>> >> in
>> >> > > the
>> >> > > >>> >>>>>>>> detach logic.
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> > Once the volume has been detached,
>> >> > > >>> >>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
>> >> > > >>> >>>>>>>> >> > is invoked if the storage pool is managed.
>> >> > > >>> deletePhysicalDisk
>> >> > > >>> >>>>>>>> removes the
>> >> > > >>> >>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>> >> > > >>> >>>>>>>> shadowsor@gmail.com
>> >> > > >>> >>>>>>>> >> >wrote:
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> >> It's the log4j properties file in
>> >> /etc/cloudstack/agent
>> >> > > >>> change
>> >> > > >>> >>>>>>>> all INFO
>> >> > > >>> >>>>>>>> >> to
>> >> > > >>> >>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting,
>> you
>> >> > can
>> >> > > >>> tail
>> >> > > >>> >>>>>>>> the log
>> >> > > >>> >>>>>>>> >> when
>> >> > > >>> >>>>>>>> >> >> you try to start the service, or maybe it will
>> spit
>> >> > > >>> something
>> >> > > >>> >>>>>>>> out into
>> >> > > >>> >>>>>>>> >> one
>> >> > > >>> >>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
>> >> > > >>> >>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>> >> > > >>> >>>>>>>> mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> >> wrote:
>> >> > > >>> >>>>>>>> >> >>
>> >> > > >>> >>>>>>>> >> >> > This is how I've been trying to query for the
>> >> status
>> >> > > of
>> >> > > >>> the
>> >> > > >>> >>>>>>>> service (I
>> >> > > >>> >>>>>>>> >> >> > assume it could be started this way, as well,
>> by
>> >> > > changing
>> >> > > >>> >>>>>>>> "status" to
>> >> > > >>> >>>>>>>> >> >> > "start" or "restart"?):
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>> >> > > >>> >>>>>>>> /usr/sbin/service
>> >> > > >>> >>>>>>>> >> >> > cloudstack-agent status
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> > I get this back:
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> > Failed to execute: * could not access PID file
>> for
>> >> > > >>> >>>>>>>> cloudstack-agent
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> > I've made a bunch of code changes recently,
>> >> though,
>> >> > > so I
>> >> > > >>> >>>>>>>> think I'm
>> >> > > >>> >>>>>>>> >> going
>> >> > > >>> >>>>>>>> >> >> to
>> >> > > >>> >>>>>>>> >> >> > rebuild and redeploy everything.
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> > The debug info sounds helpful. Where can I set
>> >> > > >>> enable.debug?
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> > Thanks, Marcus!
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus
>> Sorensen <
>> >> > > >>> >>>>>>>> shadowsor@gmail.com
>> >> > > >>> >>>>>>>> >> >> > >wrote:
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> > > OK, will check it out in the next few days.
>> As
>> >> > > >>> mentioned,
>> >> > > >>> >>>>>>>> you can
>> >> > > >>> >>>>>>>> >> set
>> >> > > >>> >>>>>>>> >> >> up
>> >> > > >>> >>>>>>>> >> >> > > your Ubuntu vm as the management server as
>> well
>> >> if
>> >> > > all
>> >> > > >>> >>>>>>>> else fails.
>> >> > > >>> >>>>>>>> >>  If
>> >> > > >>> >>>>>>>> >> >> > you
>> >> > > >>> >>>>>>>> >> >> > > can get to the mgmt server on 8250 from the
>> KVM
>> >> > > host,
>> >> > > >>> then
>> >> > > >>> >>>>>>>> you need
>> >> > > >>> >>>>>>>> >> to
>> >> > > >>> >>>>>>>> >> >> > > enable.debug on the agent. It won't run
>> without
>> >> > > >>> >>>>>>>> complaining loudly
>> >> > > >>> >>>>>>>> >> if
>> >> > > >>> >>>>>>>> >> >> it
>> >> > > >>> >>>>>>>> >> >> > > can't get to the mgmt server, and I didn't
>> see
>> >> > that
>> >> > > in
>> >> > > >>> >>>>>>>> your agent
>> >> > > >>> >>>>>>>> >> log,
>> >> > > >>> >>>>>>>> >> >> so
>> >> > > >>> >>>>>>>> >> >> > > perhaps its not running. I assume you know
>> how
>> >> to
>> >> > > >>> >>>>>>>> stop/start the
>> >> > > >>> >>>>>>>> >> agent
>> >> > > >>> >>>>>>>> >> >> on
>> >> > > >>> >>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
>> >> > > >>> >>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>> >> > > >>> >>>>>>>> >> >> mike.tutkowski@solidfire.com>
>> >> > > >>> >>>>>>>> >> >> > > wrote:
>> >> > > >>> >>>>>>>> >> >> > >
>> >> > > >>> >>>>>>>> >> >> > > > Hey Marcus,
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > > I haven't yet been able to test my new
>> code,
>> >> > but I
>> >> > > >>> >>>>>>>> thought you
>> >> > > >>> >>>>>>>> >> would
>> >> > > >>> >>>>>>>> >> >> > be a
>> >> > > >>> >>>>>>>> >> >> > > > good person to ask to review it:
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > >
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >>
>> >> > > >>> >>>>>>>> >>
>> >> > > >>> >>>>>>>>
>> >> > > >>>
>> >> > >
>> >> >
>> >>
>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > > All it is supposed to do is attach and
>> detach
>> >> a
>> >> > > data
>> >> > > >>> >>>>>>>> disk (that
>> >> > > >>> >>>>>>>> >> has
>> >> > > >>> >>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the
>> hypervisor.
>> >> The
>> >> > > data
>> >> > > >>> >>>>>>>> disk
>> >> > > >>> >>>>>>>> >> happens to
>> >> > > >>> >>>>>>>> >> >> > be
>> >> > > >>> >>>>>>>> >> >> > > > from SolidFire-backed storage - where we
>> have
>> >> a
>> >> > > 1:1
>> >> > > >>> >>>>>>>> mapping
>> >> > > >>> >>>>>>>> >> between a
>> >> > > >>> >>>>>>>> >> >> > > > CloudStack volume and a data disk.
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > > There is no support for hypervisor
>> snapshots
>> >> or
>> >> > > stuff
>> >> > > >>> >>>>>>>> like that
>> >> > > >>> >>>>>>>> >> >> > (likely a
>> >> > > >>> >>>>>>>> >> >> > > > future release)...just attaching and
>> >> detaching a
>> >> > > data
>> >> > > >>> >>>>>>>> disk in 4.3.
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > > Thanks!
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike
>> >> > Tutkowski <
>> >> > > >>> >>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't
>> remove
>> >> > > >>> >>>>>>>> cloudstack-agent
>> >> > > >>> >>>>>>>> >> >> first.
>> >> > > >>> >>>>>>>> >> >> > > > Would
>> >> > > >>> >>>>>>>> >> >> > > > > that be a problem? I just did a sudo
>> apt-get
>> >> > > >>> install
>> >> > > >>> >>>>>>>> >> >> > cloudstack-agent.
>> >> > > >>> >>>>>>>> >> >> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike
>> >> > > Tutkowski <
>> >> > > >>> >>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>> >> > > >>> >>>>>>>> >> >> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >> I get the same error running the command
>> >> > > manually:
>> >> > > >>> >>>>>>>> >> >> > > > >>
>> >> > > >>> >>>>>>>> >> >> > > > >> mtutkowski@ubuntu
>> :/etc/cloudstack/agent$
>> >> > sudo
>> >> > > >>> >>>>>>>> >> /usr/sbin/service
>> >> > > >>> >>>>>>>> >> >> > > > >> cloudstack-agent status
>> >> > > >>> >>>>>>>> >> >> > > > >>  * could not access PID file for
>> >> > > cloudstack-agent
>> >> > > >>> >>>>>>>> >> >> > > > >>
>> >> > > >>> >>>>>>>> >> >> > > > >>
>> >> > > >>> >>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike
>> >> > > Tutkowski <
>> >> > > >>> >>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>> >> > > >>> >>>>>>>> >> >> > > > >>
>> >> > > >>> >>>>>>>> >> >> > > > >>> agent.log looks OK to me:
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
>> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
>> >> > > >>> >>>>>>>> >> >> (main:null)
>> >> > > >>> >>>>>>>> >> >> > > > Agent
>> >> > > >>> >>>>>>>> >> >> > > > >>> started
>> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
>> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
>> >> > > >>> >>>>>>>> >> >> (main:null)
>> >> > > >>> >>>>>>>> >> >> > > > >>> Implementation Version is
>> 4.3.0-SNAPSHOT
>> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
>> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
>> >> > > >>> >>>>>>>> >> >> (main:null)
>> >> > > >>> >>>>>>>> >> >> > > > >>> agent.properties found at
>> >> > > >>> >>>>>>>> >> /etc/cloudstack/agent/agent.properties
>> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
>> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
>> >> > > >>> >>>>>>>> >> >> (main:null)
>> >> > > >>> >>>>>>>> >> >> > > > >>> Defaulting to using properties file for
>> >> > > storage
>> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
>> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
>> >> > > >>> >>>>>>>> >> >> (main:null)
>> >> > > >>> >>>>>>>> >> >> > > > >>> Defaulting to the constant time backoff
>> >> > > algorithm
>> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO
>> >> > > >>>  [cloud.utils.LogUtils]
>> >> > > >>> >>>>>>>> >> (main:null)
>> >> > > >>> >>>>>>>> >> >> > > log4j
>> >> > > >>> >>>>>>>> >> >> > > > >>> configuration found at
>> >> > > >>> >>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
>> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO
>> >> > >  [cloud.agent.Agent]
>> >> > > >>> >>>>>>>> (main:null)
>> >> > > >>> >>>>>>>> >> id
>> >> > > >>> >>>>>>>> >> >> > is 3
>> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > >  [resource.virtualnetwork.VirtualRoutingResource]
>> >> > > >>> >>>>>>>> (main:null)
>> >> > > >>> >>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to
>> use:
>> >> > > >>> >>>>>>>> >> >> scripts/network/domr/kvm
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > > >>> >>>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log
>> was
>> >> > > >>> >>>>>>>> important. This
>> >> > > >>> >>>>>>>> >> seems
>> >> > > >>> >>>>>>>> >> >> to
>> >> > > >>> >>>>>>>> >> >> > > be
>> >> > > >>> >>>>>>>> >> >> > > > a
>> >> > > >>> >>>>>>>> >> >> > > > >>> problem, but I'm not sure what it might
>> >> > > indicate:
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo
>> /usr/sbin/service
>> >> > > >>> >>>>>>>> cloudstack-agent
>> >> > > >>> >>>>>>>> >> status
>> >> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could
>> not
>> >> > > access
>> >> > > >>> PID
>> >> > > >>> >>>>>>>> file for
>> >> > > >>> >>>>>>>> >> >> > > > >>> cloudstack-agent
>> >> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo
>> /usr/sbin/service
>> >> > > >>> >>>>>>>> cloudstack-agent
>> >> > > >>> >>>>>>>> >> start
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > > >>> >>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM,
>> Marcus
>> >> > > >>> Sorensen <
>> >> > > >>> >>>>>>>> >> >> > > shadowsor@gmail.com
>> >> > > >>> >>>>>>>> >> >> > > > >wrote:
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > > >>> >>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I
>> thought
>> >> it
>> >> > > was
>> >> > > >>> the
>> >> > > >>> >>>>>>>> agent log
>> >> > > >>> >>>>>>>> >> for
>> >> > > >>> >>>>>>>> >> >> > > some
>> >> > > >>> >>>>>>>> >> >> > > > >>>> reason. Is the agent started? That
>> might
>> >> be
>> >> > > the
>> >> > > >>> >>>>>>>> place to
>> >> > > >>> >>>>>>>> >> look.
>> >> > > >>> >>>>>>>> >> >> > There
>> >> > > >>> >>>>>>>> >> >> > > > is
>> >> > > >>> >>>>>>>> >> >> > > > >>>> an
>> >> > > >>> >>>>>>>> >> >> > > > >>>> agent log for the agent and one for
>> the
>> >> > setup
>> >> > > >>> when
>> >> > > >>> >>>>>>>> it adds
>> >> > > >>> >>>>>>>> >> the
>> >> > > >>> >>>>>>>> >> >> > host,
>> >> > > >>> >>>>>>>> >> >> > > > >>>> both
>> >> > > >>> >>>>>>>> >> >> > > > >>>> in /var/log
>> >> > > >>> >>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike
>> >> Tutkowski"
>> >> > <
>> >> > > >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>> >> > > >>> >>>>>>>> >> >> > > > >>>> wrote:
>> >> > > >>> >>>>>>>> >> >> > > > >>>>
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the
>> IP
>> >> > > address
>> >> > > >>> or
>> >> > > >>> >>>>>>>> the KVM
>> >> > > >>> >>>>>>>> >> host?
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > I see this for my host Global
>> Settings
>> >> > > >>> parameter:
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > hostThe ip address of management
>> >> > > >>> >>>>>>>> server192.168.233.1
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> /etc/cloudstack/agent/agent.properties
>> >> > has
>> >> > > a
>> >> > > >>> >>>>>>>> >> >> host=192.168.233.1
>> >> > > >>> >>>>>>>> >> >> > > > value.
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM,
>> >> Marcus
>> >> > > >>> Sorensen
>> >> > > >>> >>>>>>>> <
>> >> > > >>> >>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > >wrote:
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > The log says your mgmt server is
>> >> > > >>> >>>>>>>> 192.168.233.10? But you
>> >> > > >>> >>>>>>>> >> >> tried
>> >> > > >>> >>>>>>>> >> >> > > to
>> >> > > >>> >>>>>>>> >> >> > > > >>>> telnet
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > to
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough
>> to
>> >> > > change
>> >> > > >>> >>>>>>>> that in
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
>> >> /etc/cloudstack/agent/agent.properties,
>> >> > > but
>> >> > > >>> you
>> >> > > >>> >>>>>>>> may want
>> >> > > >>> >>>>>>>> >> to
>> >> > > >>> >>>>>>>> >> >> > edit
>> >> > > >>> >>>>>>>> >> >> > > > the
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > config
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike
>> >> > > Tutkowski" <
>> >> > > >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > wrote:
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > Here's what my
>> >> > /etc/network/interfaces
>> >> > > >>> file
>> >> > > >>> >>>>>>>> looks
>> >> > > >>> >>>>>>>> >> like, if
>> >> > > >>> >>>>>>>> >> >> > > that
>> >> > > >>> >>>>>>>> >> >> > > > >>>> is of
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0
>> network
>> >> > is
>> >> > > the
>> >> > > >>> >>>>>>>> NAT network
>> >> > > >>> >>>>>>>> >> >> > VMware
>> >> > > >>> >>>>>>>> >> >> > > > >>>> Fusion
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > set
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > up):
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto lo
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto eth0
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     post-up route add default gw
>> >> > > >>> >>>>>>>> 192.168.233.2 metric 1
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     pre-down route del default
>> gw
>> >> > > >>> >>>>>>>> 192.168.233.2
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08
>> PM,
>> >> > Mike
>> >> > > >>> >>>>>>>> Tutkowski <
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com>
>> >> wrote:
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > You appear to be correct.
>> This is
>> >> > > from
>> >> > > >>> the
>> >> > > >>> >>>>>>>> MS log
>> >> > > >>> >>>>>>>> >> >> (below).
>> >> > > >>> >>>>>>>> >> >> > > > >>>> Discovery
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > timed
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > out.
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would
>> be.
>> >> My
>> >> > > >>> network
>> >> > > >>> >>>>>>>> settings
>> >> > > >>> >>>>>>>> >> >> > > shouldn't
>> >> > > >>> >>>>>>>> >> >> > > > >>>> have
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > changed
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > since the last time I tried
>> this.
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host
>> >> from
>> >> > > the
>> >> > > >>> MS
>> >> > > >>> >>>>>>>> host and
>> >> > > >>> >>>>>>>> >> vice
>> >> > > >>> >>>>>>>> >> >> > > > versa.
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick
>> >> off
>> >> > a
>> >> > > VM
>> >> > > >>> on
>> >> > > >>> >>>>>>>> the KVM
>> >> > > >>> >>>>>>>> >> host
>> >> > > >>> >>>>>>>> >> >> > and
>> >> > > >>> >>>>>>>> >> >> > > > >>>> ping from
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > it
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > to the MS host.
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS
>> X
>> >> > host
>> >> > > >>> (also
>> >> > > >>> >>>>>>>> running
>> >> > > >>> >>>>>>>> >> the
>> >> > > >>> >>>>>>>> >> >> CS
>> >> > > >>> >>>>>>>> >> >> > > MS)
>> >> > > >>> >>>>>>>> >> >> > > > >>>> to the
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > VM
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>> >> > > >>> >>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>> >> > > >>> :ctx-6b28dc48)
>> >> > > >>> >>>>>>>> Timeout,
>> >> > > >>> >>>>>>>> >> to
>> >> > > >>> >>>>>>>> >> >> > wait
>> >> > > >>> >>>>>>>> >> >> > > > for
>> >> > > >>> >>>>>>>> >> >> > > > >>>> the
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > host
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr,
>> assuming
>> >> it
>> >> > is
>> >> > > >>> failed
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>> >> > > >>> >>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>> >> > > >>> :ctx-6b28dc48)
>> >> > > >>> >>>>>>>> Unable to
>> >> > > >>> >>>>>>>> >> >> find
>> >> > > >>> >>>>>>>> >> >> > > the
>> >> > > >>> >>>>>>>> >> >> > > > >>>> server
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > resources at
>> >> http://192.168.233.10
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>> >> > > >>> >>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>> >> > > >>> :ctx-6b28dc48)
>> >> > > >>> >>>>>>>> Could not
>> >> > > >>> >>>>>>>> >> >> find
>> >> > > >>> >>>>>>>> >> >> > > > >>>> exception:
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > com.cloud.exception.DiscoveryException
>> >> > > >>> in
>> >> > > >>> >>>>>>>> error code
>> >> > > >>> >>>>>>>> >> >> list
>> >> > > >>> >>>>>>>> >> >> > > for
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > exceptions
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>> >> > > >>> >>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>> >> > > >>> :ctx-6b28dc48)
>> >> > > >>> >>>>>>>> Exception:
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > com.cloud.exception.DiscoveryException:
>> >> > > >>> >>>>>>>> Unable to add
>> >> > > >>> >>>>>>>> >> >> the
>> >> > > >>> >>>>>>>> >> >> > > host
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > at
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>>
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > >
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >>
>> >> > > >>> >>>>>>>> >>
>> >> > > >>> >>>>>>>>
>> >> > > >>>
>> >> > >
>> >> >
>> >>
>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to
>> telnet in
>> >> > > from
>> >> > > >>> my
>> >> > > >>> >>>>>>>> KVM host to
>> >> > > >>> >>>>>>>> >> >> the
>> >> > > >>> >>>>>>>> >> >> > MS
>> >> > > >>> >>>>>>>> >> >> > > > >>>> host's
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > 8250
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > port:
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet
>> >> > > >>> 192.168.233.1
>> >> > > >>> >>>>>>>> 8250
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > --
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer,
>> >> > SolidFire
>> >> > > >>> Inc.*
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > o: 303.746.7302
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > Advancing the way the world uses
>> >> the
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > cloud<
>> >> > > >>> >>>>>>>> >> http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > *™*
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > --
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > *Mike Tutkowski*
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > *Senior CloudStack Developer,
>> SolidFire
>> >> > > Inc.*
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > o: 303.746.7302
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > Advancing the way the world uses the
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > cloud<
>> >> > > >>> >>>>>>>> http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>>>>>>> >> >> > > > >>>> > *™*
>> >> > > >>> >>>>>>>> >> >> > > > >>>> >
>> >> > > >>> >>>>>>>> >> >> > > > >>>>
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > > >>> >>>>>>>> >> >> > > > >>> --
>> >> > > >>> >>>>>>>> >> >> > > > >>> *Mike Tutkowski*
>> >> > > >>> >>>>>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire
>> >> > Inc.*
>> >> > > >>> >>>>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> >> >> > > > >>> o: 303.746.7302
>> >> > > >>> >>>>>>>> >> >> > > > >>> Advancing the way the world uses the
>> >> cloud<
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>>>>>>> >> >> > > > >>> *™*
>> >> > > >>> >>>>>>>> >> >> > > > >>>
>> >> > > >>> >>>>>>>> >> >> > > > >>
>> >> > > >>> >>>>>>>> >> >> > > > >>
>> >> > > >>> >>>>>>>> >> >> > > > >>
>> >> > > >>> >>>>>>>> >> >> > > > >> --
>> >> > > >>> >>>>>>>> >> >> > > > >> *Mike Tutkowski*
>> >> > > >>> >>>>>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire
>> >> Inc.*
>> >> > > >>> >>>>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> >> >> > > > >> o: 303.746.7302
>> >> > > >>> >>>>>>>> >> >> > > > >> Advancing the way the world uses the
>> cloud<
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>>>>>>> >> >> > > > >> *™*
>> >> > > >>> >>>>>>>> >> >> > > > >>
>> >> > > >>> >>>>>>>> >> >> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > >
>> >> > > >>> >>>>>>>> >> >> > > > > --
>> >> > > >>> >>>>>>>> >> >> > > > > *Mike Tutkowski*
>> >> > > >>> >>>>>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire
>> >> Inc.*
>> >> > > >>> >>>>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> >> >> > > > > o: 303.746.7302
>> >> > > >>> >>>>>>>> >> >> > > > > Advancing the way the world uses the
>> cloud<
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>>>>>>> >> >> > > > > *™*
>> >> > > >>> >>>>>>>> >> >> > > > >
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > > > --
>> >> > > >>> >>>>>>>> >> >> > > > *Mike Tutkowski*
>> >> > > >>> >>>>>>>> >> >> > > > *Senior CloudStack Developer, SolidFire
>> Inc.*
>> >> > > >>> >>>>>>>> >> >> > > > e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> >> >> > > > o: 303.746.7302
>> >> > > >>> >>>>>>>> >> >> > > > Advancing the way the world uses the
>> >> > > >>> >>>>>>>> >> >> > > > cloud<
>> >> > > >>> http://solidfire.com/solution/overview/?video=play
>> >> > > >>> >>>>>>>> >
>> >> > > >>> >>>>>>>> >> >> > > > *™*
>> >> > > >>> >>>>>>>> >> >> > > >
>> >> > > >>> >>>>>>>> >> >> > >
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >> > --
>> >> > > >>> >>>>>>>> >> >> > *Mike Tutkowski*
>> >> > > >>> >>>>>>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >>> >>>>>>>> >> >> > e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> >> >> > o: 303.746.7302
>> >> > > >>> >>>>>>>> >> >> > Advancing the way the world uses the
>> >> > > >>> >>>>>>>> >> >> > cloud<
>> >> > > http://solidfire.com/solution/overview/?video=play
>> >> > > >>> >
>> >> > > >>> >>>>>>>> >> >> > *™*
>> >> > > >>> >>>>>>>> >> >> >
>> >> > > >>> >>>>>>>> >> >>
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> >
>> >> > > >>> >>>>>>>> >> > --
>> >> > > >>> >>>>>>>> >> > *Mike Tutkowski*
>> >> > > >>> >>>>>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >>> >>>>>>>> >> > e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> >> > o: 303.746.7302
>> >> > > >>> >>>>>>>> >> > Advancing the way the world uses the
>> >> > > >>> >>>>>>>> >> > cloud<
>> >> > http://solidfire.com/solution/overview/?video=play
>> >> > > >
>> >> > > >>> >>>>>>>> >> > *™*
>> >> > > >>> >>>>>>>> >>
>> >> > > >>> >>>>>>>> >
>> >> > > >>> >>>>>>>> >
>> >> > > >>> >>>>>>>> >
>> >> > > >>> >>>>>>>> > --
>> >> > > >>> >>>>>>>> > *Mike Tutkowski*
>> >> > > >>> >>>>>>>> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >>> >>>>>>>> > e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>>> > o: 303.746.7302
>> >> > > >>> >>>>>>>> > Advancing the way the world uses the
>> >> > > >>> >>>>>>>> > cloud<
>> >> http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>>>>>>> > *™*
>> >> > > >>> >>>>>>>>
>> >> > > >>> >>>>>>>
>> >> > > >>> >>>>>>>
>> >> > > >>> >>>>>>>
>> >> > > >>> >>>>>>> --
>> >> > > >>> >>>>>>> *Mike Tutkowski*
>> >> > > >>> >>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >>> >>>>>>> e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>>> o: 303.746.7302
>> >> > > >>> >>>>>>> Advancing the way the world uses the cloud<
>> >> > > >>> http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>>>>>> *™*
>> >> > > >>> >>>>>>>
>> >> > > >>> >>>>>>
>> >> > > >>> >>>>>>
>> >> > > >>> >>>>>>
>> >> > > >>> >>>>>> --
>> >> > > >>> >>>>>> *Mike Tutkowski*
>> >> > > >>> >>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >>> >>>>>> e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>>> o: 303.746.7302
>> >> > > >>> >>>>>> Advancing the way the world uses the cloud<
>> >> > > >>> http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>>>>> *™*
>> >> > > >>> >>>>>>
>> >> > > >>> >>>>>
>> >> > > >>> >>>>>
>> >> > > >>> >>>>>
>> >> > > >>> >>>>> --
>> >> > > >>> >>>>> *Mike Tutkowski*
>> >> > > >>> >>>>> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >>> >>>>> e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>>> o: 303.746.7302
>> >> > > >>> >>>>> Advancing the way the world uses the cloud<
>> >> > > >>> http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>>>> *™*
>> >> > > >>> >>>>>
>> >> > > >>> >>>>
>> >> > > >>> >>>>
>> >> > > >>> >>>>
>> >> > > >>> >>>> --
>> >> > > >>> >>>> *Mike Tutkowski*
>> >> > > >>> >>>> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >>> >>>> e: mike.tutkowski@solidfire.com
>> >> > > >>> >>>> o: 303.746.7302
>> >> > > >>> >>>> Advancing the way the world uses the cloud<
>> >> > > >>> http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>>> *™*
>> >> > > >>> >>>>
>> >> > > >>> >>>
>> >> > > >>> >>>
>> >> > > >>> >>>
>> >> > > >>> >>> --
>> >> > > >>> >>> *Mike Tutkowski*
>> >> > > >>> >>> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >>> >>> e: mike.tutkowski@solidfire.com
>> >> > > >>> >>> o: 303.746.7302
>> >> > > >>> >>> Advancing the way the world uses the cloud<
>> >> > > >>> http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >>> *™*
>> >> > > >>> >>>
>> >> > > >>> >>
>> >> > > >>> >>
>> >> > > >>> >>
>> >> > > >>> >> --
>> >> > > >>> >> *Mike Tutkowski*
>> >> > > >>> >> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >>> >> e: mike.tutkowski@solidfire.com
>> >> > > >>> >> o: 303.746.7302
>> >> > > >>> >> Advancing the way the world uses the cloud<
>> >> > > >>> http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> >> *™*
>> >> > > >>> >>
>> >> > > >>> >
>> >> > > >>> >
>> >> > > >>> >
>> >> > > >>> > --
>> >> > > >>> > *Mike Tutkowski*
>> >> > > >>> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >>> > e: mike.tutkowski@solidfire.com
>> >> > > >>> > o: 303.746.7302
>> >> > > >>> > Advancing the way the world uses the
>> >> > > >>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > > >>> > *™*
>> >> > > >>>
>> >> > > >>
>> >> > > >>
>> >> > > >>
>> >> > > >> --
>> >> > > >> *Mike Tutkowski*
>> >> > > >> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > >> e: mike.tutkowski@solidfire.com
>> >> > > >> o: 303.746.7302
>> >> > > >> Advancing the way the world uses the cloud<
>> >> > > http://solidfire.com/solution/overview/?video=play>
>> >> > > >> *™*
>> >> > > >>
>> >> > > >
>> >> > > >
>> >> > > >
>> >> > > > --
>> >> > > > *Mike Tutkowski*
>> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > e: mike.tutkowski@solidfire.com
>> >> > > > o: 303.746.7302
>> >> > > > Advancing the way the world uses the
>> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > > > *™*
>> >> > >
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > *Mike Tutkowski*
>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > e: mike.tutkowski@solidfire.com
>> >> > o: 303.746.7302
>> >> > Advancing the way the world uses the
>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > *™*
>> >> >
>> >>
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
This is what a fresh agent.properties file looks like on my system.

I expect that if I add it to a cluster, the empty, localhost, and default
values below will be filled in (there's a sketch of what I'd expect right
after the listing).

I plan to try to add it to a cluster in a bit.

# The GUID to identify the agent with, this is mandatory!
# Generate with "uuidgen"
guid=

#resource= the java class, which agent load to execute
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource

#workers= number of threads running in agent
workers=5

#host= The IP address of management server
host=localhost

#port = The port management server listening on, default is 8250
port=8250

#cluster= The cluster which the agent belongs to
cluster=default

#pod= The pod which the agent belongs to
pod=default

#zone= The zone which the agent belongs to
zone=default
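
For reference, here is a sketch of what I'd expect this same file to look
like once cloudstack-setup-agent has run successfully against my setup. This
is just a guess: the guid, management-server IP, and zone/pod/cluster IDs are
the values from the setup command Marcus quotes below, and the setup script
may write additional properties (the NIC settings, for example) that I
haven't listed here.

guid=6b4aa1c2-2ac9-3c60-aabe-704aed40c684
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
workers=5
# -m flag: the management server IP instead of localhost
host=192.168.233.1
port=8250
# -c, -p and -z flags: the cluster, pod and zone IDs
cluster=1
pod=1
zone=1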



On Tue, Sep 24, 2013 at 8:55 AM, Marcus Sorensen <sh...@gmail.com>wrote:

> I still haven't seen your agent.properties. This would tell me if your
> setup succeeded. At this point my best guess is that
> "cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> --prvNic=cloudbr0 --guestNic=cloudbr0" failed in some fashion. You can
> run it manually at any time to see. Once that is run, then the agent
> should come up. The resource name in your code is pulled from
> agent.properties (I believe) and is usually
> "resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource".
>
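
(A minimal sketch of that manual check, put together from commands already
quoted in this thread; run on the KVM host as root, and note that the
zone/pod/cluster IDs, guid and bridge name are simply the values used
elsewhere in this thread:

  # re-run the setup by hand and watch its output
  cloudstack-setup-agent -m 192.168.233.1 -z 1 -p 1 -c 1 \
      -g 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a \
      --pubNic=cloudbr0 --prvNic=cloudbr0 --guestNic=cloudbr0

  # confirm it filled in host=, guid= and the zone/pod/cluster values
  cat /etc/cloudstack/agent/agent.properties

  # then start the agent and watch the log while it comes up
  service cloudstack-agent start
  tail -f /var/log/cloudstack/agent/agent.log )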
> On Tue, Sep 24, 2013 at 12:12 AM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > I've been narrowing it down by putting in a million print-to-log
> statements.
> >
> > Do you know if it is a problem that value ends up null (in a constructor
> > for Agent)?
> >
> > String value = _shell.getPersistentProperty(getResourceName(), "id");
> >
> > In that same constructor, this line never finishes:
> >
> > if (!_resource.configure(getResourceName(), params)) {
> >
> > I need to dig into the configure method to see what's going on there.
> >
> >
> > On Mon, Sep 23, 2013 at 5:45 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >
> >> It might be a centos specific thing. These are created by the init
> scripts.
> >> Check your agent init script on Ubuntu and see if you can decipher
> where
> >> it sends stdout.
> >> On Sep 23, 2013 5:21 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com
> >
> >> wrote:
> >>
> >> > Weird...no such file exists.
> >> >
> >> >
> >> > On Mon, Sep 23, 2013 at 4:54 PM, Marcus Sorensen <shadowsor@gmail.com
> >> > >wrote:
> >> >
> >> > > maybe cloudstack-agent.out
> >> > >
> >> > > On Mon, Sep 23, 2013 at 4:44 PM, Mike Tutkowski
> >> > > <mi...@solidfire.com> wrote:
> >> > > > OK, so, nothing is screaming out in the logs. I did notice the
> >> > following:
> >> > > >
> >> > > > From setup.log:
> >> > > >
> >> > > > DEBUG:root:execute:apparmor_status |grep libvirt
> >> > > >
> >> > > > DEBUG:root:Failed to execute:
> >> > > >
> >> > > >
> >> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
> >> > > >
> >> > > > DEBUG:root:Failed to execute: * could not access PID file for
> >> > > > cloudstack-agent
> >> > > >
> >> > > >
> >> > > > This is the final line in this log file:
> >> > > >
> >> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
> >> > > >
> >> > > >
> >> > > > This is from agent.log:
> >> > > >
> >> > > > 2013-09-23 15:30:55,549 DEBUG [cloud.agent.AgentShell] (main:null)
> >> > > Checking
> >> > > > to see if agent.pid exists.
> >> > > >
> >> > > > 2013-09-23 15:30:55,655 DEBUG [cloud.utils.ProcessUtil]
> (main:null)
> >> > > > Executing: bash -c echo $PPID
> >> > > >
> >> > > > 2013-09-23 15:30:55,742 DEBUG [cloud.utils.ProcessUtil]
> (main:null)
> >> > > > Execution is successful.
> >> > > >
> >> > > > 2013-09-23 15:30:56,000 INFO  [cloud.agent.Agent] (main:null) id
> is
> >> > > >
> >> > > > 2013-09-23 15:30:56,000 DEBUG [cloud.resource.ServerResourceBase]
> >> > > > (main:null) Retrieving network interface: cloudbr0
> >> > > >
> >> > > > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
> >> > > > (main:null) Retrieving network interface: cloudbr0
> >> > > >
> >> > > > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
> >> > > > (main:null) Retrieving network interface: null
> >> > > >
> >> > > > 2013-09-23 15:30:56,017 DEBUG [cloud.resource.ServerResourceBase]
> >> > > > (main:null) Retrieving network interface: null
> >> > > >
> >> > > >
> >> > > > The following kinds of lines are repeated for a bunch of different
> >> .sh
> >> > > > files. I think they often end up being found here:
> >> > > > /usr/share/cloudstack-common/scripts/network/domr, so this is
> >> probably
> >> > > not
> >> > > > an issue.
> >> > > >
> >> > > >
> >> > > > 2013-09-23 15:30:56,111 DEBUG [utils.script.Script] (main:null)
> >> Looking
> >> > > for
> >> > > > call_firewall.sh in the classpath
> >> > > >
> >> > > > 2013-09-23 15:30:56,112 DEBUG [utils.script.Script] (main:null)
> >> System
> >> > > > resource: null
> >> > > >
> >> > > > 2013-09-23 15:30:56,113 DEBUG [utils.script.Script] (main:null)
> >> > Classpath
> >> > > > resource: null
> >> > > >
> >> > > > 2013-09-23 15:30:56,123 DEBUG [utils.script.Script] (main:null)
> >> Looking
> >> > > for
> >> > > > call_firewall.sh
> >> > > >
> >> > > >
> >> > > > Is there a log file for the Java code that I could write stuff
> out to
> >> > and
> >> > > > see how far we get?
> >> > > >
> >> > > >
> >> > > > On Mon, Sep 23, 2013 at 3:17 PM, Mike Tutkowski <
> >> > > > mike.tutkowski@solidfire.com> wrote:
> >> > > >
> >> > > >> Thanks, Marcus
> >> > > >>
> >> > > >> I've been developing on Windows for most of my time, so a bunch
> of
> >> > these
> >> > > >> Linux-type commands are new to me and I don't always interpret
> the
> >> > > output
> >> > > >> correctly. Getting there. :)
> >> > > >>
> >> > > >>
> >> > > >> On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <
> >> shadowsor@gmail.com
> >> > > >wrote:
> >> > > >>
> >> > > >>> Nope, not running. That's just your grep process. It would look
> >> like:
> >> > > >>>
> >> > > >>> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
> >> > > >>>
> >> > > >>>
> >> > >
> >> >
> >>
> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
> >> > > >>>
> >> > > >>> Your agent log should tell you why it failed to start if you
> set it
> >> > in
> >> > > >>> debug and try to start... or maybe cloudstack-agent.out if it
> >> doesn't
> >> > > >>> get far enough (say it's missing a class or something and can't
> >> > > >>> start).
> >> > > >>>
> >> > > >>> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
> >> > > >>> <mi...@solidfire.com> wrote:
> >> > > >>> > Looks like it's running, though:
> >> > > >>> >
> >> > > >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> >> > > >>> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep
> --color=auto
> >> > > jsvc
> >> > > >>> >
> >> > > >>> >
> >> > > >>> >
> >> > > >>> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
> >> > > >>> > mike.tutkowski@solidfire.com> wrote:
> >> > > >>> >
> >> > > >>> >> Hey Marcus,
> >> > > >>> >>
> >> > > >>> >> Maybe you could give me a better idea of what the "flow" is
> when
> >> > > >>> adding a
> >> > > >>> >> KVM host.
> >> > > >>> >>
> >> > > >>> >> It looks like we SSH into the potential KVM host and execute
> a
> >> > > startup
> >> > > >>> >> script (giving it necessary info about the cloud and the
> >> > management
> >> > > >>> server
> >> > > >>> >> it should talk to).
> >> > > >>> >>
> >> > > >>> >> After this, is the Java VM started?
> >> > > >>> >>
> >> > > >>> >> After a reboot, I assume the JVM is started automatically?
> >> > > >>> >>
> >> > > >>> >> How do you debug your KVM-side Java code?
> >> > > >>> >>
> >> > > >>> >> Been looking through the logs and nothing obvious sticks
> out. I
> >> > will
> >> > > >>> have
> >> > > >>> >> another look.
> >> > > >>> >>
> >> > > >>> >> Thanks
> >> > > >>> >>
> >> > > >>> >>
> >> > > >>> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
> >> > > >>> >> mike.tutkowski@solidfire.com> wrote:
> >> > > >>> >>
> >> > > >>> >>> Hey Marcus,
> >> > > >>> >>>
> >> > > >>> >>> I've been investigating my issue with not being able to add
> a
> >> KVM
> >> > > >>> host to
> >> > > >>> >>> CS.
> >> > > >>> >>>
> >> > > >>> >>> For what it's worth, this comes back successful:
> >> > > >>> >>>
> >> > > >>> >>> SSHCmdHelper.sshExecuteCmd(sshConnection,
> >> "cloudstack-setup-agent
> >> > > " +
> >> > > >>> >>> parameters, 3);
> >> > > >>> >>>
> >> > > >>> >>> This is what the command looks like:
> >> > > >>> >>>
> >> > > >>> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
> >> > > >>> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> >> > > >>> --prvNic=cloudbr0
> >> > > >>> >>> --guestNic=cloudbr0
> >> > > >>> >>>
> >> > > >>> >>> The problem is this method in LibvirtServerDiscoverer never
> >> > finds a
> >> > > >>> >>> matching host in the DB:
> >> > > >>> >>>
> >> > > >>> >>> waitForHostConnect(long dcId, long podId, long clusterId,
> >> String
> >> > > guid)
> >> > > >>> >>>
> >> > > >>> >>> I assume once the KVM host is up and running that it's
> supposed
> >> > to
> >> > > >>> call
> >> > > >>> >>> into the CS MS so the DB can be updated as such?
> >> > > >>> >>>
> >> > > >>> >>> If so, the problem must be on the KVM side.
> >> > > >>> >>>
> >> > > >>> >>> I did run this again (from the KVM host) to see if the
> >> connection
> >> > > was
> >> > > >>> in
> >> > > >>> >>> place:
> >> > > >>> >>>
> >> > > >>> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> >> > > >>> >>>
> >> > > >>> >>> Trying 192.168.233.1...
> >> > > >>> >>>
> >> > > >>> >>> Connected to 192.168.233.1.
> >> > > >>> >>>
> >> > > >>> >>> Escape character is '^]'.
> >> > > >>> >>> So that looks good.
> >> > > >>> >>>
> >> > > >>> >>> I turned on more info in the debug log, but nothing obvious
> >> jumps
> >> > > out
> >> > > >>> as
> >> > > >>> >>> of yet.
> >> > > >>> >>>
> >> > > >>> >>> If you have any thoughts on this, please shoot them my way.
> :)
> >> > > >>> >>>
> >> > > >>> >>> Thanks!
> >> > > >>> >>>
> >> > > >>> >>>
> >> > > >>> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
> >> > > >>> >>> mike.tutkowski@solidfire.com> wrote:
> >> > > >>> >>>
> >> > > >>> >>>> First step is for me to get this working for KVM, though.
> :)
> >> > > >>> >>>>
> >> > > >>> >>>> Once I do that, I can perhaps make modifications to the
> >> storage
> >> > > >>> >>>> framework and hypervisor plug-ins to refactor the logic and
> >> > such.
> >> > > >>> >>>>
> >> > > >>> >>>>
> >> > > >>> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
> >> > > >>> >>>> mike.tutkowski@solidfire.com> wrote:
> >> > > >>> >>>>
> >> > > >>> >>>>> Same would work for KVM.
> >> > > >>> >>>>>
> >> > > >>> >>>>> If CreateCommand and DestroyCommand were called at the
> >> > > appropriate
> >> > > >>> >>>>> times by the storage framework, I could move my connect
> and
> >> > > >>> disconnect
> >> > > >>> >>>>> logic out of the attach/detach logic.
> >> > > >>> >>>>>
> >> > > >>> >>>>>
> >> > > >>> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
> >> > > >>> >>>>> mike.tutkowski@solidfire.com> wrote:
> >> > > >>> >>>>>
> >> > > >>> >>>>>> Conversely, if the storage framework called the
> >> DestroyCommand
> >> > > for
> >> > > >>> >>>>>> managed storage after the DetachCommand, then I could
> have
> >> had
> >> > > my
> >> > > >>> remove
> >> > > >>> >>>>>> SR/datastore logic placed in the DestroyCommand handling
> >> > rather
> >> > > >>> than in the
> >> > > >>> >>>>>> DetachCommand handling.
> >> > > >>> >>>>>>
> >> > > >>> >>>>>>
> >> > > >>> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
> >> > > >>> >>>>>> mike.tutkowski@solidfire.com> wrote:
> >> > > >>> >>>>>>
> >> > > >>> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
> >> > > >>> >>>>>>>
> >> > > >>> >>>>>>> The initial approach that was discussed during 4.2 was
> for
> >> me
> >> > > to
> >> > > >>> >>>>>>> modify the attach/detach logic only in the XenServer and
> >> > VMware
> >> > > >>> hypervisor
> >> > > >>> >>>>>>> plug-ins.
> >> > > >>> >>>>>>>
> >> > > >>> >>>>>>> Now that I think about it more, though, I kind of would
> >> have
> >> > > >>> liked to
> >> > > >>> >>>>>>> have the storage framework send a CreateCommand to the
> >> > > hypervisor
> >> > > >>> before
> >> > > >>> >>>>>>> sending the AttachCommand if the storage in question was
> >> > > managed.
> >> > > >>> >>>>>>>
> >> > > >>> >>>>>>> Then I could have created my SR/datastore in the
> >> > CreateCommand
> >> > > and
> >> > > >>> >>>>>>> the AttachCommand would have had the SR/datastore that
> it
> >> was
> >> > > >>> always
> >> > > >>> >>>>>>> expecting (and I wouldn't have had to create the
> >> SR/datastore
> >> > > in
> >> > > >>> the
> >> > > >>> >>>>>>> AttachCommand).
> >> > > >>> >>>>>>>
> >> > > >>> >>>>>>>
> >> > > >>> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
> >> > > >>> >>>>>>> shadowsor@gmail.com> wrote:
> >> > > >>> >>>>>>>
> >> > > >>> >>>>>>>> Yeah, I think it probably is as well, but I figured
> you'd
> >> be
> >> > > in a
> >> > > >>> >>>>>>>> better position to tell.
> >> > > >>> >>>>>>>>
> >> > > >>> >>>>>>>> I see that copyAsync is unsupported in your current 4.2
> >> > > driver,
> >> > > >>> does
> >> > > >>> >>>>>>>> that mean that there's no template support? Or is it
> some
> >> > > other
> >> > > >>> call
> >> > > >>> >>>>>>>> that does templating now? I'm still getting up to
> speed on
> >> > all
> >> > > >>> of the
> >> > > >>> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
> >> > > >>> >>>>>>>> LibvirtComputingResource, since that's the only place
> >> > > >>> >>>>>>>> createPhysicalDisk is called, and it occurred to me
> that
> >> > > >>> >>>>>>>> CreateCommand
> >> > > >>> >>>>>>>> might be skipped altogether when utilizing storage
> >> plugins.
> >> > > >>> >>>>>>>>
> >> > > >>> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
> >> > > >>> >>>>>>>> <mi...@solidfire.com> wrote:
> >> > > >>> >>>>>>>> > That's an interesting comment, Marcus.
> >> > > >>> >>>>>>>> >
> >> > > >>> >>>>>>>> > It was my intent that it should work with any
> CloudStack
> >> > > >>> "managed"
> >> > > >>> >>>>>>>> storage
> >> > > >>> >>>>>>>> > that uses an iSCSI target. Even though I'm using
> CHAP, I
> >> > > wrote
> >> > > >>> the
> >> > > >>> >>>>>>>> code so
> >> > > >>> >>>>>>>> > CHAP didn't have to be used.
> >> > > >>> >>>>>>>> >
> >> > > >>> >>>>>>>> > As I'm doing my testing, I can try to think about
> >> whether
> >> > > it is
> >> > > >>> >>>>>>>> generic
> >> > > >>> >>>>>>>> > enough to keep those names or not.
> >> > > >>> >>>>>>>> >
> >> > > >>> >>>>>>>> > My expectation is that it is generic enough.
> >> > > >>> >>>>>>>> >
> >> > > >>> >>>>>>>> >
> >> > > >>> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
> >> > > >>> >>>>>>>> shadowsor@gmail.com>wrote:
> >> > > >>> >>>>>>>> >
> >> > > >>> >>>>>>>> >> I added a comment to your diff. In general I think
> it
> >> > looks
> >> > > >>> good,
> >> > > >>> >>>>>>>> >> though I obviously can't vouch for whether or not it
> >> will
> >> > > >>> work.
> >> > > >>> >>>>>>>> One
> >> > > >>> >>>>>>>> >> thing I do have reservations about is the
> adaptor/pool
> >> > > >>> naming. If
> >> > > >>> >>>>>>>> you
> >> > > >>> >>>>>>>> >> think the code is generic enough that it will work
> for
> >> > > anyone
> >> > > >>> who
> >> > > >>> >>>>>>>> does
> >> > > >>> >>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if
> >> > > there's
> >> > > >>> >>>>>>>> anything
> >> > > >>> >>>>>>>> >> about it that's specific to YOUR iscsi target or
> how it
> >> > > likes
> >> > > >>> to
> >> > > >>> >>>>>>>> be
> >> > > >>> >>>>>>>> >> treated then I'd say that they should be named
> >> something
> >> > > less
> >> > > >>> >>>>>>>> generic
> >> > > >>> >>>>>>>> >> than iScsiAdmStorage.
> >> > > >>> >>>>>>>> >>
> >> > > >>> >>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
> >> > > >>> >>>>>>>> >> <mi...@solidfire.com> wrote:
> >> > > >>> >>>>>>>> >> > Great - thanks!
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> > Just to give you an overview of what my code does
> >> (for
> >> > > when
> >> > > >>> you
> >> > > >>> >>>>>>>> get a
> >> > > >>> >>>>>>>> >> > chance to review it):
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> > SolidFireHostListener is registered in
> >> > > >>> >>>>>>>> SolidfirePrimaryDataStoreProvider.
> >> > > >>> >>>>>>>> >> > Its hostConnect method is invoked when a host
> >> connects
> >> > > with
> >> > > >>> the
> >> > > >>> >>>>>>>> CS MS. If
> >> > > >>> >>>>>>>> >> > the host is running KVM, the listener sends a
> >> > > >>> >>>>>>>> ModifyStoragePoolCommand to
> >> > > >>> >>>>>>>> >> > the host. This logic was based off of
> >> > > DefaultHostListener.
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> > The handling of ModifyStoragePoolCommand is
> >> unchanged.
> >> > It
> >> > > >>> >>>>>>>> invokes
> >> > > >>> >>>>>>>> >> > createStoragePool on the KVMStoragePoolManager.
> The
> >> > > >>> >>>>>>>> KVMStoragePoolManager
> >> > > >>> >>>>>>>> >> > asks for an adaptor and finds my new one:
> >> > > >>> >>>>>>>> iScsiAdmStorageAdaptor (which
> >> > > >>> >>>>>>>> >> was
> >> > > >>> >>>>>>>> >> > registered in the constructor for
> >> KVMStoragePoolManager
> >> > > >>> under
> >> > > >>> >>>>>>>> the key of
> >> > > >>> >>>>>>>> >> > StoragePoolType.Iscsi.toString()).
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just
> makes
> >> an
> >> > > >>> instance
> >> > > >>> >>>>>>>> of
> >> > > >>> >>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns
> >> the
> >> > > >>> pointer
> >> > > >>> >>>>>>>> to the
> >> > > >>> >>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is
> the
> >> > > UUID
> >> > > >>> of
> >> > > >>> >>>>>>>> the storage
> >> > > >>> >>>>>>>> >> > pool.
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> > When a volume is attached, createPhysicalDisk is
> >> > invoked
> >> > > for
> >> > > >>> >>>>>>>> managed
> >> > > >>> >>>>>>>> >> > storage rather than getPhysicalDisk.
> >> createPhysicalDisk
> >> > > uses
> >> > > >>> >>>>>>>> iscsiadm to
> >> > > >>> >>>>>>>> >> > establish the iSCSI connection to the volume on
> the
> >> SAN
> >> > > and
> >> > > >>> a
> >> > > >>> >>>>>>>> >> > KVMPhysicalDisk is returned to be used in the
> attach
> >> > > logic
> >> > > >>> that
> >> > > >>> >>>>>>>> follows.
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> > When a volume is detached, getPhysicalDisk is
> invoked
> >> > > with
> >> > > >>> the
> >> > > >>> >>>>>>>> IQN of the
> >> > > >>> >>>>>>>> >> > volume if the storage pool in question is managed
> >> > > storage.
> >> > > >>> >>>>>>>> Otherwise, the
> >> > > >>> >>>>>>>> >> > normal vol.getPath() is used.
> >> > > >>> >>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
> >> > > >>> >>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be
> used
> >> in
> >> > > the
> >> > > >>> >>>>>>>> detach logic.
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> > Once the volume has been detached,
> >> > > >>> >>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
> >> > > >>> >>>>>>>> >> > is invoked if the storage pool is managed.
> >> > > >>> deletePhysicalDisk
> >> > > >>> >>>>>>>> removes the
> >> > > >>> >>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
> >> > > >>> >>>>>>>> shadowsor@gmail.com
> >> > > >>> >>>>>>>> >> >wrote:
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> >> It's the log4j properties file in
> >> /etc/cloudstack/agent
> >> > > >>> change
> >> > > >>> >>>>>>>> all INFO
> >> > > >>> >>>>>>>> >> to
> >> > > >>> >>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting,
> you
> >> > can
> >> > > >>> tail
> >> > > >>> >>>>>>>> the log
> >> > > >>> >>>>>>>> >> when
> >> > > >>> >>>>>>>> >> >> you try to start the service, or maybe it will
> spit
> >> > > >>> something
> >> > > >>> >>>>>>>> out into
> >> > > >>> >>>>>>>> >> one
> >> > > >>> >>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
> >> > > >>> >>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
> >> > > >>> >>>>>>>> mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> >> wrote:
> >> > > >>> >>>>>>>> >> >>
> >> > > >>> >>>>>>>> >> >> > This is how I've been trying to query for the
> >> status
> >> > > of
> >> > > >>> the
> >> > > >>> >>>>>>>> service (I
> >> > > >>> >>>>>>>> >> >> > assume it could be started this way, as well,
> by
> >> > > changing
> >> > > >>> >>>>>>>> "status" to
> >> > > >>> >>>>>>>> >> >> > "start" or "restart"?):
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> >> > > >>> >>>>>>>> /usr/sbin/service
> >> > > >>> >>>>>>>> >> >> > cloudstack-agent status
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> > I get this back:
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> > Failed to execute: * could not access PID file
> for
> >> > > >>> >>>>>>>> cloudstack-agent
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> > I've made a bunch of code changes recently,
> >> though,
> >> > > so I
> >> > > >>> >>>>>>>> think I'm
> >> > > >>> >>>>>>>> >> going
> >> > > >>> >>>>>>>> >> >> to
> >> > > >>> >>>>>>>> >> >> > rebuild and redeploy everything.
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> > The debug info sounds helpful. Where can I set
> >> > > >>> enable.debug?
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> > Thanks, Marcus!
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus
> Sorensen <
> >> > > >>> >>>>>>>> shadowsor@gmail.com
> >> > > >>> >>>>>>>> >> >> > >wrote:
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> > > OK, will check it out in the next few days.
> As
> >> > > >>> mentioned,
> >> > > >>> >>>>>>>> you can
> >> > > >>> >>>>>>>> >> set
> >> > > >>> >>>>>>>> >> >> up
> >> > > >>> >>>>>>>> >> >> > > your Ubuntu vm as the management server as
> well
> >> if
> >> > > all
> >> > > >>> >>>>>>>> else fails.
> >> > > >>> >>>>>>>> >>  If
> >> > > >>> >>>>>>>> >> >> > you
> >> > > >>> >>>>>>>> >> >> > > can get to the mgmt server on 8250 from the
> KVM
> >> > > host,
> >> > > >>> then
> >> > > >>> >>>>>>>> you need
> >> > > >>> >>>>>>>> >> to
> >> > > >>> >>>>>>>> >> >> > > enable.debug on the agent. It won't run
> without
> >> > > >>> >>>>>>>> complaining loudly
> >> > > >>> >>>>>>>> >> if
> >> > > >>> >>>>>>>> >> >> it
> >> > > >>> >>>>>>>> >> >> > > can't get to the mgmt server, and I didn't
> see
> >> > that
> >> > > in
> >> > > >>> >>>>>>>> your agent
> >> > > >>> >>>>>>>> >> log,
> >> > > >>> >>>>>>>> >> >> so
> >> > > >>> >>>>>>>> >> >> > > perhaps its not running. I assume you know
> how
> >> to
> >> > > >>> >>>>>>>> stop/start the
> >> > > >>> >>>>>>>> >> agent
> >> > > >>> >>>>>>>> >> >> on
> >> > > >>> >>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
> >> > > >>> >>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
> >> > > >>> >>>>>>>> >> >> mike.tutkowski@solidfire.com>
> >> > > >>> >>>>>>>> >> >> > > wrote:
> >> > > >>> >>>>>>>> >> >> > >
> >> > > >>> >>>>>>>> >> >> > > > Hey Marcus,
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > > I haven't yet been able to test my new
> code,
> >> > but I
> >> > > >>> >>>>>>>> thought you
> >> > > >>> >>>>>>>> >> would
> >> > > >>> >>>>>>>> >> >> > be a
> >> > > >>> >>>>>>>> >> >> > > > good person to ask to review it:
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > >
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >>
> >> > > >>> >>>>>>>> >>
> >> > > >>> >>>>>>>>
> >> > > >>>
> >> > >
> >> >
> >>
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > > All it is supposed to do is attach and
> detach
> >> a
> >> > > data
> >> > > >>> >>>>>>>> disk (that
> >> > > >>> >>>>>>>> >> has
> >> > > >>> >>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the
> hypervisor.
> >> The
> >> > > data
> >> > > >>> >>>>>>>> disk
> >> > > >>> >>>>>>>> >> happens to
> >> > > >>> >>>>>>>> >> >> > be
> >> > > >>> >>>>>>>> >> >> > > > from SolidFire-backed storage - where we
> have
> >> a
> >> > > 1:1
> >> > > >>> >>>>>>>> mapping
> >> > > >>> >>>>>>>> >> between a
> >> > > >>> >>>>>>>> >> >> > > > CloudStack volume and a data disk.
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > > There is no support for hypervisor
> snapshots
> >> or
> >> > > stuff
> >> > > >>> >>>>>>>> like that
> >> > > >>> >>>>>>>> >> >> > (likely a
> >> > > >>> >>>>>>>> >> >> > > > future release)...just attaching and
> >> detaching a
> >> > > data
> >> > > >>> >>>>>>>> disk in 4.3.
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > > Thanks!
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike
> >> > Tutkowski <
> >> > > >>> >>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't
> remove
> >> > > >>> >>>>>>>> cloudstack-agent
> >> > > >>> >>>>>>>> >> >> first.
> >> > > >>> >>>>>>>> >> >> > > > Would
> >> > > >>> >>>>>>>> >> >> > > > > that be a problem? I just did a sudo
> apt-get
> >> > > >>> install
> >> > > >>> >>>>>>>> >> >> > cloudstack-agent.
> >> > > >>> >>>>>>>> >> >> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >
> >> > > >>> >>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike
> >> > > Tutkowski <
> >> > > >>> >>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
> >> > > >>> >>>>>>>> >> >> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >> I get the same error running the command
> >> > > manually:
> >> > > >>> >>>>>>>> >> >> > > > >>
> >> > > >>> >>>>>>>> >> >> > > > >> mtutkowski@ubuntu
> :/etc/cloudstack/agent$
> >> > sudo
> >> > > >>> >>>>>>>> >> /usr/sbin/service
> >> > > >>> >>>>>>>> >> >> > > > >> cloudstack-agent status
> >> > > >>> >>>>>>>> >> >> > > > >>  * could not access PID file for
> >> > > cloudstack-agent
> >> > > >>> >>>>>>>> >> >> > > > >>
> >> > > >>> >>>>>>>> >> >> > > > >>
> >> > > >>> >>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike
> >> > > Tutkowski <
> >> > > >>> >>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
> >> > > >>> >>>>>>>> >> >> > > > >>
> >> > > >>> >>>>>>>> >> >> > > > >>> agent.log looks OK to me:
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> >> > > >>> >>>>>>>> >> >> (main:null)
> >> > > >>> >>>>>>>> >> >> > > > Agent
> >> > > >>> >>>>>>>> >> >> > > > >>> started
> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> >> > > >>> >>>>>>>> >> >> (main:null)
> >> > > >>> >>>>>>>> >> >> > > > >>> Implementation Version is
> 4.3.0-SNAPSHOT
> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> >> > > >>> >>>>>>>> >> >> (main:null)
> >> > > >>> >>>>>>>> >> >> > > > >>> agent.properties found at
> >> > > >>> >>>>>>>> >> /etc/cloudstack/agent/agent.properties
> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> >> > > >>> >>>>>>>> >> >> (main:null)
> >> > > >>> >>>>>>>> >> >> > > > >>> Defaulting to using properties file for
> >> > > storage
> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
> >> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> >> > > >>> >>>>>>>> >> >> (main:null)
> >> > > >>> >>>>>>>> >> >> > > > >>> Defaulting to the constant time backoff
> >> > > algorithm
> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO
> >> > > >>>  [cloud.utils.LogUtils]
> >> > > >>> >>>>>>>> >> (main:null)
> >> > > >>> >>>>>>>> >> >> > > log4j
> >> > > >>> >>>>>>>> >> >> > > > >>> configuration found at
> >> > > >>> >>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO
> >> > >  [cloud.agent.Agent]
> >> > > >>> >>>>>>>> (main:null)
> >> > > >>> >>>>>>>> >> id
> >> > > >>> >>>>>>>> >> >> > is 3
> >> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > >  [resource.virtualnetwork.VirtualRoutingResource]
> >> > > >>> >>>>>>>> (main:null)
> >> > > >>> >>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to
> use:
> >> > > >>> >>>>>>>> >> >> scripts/network/domr/kvm
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > > >>> >>>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log
> was
> >> > > >>> >>>>>>>> important. This
> >> > > >>> >>>>>>>> >> seems
> >> > > >>> >>>>>>>> >> >> to
> >> > > >>> >>>>>>>> >> >> > > be
> >> > > >>> >>>>>>>> >> >> > > > a
> >> > > >>> >>>>>>>> >> >> > > > >>> problem, but I'm not sure what it might
> >> > > indicate:
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo
> /usr/sbin/service
> >> > > >>> >>>>>>>> cloudstack-agent
> >> > > >>> >>>>>>>> >> status
> >> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could
> not
> >> > > access
> >> > > >>> PID
> >> > > >>> >>>>>>>> file for
> >> > > >>> >>>>>>>> >> >> > > > >>> cloudstack-agent
> >> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo
> /usr/sbin/service
> >> > > >>> >>>>>>>> cloudstack-agent
> >> > > >>> >>>>>>>> >> start
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > > >>> >>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM,
> Marcus
> >> > > >>> Sorensen <
> >> > > >>> >>>>>>>> >> >> > > shadowsor@gmail.com
> >> > > >>> >>>>>>>> >> >> > > > >wrote:
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > > >>> >>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I
> thought
> >> it
> >> > > was
> >> > > >>> the
> >> > > >>> >>>>>>>> agent log
> >> > > >>> >>>>>>>> >> for
> >> > > >>> >>>>>>>> >> >> > > some
> >> > > >>> >>>>>>>> >> >> > > > >>>> reason. Is the agent started? That
> might
> >> be
> >> > > the
> >> > > >>> >>>>>>>> place to
> >> > > >>> >>>>>>>> >> look.
> >> > > >>> >>>>>>>> >> >> > There
> >> > > >>> >>>>>>>> >> >> > > > is
> >> > > >>> >>>>>>>> >> >> > > > >>>> an
> >> > > >>> >>>>>>>> >> >> > > > >>>> agent log for the agent and one for
> the
> >> > setup
> >> > > >>> when
> >> > > >>> >>>>>>>> it adds
> >> > > >>> >>>>>>>> >> the
> >> > > >>> >>>>>>>> >> >> > host,
> >> > > >>> >>>>>>>> >> >> > > > >>>> both
> >> > > >>> >>>>>>>> >> >> > > > >>>> in /var/log
> >> > > >>> >>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike
> >> Tutkowski"
> >> > <
> >> > > >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
> >> > > >>> >>>>>>>> >> >> > > > >>>> wrote:
> >> > > >>> >>>>>>>> >> >> > > > >>>>
> >> > > >>> >>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the
> IP
> >> > > address
> >> > > >>> or
> >> > > >>> >>>>>>>> the KVM
> >> > > >>> >>>>>>>> >> host?
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
> >> > > >>> >>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > I see this for my host Global
> Settings
> >> > > >>> parameter:
> >> > > >>> >>>>>>>> >> >> > > > >>>> > hostThe ip address of management
> >> > > >>> >>>>>>>> server192.168.233.1
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> /etc/cloudstack/agent/agent.properties
> >> > has
> >> > > a
> >> > > >>> >>>>>>>> >> >> host=192.168.233.1
> >> > > >>> >>>>>>>> >> >> > > > value.
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM,
> >> Marcus
> >> > > >>> Sorensen
> >> > > >>> >>>>>>>> <
> >> > > >>> >>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
> >> > > >>> >>>>>>>> >> >> > > > >>>> > >wrote:
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > The log says your mgmt server is
> >> > > >>> >>>>>>>> 192.168.233.10? But you
> >> > > >>> >>>>>>>> >> >> tried
> >> > > >>> >>>>>>>> >> >> > > to
> >> > > >>> >>>>>>>> >> >> > > > >>>> telnet
> >> > > >>> >>>>>>>> >> >> > > > >>>> > to
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough
> to
> >> > > change
> >> > > >>> >>>>>>>> that in
> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
> >> /etc/cloudstack/agent/agent.properties,
> >> > > but
> >> > > >>> you
> >> > > >>> >>>>>>>> may want
> >> > > >>> >>>>>>>> >> to
> >> > > >>> >>>>>>>> >> >> > edit
> >> > > >>> >>>>>>>> >> >> > > > the
> >> > > >>> >>>>>>>> >> >> > > > >>>> > config
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike
> >> > > Tutkowski" <
> >> > > >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > wrote:
> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > Here's what my
> >> > /etc/network/interfaces
> >> > > >>> file
> >> > > >>> >>>>>>>> looks
> >> > > >>> >>>>>>>> >> like, if
> >> > > >>> >>>>>>>> >> >> > > that
> >> > > >>> >>>>>>>> >> >> > > > >>>> is of
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0
> network
> >> > is
> >> > > the
> >> > > >>> >>>>>>>> NAT network
> >> > > >>> >>>>>>>> >> >> > VMware
> >> > > >>> >>>>>>>> >> >> > > > >>>> Fusion
> >> > > >>> >>>>>>>> >> >> > > > >>>> > set
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > up):
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto lo
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto eth0
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     post-up route add default gw
> >> > > >>> >>>>>>>> 192.168.233.2 metric 1
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >     pre-down route del default
> gw
> >> > > >>> >>>>>>>> 192.168.233.2
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08
> PM,
> >> > Mike
> >> > > >>> >>>>>>>> Tutkowski <
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com>
> >> wrote:
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > You appear to be correct.
> This is
> >> > > from
> >> > > >>> the
> >> > > >>> >>>>>>>> MS log
> >> > > >>> >>>>>>>> >> >> (below).
> >> > > >>> >>>>>>>> >> >> > > > >>>> Discovery
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > timed
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > out.
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would
> be.
> >> My
> >> > > >>> network
> >> > > >>> >>>>>>>> settings
> >> > > >>> >>>>>>>> >> >> > > shouldn't
> >> > > >>> >>>>>>>> >> >> > > > >>>> have
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > changed
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > since the last time I tried
> this.
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host
> >> from
> >> > > the
> >> > > >>> MS
> >> > > >>> >>>>>>>> host and
> >> > > >>> >>>>>>>> >> vice
> >> > > >>> >>>>>>>> >> >> > > > versa.
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick
> >> off
> >> > a
> >> > > VM
> >> > > >>> on
> >> > > >>> >>>>>>>> the KVM
> >> > > >>> >>>>>>>> >> host
> >> > > >>> >>>>>>>> >> >> > and
> >> > > >>> >>>>>>>> >> >> > > > >>>> ping from
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > it
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > to the MS host.
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS
> X
> >> > host
> >> > > >>> (also
> >> > > >>> >>>>>>>> running
> >> > > >>> >>>>>>>> >> the
> >> > > >>> >>>>>>>> >> >> CS
> >> > > >>> >>>>>>>> >> >> > > MS)
> >> > > >>> >>>>>>>> >> >> > > > >>>> to the
> >> > > >>> >>>>>>>> >> >> > > > >>>> > VM
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> >> > > >>> >>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >> > > >>> :ctx-6b28dc48)
> >> > > >>> >>>>>>>> Timeout,
> >> > > >>> >>>>>>>> >> to
> >> > > >>> >>>>>>>> >> >> > wait
> >> > > >>> >>>>>>>> >> >> > > > for
> >> > > >>> >>>>>>>> >> >> > > > >>>> the
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > host
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr,
> assuming
> >> it
> >> > is
> >> > > >>> failed
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
> >> > > >>> >>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >> > > >>> :ctx-6b28dc48)
> >> > > >>> >>>>>>>> Unable to
> >> > > >>> >>>>>>>> >> >> find
> >> > > >>> >>>>>>>> >> >> > > the
> >> > > >>> >>>>>>>> >> >> > > > >>>> server
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > resources at
> >> http://192.168.233.10
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
> >> > > >>> >>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >> > > >>> :ctx-6b28dc48)
> >> > > >>> >>>>>>>> Could not
> >> > > >>> >>>>>>>> >> >> find
> >> > > >>> >>>>>>>> >> >> > > > >>>> exception:
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > com.cloud.exception.DiscoveryException
> >> > > >>> in
> >> > > >>> >>>>>>>> error code
> >> > > >>> >>>>>>>> >> >> list
> >> > > >>> >>>>>>>> >> >> > > for
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > exceptions
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
> >> > > >>> >>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >> > > >>> :ctx-6b28dc48)
> >> > > >>> >>>>>>>> Exception:
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > com.cloud.exception.DiscoveryException:
> >> > > >>> >>>>>>>> Unable to add
> >> > > >>> >>>>>>>> >> >> the
> >> > > >>> >>>>>>>> >> >> > > host
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > at
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>>
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > >
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >>
> >> > > >>> >>>>>>>> >>
> >> > > >>> >>>>>>>>
> >> > > >>>
> >> > >
> >> >
> >>
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to
> telnet in
> >> > > from
> >> > > >>> my
> >> > > >>> >>>>>>>> KVM host to
> >> > > >>> >>>>>>>> >> >> the
> >> > > >>> >>>>>>>> >> >> > MS
> >> > > >>> >>>>>>>> >> >> > > > >>>> host's
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > 8250
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > port:
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet
> >> > > >>> 192.168.233.1
> >> > > >>> >>>>>>>> 8250
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > --
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer,
> >> > SolidFire
> >> > > >>> Inc.*
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > o: 303.746.7302
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > Advancing the way the world uses
> >> the
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > cloud<
> >> > > >>> >>>>>>>> >> http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > > *™*
> >> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > >
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>> > --
> >> > > >>> >>>>>>>> >> >> > > > >>>> > *Mike Tutkowski*
> >> > > >>> >>>>>>>> >> >> > > > >>>> > *Senior CloudStack Developer,
> SolidFire
> >> > > Inc.*
> >> > > >>> >>>>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> >> >> > > > >>>> > o: 303.746.7302
> >> > > >>> >>>>>>>> >> >> > > > >>>> > Advancing the way the world uses the
> >> > > >>> >>>>>>>> >> >> > > > >>>> > cloud<
> >> > > >>> >>>>>>>> http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>>>>>>> >> >> > > > >>>> > *™*
> >> > > >>> >>>>>>>> >> >> > > > >>>> >
> >> > > >>> >>>>>>>> >> >> > > > >>>>
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > > >>> >>>>>>>> >> >> > > > >>> --
> >> > > >>> >>>>>>>> >> >> > > > >>> *Mike Tutkowski*
> >> > > >>> >>>>>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire
> >> > Inc.*
> >> > > >>> >>>>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> >> >> > > > >>> o: 303.746.7302
> >> > > >>> >>>>>>>> >> >> > > > >>> Advancing the way the world uses the
> >> cloud<
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>>>>>>> >> >> > > > >>> *™*
> >> > > >>> >>>>>>>> >> >> > > > >>>
> >> > > >>> >>>>>>>> >> >> > > > >>
> >> > > >>> >>>>>>>> >> >> > > > >>
> >> > > >>> >>>>>>>> >> >> > > > >>
> >> > > >>> >>>>>>>> >> >> > > > >> --
> >> > > >>> >>>>>>>> >> >> > > > >> *Mike Tutkowski*
> >> > > >>> >>>>>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire
> >> Inc.*
> >> > > >>> >>>>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> >> >> > > > >> o: 303.746.7302
> >> > > >>> >>>>>>>> >> >> > > > >> Advancing the way the world uses the
> cloud<
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>>>>>>> >> >> > > > >> *™*
> >> > > >>> >>>>>>>> >> >> > > > >>
> >> > > >>> >>>>>>>> >> >> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >
> >> > > >>> >>>>>>>> >> >> > > > >
> >> > > >>> >>>>>>>> >> >> > > > > --
> >> > > >>> >>>>>>>> >> >> > > > > *Mike Tutkowski*
> >> > > >>> >>>>>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire
> >> Inc.*
> >> > > >>> >>>>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> >> >> > > > > o: 303.746.7302
> >> > > >>> >>>>>>>> >> >> > > > > Advancing the way the world uses the
> cloud<
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>>>>>>> >> >> > > > > *™*
> >> > > >>> >>>>>>>> >> >> > > > >
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > > > --
> >> > > >>> >>>>>>>> >> >> > > > *Mike Tutkowski*
> >> > > >>> >>>>>>>> >> >> > > > *Senior CloudStack Developer, SolidFire
> Inc.*
> >> > > >>> >>>>>>>> >> >> > > > e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> >> >> > > > o: 303.746.7302
> >> > > >>> >>>>>>>> >> >> > > > Advancing the way the world uses the
> >> > > >>> >>>>>>>> >> >> > > > cloud<
> >> > > >>> http://solidfire.com/solution/overview/?video=play
> >> > > >>> >>>>>>>> >
> >> > > >>> >>>>>>>> >> >> > > > *™*
> >> > > >>> >>>>>>>> >> >> > > >
> >> > > >>> >>>>>>>> >> >> > >
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >> > --
> >> > > >>> >>>>>>>> >> >> > *Mike Tutkowski*
> >> > > >>> >>>>>>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >>> >>>>>>>> >> >> > e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> >> >> > o: 303.746.7302
> >> > > >>> >>>>>>>> >> >> > Advancing the way the world uses the
> >> > > >>> >>>>>>>> >> >> > cloud<
> >> > > http://solidfire.com/solution/overview/?video=play
> >> > > >>> >
> >> > > >>> >>>>>>>> >> >> > *™*
> >> > > >>> >>>>>>>> >> >> >
> >> > > >>> >>>>>>>> >> >>
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> >
> >> > > >>> >>>>>>>> >> > --
> >> > > >>> >>>>>>>> >> > *Mike Tutkowski*
> >> > > >>> >>>>>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >>> >>>>>>>> >> > e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> >> > o: 303.746.7302
> >> > > >>> >>>>>>>> >> > Advancing the way the world uses the
> >> > > >>> >>>>>>>> >> > cloud<
> >> > http://solidfire.com/solution/overview/?video=play
> >> > > >
> >> > > >>> >>>>>>>> >> > *™*
> >> > > >>> >>>>>>>> >>
> >> > > >>> >>>>>>>> >
> >> > > >>> >>>>>>>> >
> >> > > >>> >>>>>>>> >
> >> > > >>> >>>>>>>> > --
> >> > > >>> >>>>>>>> > *Mike Tutkowski*
> >> > > >>> >>>>>>>> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >>> >>>>>>>> > e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>>> > o: 303.746.7302
> >> > > >>> >>>>>>>> > Advancing the way the world uses the
> >> > > >>> >>>>>>>> > cloud<
> >> http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>>>>>>> > *™*
> >> > > >>> >>>>>>>>
> >> > > >>> >>>>>>>
> >> > > >>> >>>>>>>
> >> > > >>> >>>>>>>
> >> > > >>> >>>>>>> --
> >> > > >>> >>>>>>> *Mike Tutkowski*
> >> > > >>> >>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >>> >>>>>>> e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>>> o: 303.746.7302
> >> > > >>> >>>>>>> Advancing the way the world uses the cloud<
> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>>>>>> *™*
> >> > > >>> >>>>>>>
> >> > > >>> >>>>>>
> >> > > >>> >>>>>>
> >> > > >>> >>>>>>
> >> > > >>> >>>>>> --
> >> > > >>> >>>>>> *Mike Tutkowski*
> >> > > >>> >>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >>> >>>>>> e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>>> o: 303.746.7302
> >> > > >>> >>>>>> Advancing the way the world uses the cloud<
> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>>>>> *™*
> >> > > >>> >>>>>>
> >> > > >>> >>>>>
> >> > > >>> >>>>>
> >> > > >>> >>>>>
> >> > > >>> >>>>> --
> >> > > >>> >>>>> *Mike Tutkowski*
> >> > > >>> >>>>> *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >>> >>>>> e: mike.tutkowski@solidfire.com
> >> > > >>> >>>>> o: 303.746.7302
> >> > > >>> >>>>> Advancing the way the world uses the cloud<
> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>>>> *™*
> >> > > >>> >>>>>
> >> > > >>> >>>>
> >> > > >>> >>>>
> >> > > >>> >>>>
> >> > > >>> >>>> --
> >> > > >>> >>>> *Mike Tutkowski*
> >> > > >>> >>>> *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >>> >>>> e: mike.tutkowski@solidfire.com
> >> > > >>> >>>> o: 303.746.7302
> >> > > >>> >>>> Advancing the way the world uses the cloud<
> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>>> *™*
> >> > > >>> >>>>
> >> > > >>> >>>
> >> > > >>> >>>
> >> > > >>> >>>
> >> > > >>> >>> --
> >> > > >>> >>> *Mike Tutkowski*
> >> > > >>> >>> *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >>> >>> e: mike.tutkowski@solidfire.com
> >> > > >>> >>> o: 303.746.7302
> >> > > >>> >>> Advancing the way the world uses the cloud<
> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >>> *™*
> >> > > >>> >>>
> >> > > >>> >>
> >> > > >>> >>
> >> > > >>> >>
> >> > > >>> >> --
> >> > > >>> >> *Mike Tutkowski*
> >> > > >>> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >>> >> e: mike.tutkowski@solidfire.com
> >> > > >>> >> o: 303.746.7302
> >> > > >>> >> Advancing the way the world uses the cloud<
> >> > > >>> http://solidfire.com/solution/overview/?video=play>
> >> > > >>> >> *™*
> >> > > >>> >>
> >> > > >>> >
> >> > > >>> >
> >> > > >>> >
> >> > > >>> > --
> >> > > >>> > *Mike Tutkowski*
> >> > > >>> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >>> > e: mike.tutkowski@solidfire.com
> >> > > >>> > o: 303.746.7302
> >> > > >>> > Advancing the way the world uses the
> >> > > >>> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > > >>> > *™*
> >> > > >>>
> >> > > >>
> >> > > >>
> >> > > >>
> >> > > >> --
> >> > > >> *Mike Tutkowski*
> >> > > >> *Senior CloudStack Developer, SolidFire Inc.*
> >> > > >> e: mike.tutkowski@solidfire.com
> >> > > >> o: 303.746.7302
> >> > > >> Advancing the way the world uses the cloud<
> >> > > http://solidfire.com/solution/overview/?video=play>
> >> > > >> *™*
> >> > > >>
> >> > > >
> >> > > >
> >> > > >
> >> > > > --
> >> > > > *Mike Tutkowski*
> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> > > > e: mike.tutkowski@solidfire.com
> >> > > > o: 303.746.7302
> >> > > > Advancing the way the world uses the
> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > > > *™*
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > *Mike Tutkowski*
> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > e: mike.tutkowski@solidfire.com
> >> > o: 303.746.7302
> >> > Advancing the way the world uses the
> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > *™*
> >> >
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
I still haven't seen your agent.properties. This would tell me if your
setup succeeded.  At this point my best guess is that
"cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
--prvNic=cloudbr0 --guestNic=cloudbr0" failed in some fashion. You can
run it manually at any time to see. Once that is run, then the agent
should come up. The resource name in your code is pulled from
agent.properties (I believe) and is usually
"resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource".

On Tue, Sep 24, 2013 at 12:12 AM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> I've been narrowing it down by putting in a million print-to-log statements.
>
> Do you know if it is a problem that value ends up null (in a constructor
> for Agent)?
>
> String value = _shell.getPersistentProperty(getResourceName(), "id");
>
> In that same constructor, this line never finishes:
>
> if (!_resource.configure(getResourceName(), params)) {
>
> I need to dig into the configure method to see what's going on there.
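
A quick, standalone way to see exactly what _shell.getPersistentProperty is reading from
is to load the same file with java.util.Properties. This is only a debugging sketch; the
key names are the ones already discussed in this thread:

import java.io.FileInputStream;
import java.util.Properties;

public class DumpAgentProperties {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        try (FileInputStream in =
                new FileInputStream("/etc/cloudstack/agent/agent.properties")) {
            props.load(in);
        }
        // "id" being absent (null) on a host that has never registered may be expected;
        // "resource", "host" and "guid" should normally be present after setup.
        for (String key : new String[] { "id", "resource", "host", "guid" }) {
            System.out.println(key + " = " + props.getProperty(key));
        }
    }
}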
>
>
> On Mon, Sep 23, 2013 at 5:45 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> It might be a CentOS-specific thing. These are created by the init scripts.
>> Check your agent init script on Ubuntu and see if you can decipher where
>> it sends stdout.
>> On Sep 23, 2013 5:21 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>
>> > Weird...no such file exists.
>> >
>> >
>> > On Mon, Sep 23, 2013 at 4:54 PM, Marcus Sorensen <shadowsor@gmail.com
>> > >wrote:
>> >
>> > > maybe cloudstack-agent.out
>> > >
>> > > On Mon, Sep 23, 2013 at 4:44 PM, Mike Tutkowski
>> > > <mi...@solidfire.com> wrote:
>> > > > OK, so, nothing is screaming out in the logs. I did notice the
>> > following:
>> > > >
>> > > > From setup.log:
>> > > >
>> > > > DEBUG:root:execute:apparmor_status |grep libvirt
>> > > >
>> > > > DEBUG:root:Failed to execute:
>> > > >
>> > > >
>> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
>> > > >
>> > > > DEBUG:root:Failed to execute: * could not access PID file for
>> > > > cloudstack-agent
>> > > >
>> > > >
>> > > > This is the final line in this log file:
>> > > >
>> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
>> > > >
>> > > >
>> > > > This is from agent.log:
>> > > >
>> > > > 2013-09-23 15:30:55,549 DEBUG [cloud.agent.AgentShell] (main:null)
>> > > Checking
>> > > > to see if agent.pid exists.
>> > > >
>> > > > 2013-09-23 15:30:55,655 DEBUG [cloud.utils.ProcessUtil] (main:null)
>> > > > Executing: bash -c echo $PPID
>> > > >
>> > > > 2013-09-23 15:30:55,742 DEBUG [cloud.utils.ProcessUtil] (main:null)
>> > > > Execution is successful.
>> > > >
>> > > > 2013-09-23 15:30:56,000 INFO  [cloud.agent.Agent] (main:null) id is
>> > > >
>> > > > 2013-09-23 15:30:56,000 DEBUG [cloud.resource.ServerResourceBase]
>> > > > (main:null) Retrieving network interface: cloudbr0
>> > > >
>> > > > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
>> > > > (main:null) Retrieving network interface: cloudbr0
>> > > >
>> > > > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
>> > > > (main:null) Retrieving network interface: null
>> > > >
>> > > > 2013-09-23 15:30:56,017 DEBUG [cloud.resource.ServerResourceBase]
>> > > > (main:null) Retrieving network interface: null
>> > > >
>> > > >
>> > > > The following kinds of lines are repeated for a bunch of different
>> .sh
>> > > > files. I think they often end up being found here:
>> > > > /usr/share/cloudstack-common/scripts/network/domr, so this is
>> probably
>> > > not
>> > > > an issue.
>> > > >
>> > > >
>> > > > 2013-09-23 15:30:56,111 DEBUG [utils.script.Script] (main:null)
>> Looking
>> > > for
>> > > > call_firewall.sh in the classpath
>> > > >
>> > > > 2013-09-23 15:30:56,112 DEBUG [utils.script.Script] (main:null)
>> System
>> > > > resource: null
>> > > >
>> > > > 2013-09-23 15:30:56,113 DEBUG [utils.script.Script] (main:null)
>> > Classpath
>> > > > resource: null
>> > > >
>> > > > 2013-09-23 15:30:56,123 DEBUG [utils.script.Script] (main:null)
>> Looking
>> > > for
>> > > > call_firewall.sh
>> > > >
>> > > >
>> > > > Is there a log file for the Java code that I could write stuff out to
>> > and
>> > > > see how far we get?
>> > > >
>> > > >
>> > > > On Mon, Sep 23, 2013 at 3:17 PM, Mike Tutkowski <
>> > > > mike.tutkowski@solidfire.com> wrote:
>> > > >
>> > > >> Thanks, Marcus
>> > > >>
>> > > >> I've been developing on Windows for most of my time, so a bunch of
>> > these
>> > > >> Linux-type commands are new to me and I don't always interpret the
>> > > output
>> > > >> correctly. Getting there. :)
>> > > >>
>> > > >>
>> > > >> On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> > > >wrote:
>> > > >>
>> > > >>> Nope, not running. That's just your grep process. It would look
>> like:
>> > > >>>
>> > > >>> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
>> > > >>>
>> > > >>>
>> > >
>> >
>> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
>> > > >>>
>> > > >>> Your agent log should tell you why it failed to start if you set it
>> > in
>> > > >>> debug and try to start... or maybe cloudstack-agent.out if it
>> doesn't
>> > > >>> get far enough (say it's missing a class or something and can't
>> > > >>> start).
>> > > >>>
>> > > >>> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
>> > > >>> <mi...@solidfire.com> wrote:
>> > > >>> > Looks like it's running, though:
>> > > >>> >
>> > > >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>> > > >>> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep --color=auto
>> > > jsvc
>> > > >>> >
>> > > >>> >
>> > > >>> >
>> > > >>> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
>> > > >>> > mike.tutkowski@solidfire.com> wrote:
>> > > >>> >
>> > > >>> >> Hey Marcus,
>> > > >>> >>
>> > > >>> >> Maybe you could give me a better idea of what the "flow" is when
>> > > >>> adding a
>> > > >>> >> KVM host.
>> > > >>> >>
>> > > >>> >> It looks like we SSH into the potential KVM host and execute a
>> > > startup
>> > > >>> >> script (giving it necessary info about the cloud and the
>> > management
>> > > >>> server
>> > > >>> >> it should talk to).
>> > > >>> >>
>> > > >>> >> After this, is the Java VM started?
>> > > >>> >>
>> > > >>> >> After a reboot, I assume the JVM is started automatically?
>> > > >>> >>
>> > > >>> >> How do you debug your KVM-side Java code?
>> > > >>> >>
>> > > >>> >> Been looking through the logs and nothing obvious sticks out. I
>> > will
>> > > >>> have
>> > > >>> >> another look.
>> > > >>> >>
>> > > >>> >> Thanks
>> > > >>> >>
>> > > >>> >>
>> > > >>> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
>> > > >>> >> mike.tutkowski@solidfire.com> wrote:
>> > > >>> >>
>> > > >>> >>> Hey Marcus,
>> > > >>> >>>
>> > > >>> >>> I've been investigating my issue with not being able to add a
>> KVM
>> > > >>> host to
>> > > >>> >>> CS.
>> > > >>> >>>
>> > > >>> >>> For what it's worth, this comes back successful:
>> > > >>> >>>
>> > > >>> >>> SSHCmdHelper.sshExecuteCmd(sshConnection,
>> "cloudstack-setup-agent
>> > > " +
>> > > >>> >>> parameters, 3);
>> > > >>> >>>
>> > > >>> >>> This is what the command looks like:
>> > > >>> >>>
>> > > >>> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
>> > > >>> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>> > > >>> --prvNic=cloudbr0
>> > > >>> >>> --guestNic=cloudbr0
>> > > >>> >>>
>> > > >>> >>> The problem is this method in LibvirtServerDiscoverer never
>> > finds a
>> > > >>> >>> matching host in the DB:
>> > > >>> >>>
>> > > >>> >>> waitForHostConnect(long dcId, long podId, long clusterId,
>> String
>> > > guid)
>> > > >>> >>>
>> > > >>> >>> I assume once the KVM host is up and running that it's supposed
>> > to
>> > > >>> call
>> > > >>> >>> into the CS MS so the DB can be updated as such?
>> > > >>> >>>
>> > > >>> >>> If so, the problem must be on the KVM side.
>> > > >>> >>>
>> > > >>> >>> I did run this again (from the KVM host) to see if the
>> connection
>> > > was
>> > > >>> in
>> > > >>> >>> place:
>> > > >>> >>>
>> > > >>> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>> > > >>> >>>
>> > > >>> >>> Trying 192.168.233.1...
>> > > >>> >>>
>> > > >>> >>> Connected to 192.168.233.1.
>> > > >>> >>>
>> > > >>> >>> Escape character is '^]'.
>> > > >>> >>> So that looks good.
>> > > >>> >>>
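
The same reachability check can be scripted when telnet isn't handy; a trivial sketch
using only the JDK (the address and port are simply the ones already used above):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "192.168.233.1";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 8250;
        try (Socket s = new Socket()) {
            // A completed connect means something is listening on the management port.
            s.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("Reached " + host + ":" + port);
        }
    }
}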
>> > > >>> >>> I turned on more info in the debug log, but nothing obvious
>> jumps
>> > > out
>> > > >>> as
>> > > >>> >>> of yet.
>> > > >>> >>>
>> > > >>> >>> If you have any thoughts on this, please shoot them my way. :)
>> > > >>> >>>
>> > > >>> >>> Thanks!
>> > > >>> >>>
>> > > >>> >>>
>> > > >>> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
>> > > >>> >>> mike.tutkowski@solidfire.com> wrote:
>> > > >>> >>>
>> > > >>> >>>> First step is for me to get this working for KVM, though. :)
>> > > >>> >>>>
>> > > >>> >>>> Once I do that, I can perhaps make modifications to the
>> storage
>> > > >>> >>>> framework and hypervisor plug-ins to refactor the logic and
>> > such.
>> > > >>> >>>>
>> > > >>> >>>>
>> > > >>> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
>> > > >>> >>>> mike.tutkowski@solidfire.com> wrote:
>> > > >>> >>>>
>> > > >>> >>>>> Same would work for KVM.
>> > > >>> >>>>>
>> > > >>> >>>>> If CreateCommand and DestroyCommand were called at the
>> > > appropriate
>> > > >>> >>>>> times by the storage framework, I could move my connect and
>> > > >>> disconnect
>> > > >>> >>>>> logic out of the attach/detach logic.
>> > > >>> >>>>>
>> > > >>> >>>>>
>> > > >>> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
>> > > >>> >>>>> mike.tutkowski@solidfire.com> wrote:
>> > > >>> >>>>>
>> > > >>> >>>>>> Conversely, if the storage framework called the
>> DestroyCommand
>> > > for
>> > > >>> >>>>>> managed storage after the DetachCommand, then I could have
>> had
>> > > my
>> > > >>> remove
>> > > >>> >>>>>> SR/datastore logic placed in the DestroyCommand handling
>> > rather
>> > > >>> than in the
>> > > >>> >>>>>> DetachCommand handling.
>> > > >>> >>>>>>
>> > > >>> >>>>>>
>> > > >>> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>> > > >>> >>>>>> mike.tutkowski@solidfire.com> wrote:
>> > > >>> >>>>>>
>> > > >>> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
>> > > >>> >>>>>>>
>> > > >>> >>>>>>> The initial approach that was discussed during 4.2 was for
>> me
>> > > to
>> > > >>> >>>>>>> modify the attach/detach logic only in the XenServer and
>> > VMware
>> > > >>> hypervisor
>> > > >>> >>>>>>> plug-ins.
>> > > >>> >>>>>>>
>> > > >>> >>>>>>> Now that I think about it more, though, I kind of would
>> have
>> > > >>> liked to
>> > > >>> >>>>>>> have the storage framework send a CreateCommand to the
>> > > hypervisor
>> > > >>> before
>> > > >>> >>>>>>> sending the AttachCommand if the storage in question was
>> > > managed.
>> > > >>> >>>>>>>
>> > > >>> >>>>>>> Then I could have created my SR/datastore in the
>> > CreateCommand
>> > > and
>> > > >>> >>>>>>> the AttachCommand would have had the SR/datastore that it
>> was
>> > > >>> always
>> > > >>> >>>>>>> expecting (and I wouldn't have had to create the
>> SR/datastore
>> > > in
>> > > >>> the
>> > > >>> >>>>>>> AttachCommand).
>> > > >>> >>>>>>>
>> > > >>> >>>>>>>
>> > > >>> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
>> > > >>> >>>>>>> shadowsor@gmail.com> wrote:
>> > > >>> >>>>>>>
>> > > >>> >>>>>>>> Yeah, I think it probably is as well, but I figured you'd
>> be
>> > > in a
>> > > >>> >>>>>>>> better position to tell.
>> > > >>> >>>>>>>>
>> > > >>> >>>>>>>> I see that copyAsync is unsupported in your current 4.2
>> > > driver,
>> > > >>> does
>> > > >>> >>>>>>>> that mean that there's no template support? Or is it some
>> > > other
>> > > >>> call
>> > > >>> >>>>>>>> that does templating now? I'm still getting up to speed on
>> > all
>> > > >>> of the
>> > > >>> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
>> > > >>> >>>>>>>> LibvirtComputingResource, since that's the only place
>> > > >>> >>>>>>>> createPhysicalDisk is called, and it occurred to me that
>> > > >>> >>>>>>>> CreateCommand
>> > > >>> >>>>>>>> might be skipped altogether when utilizing storage
>> plugins.
>> > > >>> >>>>>>>>
>> > > >>> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>> > > >>> >>>>>>>> <mi...@solidfire.com> wrote:
>> > > >>> >>>>>>>> > That's an interesting comment, Marcus.
>> > > >>> >>>>>>>> >
>> > > >>> >>>>>>>> > It was my intent that it should work with any CloudStack
>> > > >>> "managed"
>> > > >>> >>>>>>>> storage
>> > > >>> >>>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I
>> > > wrote
>> > > >>> the
>> > > >>> >>>>>>>> code so
>> > > >>> >>>>>>>> > CHAP didn't have to be used.
>> > > >>> >>>>>>>> >
>> > > >>> >>>>>>>> > As I'm doing my testing, I can try to think about
>> whether
>> > > it is
>> > > >>> >>>>>>>> generic
>> > > >>> >>>>>>>> > enough to keep those names or not.
>> > > >>> >>>>>>>> >
>> > > >>> >>>>>>>> > My expectation is that it is generic enough.
>> > > >>> >>>>>>>> >
>> > > >>> >>>>>>>> >
>> > > >>> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
>> > > >>> >>>>>>>> shadowsor@gmail.com>wrote:
>> > > >>> >>>>>>>> >
>> > > >>> >>>>>>>> >> I added a comment to your diff. In general I think it
>> > looks
>> > > >>> good,
>> > > >>> >>>>>>>> >> though I obviously can't vouch for whether or not it
>> will
>> > > >>> work.
>> > > >>> >>>>>>>> One
>> > > >>> >>>>>>>> >> thing I do have reservations about is the adaptor/pool
>> > > >>> naming. If
>> > > >>> >>>>>>>> you
>> > > >>> >>>>>>>> >> think the code is generic enough that it will work for
>> > > anyone
>> > > >>> who
>> > > >>> >>>>>>>> does
>> > > >>> >>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if
>> > > there's
>> > > >>> >>>>>>>> anything
>> > > >>> >>>>>>>> >> about it that's specific to YOUR iscsi target or how it
>> > > likes
>> > > >>> to
>> > > >>> >>>>>>>> be
>> > > >>> >>>>>>>> >> treated then I'd say that they should be named
>> something
>> > > less
>> > > >>> >>>>>>>> generic
>> > > >>> >>>>>>>> >> than iScsiAdmStorage.
>> > > >>> >>>>>>>> >>
>> > > >>> >>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>> > > >>> >>>>>>>> >> <mi...@solidfire.com> wrote:
>> > > >>> >>>>>>>> >> > Great - thanks!
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> > Just to give you an overview of what my code does
>> (for
>> > > when
>> > > >>> you
>> > > >>> >>>>>>>> get a
>> > > >>> >>>>>>>> >> > chance to review it):
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> > SolidFireHostListener is registered in
>> > > >>> >>>>>>>> SolidfirePrimaryDataStoreProvider.
>> > > >>> >>>>>>>> >> > Its hostConnect method is invoked when a host
>> connects
>> > > with
>> > > >>> the
>> > > >>> >>>>>>>> CS MS. If
>> > > >>> >>>>>>>> >> > the host is running KVM, the listener sends a
>> > > >>> >>>>>>>> ModifyStoragePoolCommand to
>> > > >>> >>>>>>>> >> > the host. This logic was based off of
>> > > DefaultHostListener.
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> > The handling of ModifyStoragePoolCommand is
>> unchanged.
>> > It
>> > > >>> >>>>>>>> invokes
>> > > >>> >>>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
>> > > >>> >>>>>>>> KVMStoragePoolManager
>> > > >>> >>>>>>>> >> > asks for an adaptor and finds my new one:
>> > > >>> >>>>>>>> iScsiAdmStorageAdaptor (which
>> > > >>> >>>>>>>> >> was
>> > > >>> >>>>>>>> >> > registered in the constructor for
>> KVMStoragePoolManager
>> > > >>> under
>> > > >>> >>>>>>>> the key of
>> > > >>> >>>>>>>> >> > StoragePoolType.Iscsi.toString()).
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes
>> an
>> > > >>> instance
>> > > >>> >>>>>>>> of
>> > > >>> >>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns
>> the
>> > > >>> pointer
>> > > >>> >>>>>>>> to the
>> > > >>> >>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the
>> > > UUID
>> > > >>> of
>> > > >>> >>>>>>>> the storage
>> > > >>> >>>>>>>> >> > pool.
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> > When a volume is attached, createPhysicalDisk is
>> > invoked
>> > > for
>> > > >>> >>>>>>>> managed
>> > > >>> >>>>>>>> >> > storage rather than getPhysicalDisk.
>> createPhysicalDisk
>> > > uses
>> > > >>> >>>>>>>> iscsiadm to
>> > > >>> >>>>>>>> >> > establish the iSCSI connection to the volume on the
>> SAN
>> > > and
>> > > >>> a
>> > > >>> >>>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach
>> > > logic
>> > > >>> that
>> > > >>> >>>>>>>> follows.
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked
>> > > with
>> > > >>> the
>> > > >>> >>>>>>>> IQN of the
>> > > >>> >>>>>>>> >> > volume if the storage pool in question is managed
>> > > storage.
>> > > >>> >>>>>>>> Otherwise, the
>> > > >>> >>>>>>>> >> > normal vol.getPath() is used.
>> > > >>> >>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>> > > >>> >>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used
>> in
>> > > the
>> > > >>> >>>>>>>> detach logic.
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> > Once the volume has been detached,
>> > > >>> >>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
>> > > >>> >>>>>>>> >> > is invoked if the storage pool is managed.
>> > > >>> deletePhysicalDisk
>> > > >>> >>>>>>>> removes the
>> > > >>> >>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>> > > >>> >>>>>>>> shadowsor@gmail.com
>> > > >>> >>>>>>>> >> >wrote:
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> >> Its the log4j properties file in
>> /etc/cloudstack/agent
>> > > >>> change
>> > > >>> >>>>>>>> all INFO
>> > > >>> >>>>>>>> >> to
>> > > >>> >>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you
>> > can
>> > > >>> tail
>> > > >>> >>>>>>>> the log
>> > > >>> >>>>>>>> >> when
>> > > >>> >>>>>>>> >> >> you try to start the service, or maybe it will spit
>> > > >>> something
>> > > >>> >>>>>>>> out into
>> > > >>> >>>>>>>> >> one
>> > > >>> >>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
>> > > >>> >>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>> > > >>> >>>>>>>> mike.tutkowski@solidfire.com
>> > > >>> >>>>>>>> >> >
>> > > >>> >>>>>>>> >> >> wrote:
>> > > >>> >>>>>>>> >> >>
>> > > >>> >>>>>>>> >> >> > This is how I've been trying to query for the
>> status
>> > > of
>> > > >>> the
>> > > >>> >>>>>>>> service (I
>> > > >>> >>>>>>>> >> >> > assume it could be started this way, as well, by
>> > > changing
>> > > >>> >>>>>>>> "status" to
>> > > >>> >>>>>>>> >> >> > "start" or "restart"?):
>> > > >>> >>>>>>>> >> >> >
>> > > >>> >>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>> > > >>> >>>>>>>> /usr/sbin/service
>> > > >>> >>>>>>>> >> >> > cloudstack-agent status
>> > > >>> >>>>>>>> >> >> >
>> > > >>> >>>>>>>> >> >> > I get this back:
>> > > >>> >>>>>>>> >> >> >
>> > > >>> >>>>>>>> >> >> > Failed to execute: * could not access PID file for
>> > > >>> >>>>>>>> cloudstack-agent
>> > > >>> >>>>>>>> >> >> >
>> > > >>> >>>>>>>> >> >> > I've made a bunch of code changes recently,
>> though,
>> > > so I
>> > > >>> >>>>>>>> think I'm
>> > > >>> >>>>>>>> >> going
>> > > >>> >>>>>>>> >> >> to
>> > > >>> >>>>>>>> >> >> > rebuild and redeploy everything.
>> > > >>> >>>>>>>> >> >> >
>> > > >>> >>>>>>>> >> >> > The debug info sounds helpful. Where can I set
>> > > >>> enable.debug?
>> > > >>> >>>>>>>> >> >> >
>> > > >>> >>>>>>>> >> >> > Thanks, Marcus!
>> > > >>> >>>>>>>> >> >> >
>> > > >>> >>>>>>>> >> >> >
>> > > >>> >>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>> > > >>> >>>>>>>> shadowsor@gmail.com
>> > > >>> >>>>>>>> >> >> > >wrote:
>> > > >>> >>>>>>>> >> >> >
>> > > >>> >>>>>>>> >> >> > > OK, will check it out in the next few days. As
>> > > >>> mentioned,
>> > > >>> >>>>>>>> you can
>> > > >>> >>>>>>>> >> set
>> > > >>> >>>>>>>> >> >> up
>> > > >>> >>>>>>>> >> >> > > your Ubuntu vm as the management server as well
>> if
>> > > all
>> > > >>> >>>>>>>> else fails.
>> > > >>> >>>>>>>> >>  If
>> > > >>> >>>>>>>> >> >> > you
>> > > >>> >>>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM
>> > > host,
>> > > >>> then
>> > > >>> >>>>>>>> you need
>> > > >>> >>>>>>>> >> to
>> > > >>> >>>>>>>> >> >> > > enable.debug on the agent. It won't run without
>> > > >>> >>>>>>>> complaining loudly
>> > > >>> >>>>>>>> >> if
>> > > >>> >>>>>>>> >> >> it
>> > > >>> >>>>>>>> >> >> > > can't get to the mgmt server, and I didn't see
>> > that
>> > > in
>> > > >>> >>>>>>>> your agent
>> > > >>> >>>>>>>> >> log,
>> > > >>> >>>>>>>> >> >> so
>> > > >>> >>>>>>>> >> >> > > perhaps its not running. I assume you know how
>> to
>> > > >>> >>>>>>>> stop/start the
>> > > >>> >>>>>>>> >> agent
>> > > >>> >>>>>>>> >> >> on
>> > > >>> >>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
>> > > >>> >>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>> > > >>> >>>>>>>> >> >> mike.tutkowski@solidfire.com>
>> > > >>> >>>>>>>> >> >> > > wrote:
>> > > >>> >>>>>>>> >> >> > >
>> > > >>> >>>>>>>> >> >> > > > Hey Marcus,
>> > > >>> >>>>>>>> >> >> > > >
>> > > >>> >>>>>>>> >> >> > > > I haven't yet been able to test my new code,
>> > but I
>> > > >>> >>>>>>>> thought you
>> > > >>> >>>>>>>> >> would
>> > > >>> >>>>>>>> >> >> > be a
>> > > >>> >>>>>>>> >> >> > > > good person to ask to review it:
>> > > >>> >>>>>>>> >> >> > > >
>> > > >>> >>>>>>>> >> >> > > >

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
I've been narrowing it down by putting in a million print-to-log statements.

Do you know if it is a problem that value ends up null (in a constructor
for Agent)?

String value = _shell.getPersistentProperty(getResourceName(), "id");

In that same constructor, this line never finishes:

if (!_resource.configure(getResourceName(), params)) {

I need to dig into the configure method to see what's going on there.
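(For what it's worth, a JVM thread dump is a quick way to see exactly where
configure() is blocked. This is only a sketch -- it assumes the agent is running
under jsvc, as the ps output elsewhere in this thread suggests, and that its
stdout ends up in whatever .out file the init script points at:

  pgrep -lf jsvc                 # find the agent's jsvc/java process
  sudo kill -3 <jsvc-pid>        # SIGQUIT: the JVM writes a full thread dump to its stdout
  sudo jstack <jsvc-pid>         # alternative, if a JDK is installed

The main thread's stack should show which call inside configure() never returns.)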


On Mon, Sep 23, 2013 at 5:45 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> It might be a CentOS-specific thing. These are created by the init scripts.
> Check your agent init script on Ubuntu and see if you can decipher where
> it sends stdout.
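(A sketch of what to look for -- assuming the DEB init script drives the agent
through jsvc, the stdout/stderr targets are usually passed as -outfile/-errfile:

  grep -nE 'outfile|errfile|\.out' /etc/init.d/cloudstack-agent

On that kind of packaging they typically point at something like
/var/log/cloudstack/agent/cloudstack-agent.out, but the exact path is whatever
the script says.)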
> On Sep 23, 2013 5:21 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > Weird...no such file exists.
> >
> >
> > On Mon, Sep 23, 2013 at 4:54 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> >
> > > maybe cloudstack-agent.out
> > >
> > > On Mon, Sep 23, 2013 at 4:44 PM, Mike Tutkowski
> > > <mi...@solidfire.com> wrote:
> > > > OK, so, nothing is screaming out in the logs. I did notice the
> > following:
> > > >
> > > > From setup.log:
> > > >
> > > > DEBUG:root:execute:apparmor_status |grep libvirt
> > > >
> > > > DEBUG:root:Failed to execute:
> > > >
> > > >
> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
> > > >
> > > > DEBUG:root:Failed to execute: * could not access PID file for
> > > > cloudstack-agent
> > > >
> > > >
> > > > This is the final line in this log file:
> > > >
> > > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
> > > >
> > > >
> > > > This is from agent.log:
> > > >
> > > > 2013-09-23 15:30:55,549 DEBUG [cloud.agent.AgentShell] (main:null)
> > > Checking
> > > > to see if agent.pid exists.
> > > >
> > > > 2013-09-23 15:30:55,655 DEBUG [cloud.utils.ProcessUtil] (main:null)
> > > > Executing: bash -c echo $PPID
> > > >
> > > > 2013-09-23 15:30:55,742 DEBUG [cloud.utils.ProcessUtil] (main:null)
> > > > Execution is successful.
> > > >
> > > > 2013-09-23 15:30:56,000 INFO  [cloud.agent.Agent] (main:null) id is
> > > >
> > > > 2013-09-23 15:30:56,000 DEBUG [cloud.resource.ServerResourceBase]
> > > > (main:null) Retrieving network interface: cloudbr0
> > > >
> > > > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
> > > > (main:null) Retrieving network interface: cloudbr0
> > > >
> > > > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
> > > > (main:null) Retrieving network interface: null
> > > >
> > > > 2013-09-23 15:30:56,017 DEBUG [cloud.resource.ServerResourceBase]
> > > > (main:null) Retrieving network interface: null
> > > >
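(Side note, purely as a sanity-check sketch: the cloudbr0 lookups above can be
confirmed on the host itself, and the "null" entries may just be the optional
private/storage interfaces that were never defined, so this only matters if
cloudbr0 itself is in doubt:

  brctl show cloudbr0
  ip addr show cloudbr0

)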
> > > >
> > > > The following kinds of lines are repeated for a bunch of different
> .sh
> > > > files. I think they often end up being found here:
> > > > /usr/share/cloudstack-common/scripts/network/domr, so this is
> probably
> > > not
> > > > an issue.
> > > >
> > > >
> > > > 2013-09-23 15:30:56,111 DEBUG [utils.script.Script] (main:null)
> Looking
> > > for
> > > > call_firewall.sh in the classpath
> > > >
> > > > 2013-09-23 15:30:56,112 DEBUG [utils.script.Script] (main:null)
> System
> > > > resource: null
> > > >
> > > > 2013-09-23 15:30:56,113 DEBUG [utils.script.Script] (main:null)
> > Classpath
> > > > resource: null
> > > >
> > > > 2013-09-23 15:30:56,123 DEBUG [utils.script.Script] (main:null)
> Looking
> > > for
> > > > call_firewall.sh
> > > >
> > > >
> > > > Is there a log file for the Java code that I could write stuff out to
> > and
> > > > see how far we get?
> > > >
> > > >
> > > > On Mon, Sep 23, 2013 at 3:17 PM, Mike Tutkowski <
> > > > mike.tutkowski@solidfire.com> wrote:
> > > >
> > > >> Thanks, Marcus
> > > >>
> > > >> I've been developing on Windows for most of my time, so a bunch of
> > these
> > > >> Linux-type commands are new to me and I don't always interpret the
> > > output
> > > >> correctly. Getting there. :)
> > > >>
> > > >>
> > > >> On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <
> shadowsor@gmail.com
> > > >wrote:
> > > >>
> > > >>> Nope, not running. That's just your grep process. It would look
> like:
> > > >>>
> > > >>> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
> > > >>>
> > > >>>
> > >
> >
> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
> > > >>>
> > > >>> Your agent log should tell you why it failed to start if you set it
> > in
> > > >>> debug and try to start... or maybe cloudstack-agent.out if it
> doesn't
> > > >>> get far enough (say it's missing a class or something and can't
> > > >>> start).
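(The debug switch Marcus mentions is just the log4j config -- a blunt sketch,
assuming the stock file the DEBs drop in /etc/cloudstack/agent:

  sudo sed -i.bak 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
  sudo service cloudstack-agent restart
  tail -f /var/log/cloudstack/agent/agent.log   # plus cloudstack-agent.out, if it exists

)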
> > > >>>
> > > >>> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
> > > >>> <mi...@solidfire.com> wrote:
> > > >>> > Looks like it's running, though:
> > > >>> >
> > > >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> > > >>> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep --color=auto
> > > jsvc
> > > >>> >
> > > >>> >
> > > >>> >
> > > >>> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
> > > >>> > mike.tutkowski@solidfire.com> wrote:
> > > >>> >
> > > >>> >> Hey Marcus,
> > > >>> >>
> > > >>> >> Maybe you could give me a better idea of what the "flow" is when
> > > >>> adding a
> > > >>> >> KVM host.
> > > >>> >>
> > > >>> >> It looks like we SSH into the potential KVM host and execute a
> > > startup
> > > >>> >> script (giving it necessary info about the cloud and the
> > management
> > > >>> server
> > > >>> >> it should talk to).
> > > >>> >>
> > > >>> >> After this, is the Java VM started?
> > > >>> >>
> > > >>> >> After a reboot, I assume the JVM is started automatically?
> > > >>> >>
> > > >>> >> How do you debug your KVM-side Java code?
> > > >>> >>
> > > >>> >> Been looking through the logs and nothing obvious sticks out. I
> > will
> > > >>> have
> > > >>> >> another look.
> > > >>> >>
> > > >>> >> Thanks
> > > >>> >>
> > > >>> >>
> > > >>> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
> > > >>> >> mike.tutkowski@solidfire.com> wrote:
> > > >>> >>
> > > >>> >>> Hey Marcus,
> > > >>> >>>
> > > >>> >>> I've been investigating my issue with not being able to add a
> KVM
> > > >>> host to
> > > >>> >>> CS.
> > > >>> >>>
> > > >>> >>> For what it's worth, this comes back successful:
> > > >>> >>>
> > > >>> >>> SSHCmdHelper.sshExecuteCmd(sshConnection,
> "cloudstack-setup-agent
> > > " +
> > > >>> >>> parameters, 3);
> > > >>> >>>
> > > >>> >>> This is what the command looks like:
> > > >>> >>>
> > > >>> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
> > > >>> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> > > >>> --prvNic=cloudbr0
> > > >>> >>> --guestNic=cloudbr0
> > > >>> >>>
> > > >>> >>> The problem is this method in LibvirtServerDiscoverer never
> > finds a
> > > >>> >>> matching host in the DB:
> > > >>> >>>
> > > >>> >>> waitForHostConnect(long dcId, long podId, long clusterId,
> String
> > > guid)
> > > >>> >>>
> > > >>> >>> I assume once the KVM host is up and running that it's supposed
> > to
> > > >>> call
> > > >>> >>> into the CS MS so the DB can be updated as such?
> > > >>> >>>
> > > >>> >>> If so, the problem must be on the KVM side.
> > > >>> >>>
> > > >>> >>> I did run this again (from the KVM host) to see if the
> connection
> > > was
> > > >>> in
> > > >>> >>> place:
> > > >>> >>>
> > > >>> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> > > >>> >>>
> > > >>> >>> Trying 192.168.233.1...
> > > >>> >>>
> > > >>> >>> Connected to 192.168.233.1.
> > > >>> >>>
> > > >>> >>> Escape character is '^]'.
> > > >>> >>> So that looks good.
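(Two more sketch-level checks that can help pin down which side is failing; the
database and user names below are the usual defaults and may differ in your
install:

  # on the KVM host: is the agent actually holding a connection to the MS on 8250?
  sudo netstat -tnp | grep 8250

  # on the MS host: does a host row ever appear or change state after the add attempt?
  # (database "cloud"; it will prompt for the password)
  mysql -u cloud -p cloud -e 'SELECT id, name, type, status FROM host WHERE removed IS NULL'

If the TCP session is there but the host row never comes Up, the agent's
startup/configure path is the more likely suspect.)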
> > > >>> >>>
> > > >>> >>> I turned on more info in the debug log, but nothing obvious
> jumps
> > > out
> > > >>> as
> > > >>> >>> of yet.
> > > >>> >>>
> > > >>> >>> If you have any thoughts on this, please shoot them my way. :)
> > > >>> >>>
> > > >>> >>> Thanks!
> > > >>> >>>
> > > >>> >>>
> > > >>> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
> > > >>> >>> mike.tutkowski@solidfire.com> wrote:
> > > >>> >>>
> > > >>> >>>> First step is for me to get this working for KVM, though. :)
> > > >>> >>>>
> > > >>> >>>> Once I do that, I can perhaps make modifications to the
> storage
> > > >>> >>>> framework and hypervisor plug-ins to refactor the logic and
> > such.
> > > >>> >>>>
> > > >>> >>>>
> > > >>> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
> > > >>> >>>> mike.tutkowski@solidfire.com> wrote:
> > > >>> >>>>
> > > >>> >>>>> Same would work for KVM.
> > > >>> >>>>>
> > > >>> >>>>> If CreateCommand and DestroyCommand were called at the
> > > appropriate
> > > >>> >>>>> times by the storage framework, I could move my connect and
> > > >>> disconnect
> > > >>> >>>>> logic out of the attach/detach logic.
> > > >>> >>>>>
> > > >>> >>>>>
> > > >>> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
> > > >>> >>>>> mike.tutkowski@solidfire.com> wrote:
> > > >>> >>>>>
> > > >>> >>>>>> Conversely, if the storage framework called the
> DestroyCommand
> > > for
> > > >>> >>>>>> managed storage after the DetachCommand, then I could have
> had
> > > my
> > > >>> remove
> > > >>> >>>>>> SR/datastore logic placed in the DestroyCommand handling
> > rather
> > > >>> than in the
> > > >>> >>>>>> DetachCommand handling.
> > > >>> >>>>>>
> > > >>> >>>>>>
> > > >>> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
> > > >>> >>>>>> mike.tutkowski@solidfire.com> wrote:
> > > >>> >>>>>>
> > > >>> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
> > > >>> >>>>>>>
> > > >>> >>>>>>> The initial approach that was discussed during 4.2 was for
> me
> > > to
> > > >>> >>>>>>> modify the attach/detach logic only in the XenServer and
> > VMware
> > > >>> hypervisor
> > > >>> >>>>>>> plug-ins.
> > > >>> >>>>>>>
> > > >>> >>>>>>> Now that I think about it more, though, I kind of would
> have
> > > >>> liked to
> > > >>> >>>>>>> have the storage framework send a CreateCommand to the
> > > hypervisor
> > > >>> before
> > > >>> >>>>>>> sending the AttachCommand if the storage in question was
> > > managed.
> > > >>> >>>>>>>
> > > >>> >>>>>>> Then I could have created my SR/datastore in the
> > CreateCommand
> > > and
> > > >>> >>>>>>> the AttachCommand would have had the SR/datastore that it
> was
> > > >>> always
> > > >>> >>>>>>> expecting (and I wouldn't have had to create the
> SR/datastore
> > > in
> > > >>> the
> > > >>> >>>>>>> AttachCommand).
> > > >>> >>>>>>>
> > > >>> >>>>>>>
> > > >>> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
> > > >>> >>>>>>> shadowsor@gmail.com> wrote:
> > > >>> >>>>>>>
> > > >>> >>>>>>>> Yeah, I think it probably is as well, but I figured you'd
> be
> > > in a
> > > >>> >>>>>>>> better position to tell.
> > > >>> >>>>>>>>
> > > >>> >>>>>>>> I see that copyAsync is unsupported in your current 4.2
> > > driver,
> > > >>> does
> > > >>> >>>>>>>> that mean that there's no template support? Or is it some
> > > other
> > > >>> call
> > > >>> >>>>>>>> that does templating now? I'm still getting up to speed on
> > all
> > > >>> of the
> > > >>> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
> > > >>> >>>>>>>> LibvirtComputingResource, since that's the only place
> > > >>> >>>>>>>> createPhysicalDisk is called, and it occurred to me that
> > > >>> >>>>>>>> CreateCommand
> > > >>> >>>>>>>> might be skipped altogether when utilizing storage
> plugins.
> > > >>> >>>>>>>>
> > > >>> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
> > > >>> >>>>>>>> <mi...@solidfire.com> wrote:
> > > >>> >>>>>>>> > That's an interesting comment, Marcus.
> > > >>> >>>>>>>> >
> > > >>> >>>>>>>> > It was my intent that it should work with any CloudStack
> > > >>> "managed"
> > > >>> >>>>>>>> storage
> > > >>> >>>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I
> > > wrote
> > > >>> the
> > > >>> >>>>>>>> code so
> > > >>> >>>>>>>> > CHAP didn't have to be used.
> > > >>> >>>>>>>> >
> > > >>> >>>>>>>> > As I'm doing my testing, I can try to think about
> whether
> > > it is
> > > >>> >>>>>>>> generic
> > > >>> >>>>>>>> > enough to keep those names or not.
> > > >>> >>>>>>>> >
> > > >>> >>>>>>>> > My expectation is that it is generic enough.
> > > >>> >>>>>>>> >
> > > >>> >>>>>>>> >
> > > >>> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
> > > >>> >>>>>>>> shadowsor@gmail.com>wrote:
> > > >>> >>>>>>>> >
> > > >>> >>>>>>>> >> I added a comment to your diff. In general I think it
> > looks
> > > >>> good,
> > > >>> >>>>>>>> >> though I obviously can't vouch for whether or not it
> will
> > > >>> work.
> > > >>> >>>>>>>> One
> > > >>> >>>>>>>> >> thing I do have reservations about is the adaptor/pool
> > > >>> naming. If
> > > >>> >>>>>>>> you
> > > >>> >>>>>>>> >> think the code is generic enough that it will work for
> > > anyone
> > > >>> who
> > > >>> >>>>>>>> does
> > > >>> >>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if
> > > there's
> > > >>> >>>>>>>> anything
> > > >>> >>>>>>>> >> about it that's specific to YOUR iscsi target or how it
> > > likes
> > > >>> to
> > > >>> >>>>>>>> be
> > > >>> >>>>>>>> >> treated then I'd say that they should be named
> something
> > > less
> > > >>> >>>>>>>> generic
> > > >>> >>>>>>>> >> than iScsiAdmStorage.
> > > >>> >>>>>>>> >>
> > > >>> >>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
> > > >>> >>>>>>>> >> <mi...@solidfire.com> wrote:
> > > >>> >>>>>>>> >> > Great - thanks!
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> > Just to give you an overview of what my code does
> (for
> > > when
> > > >>> you
> > > >>> >>>>>>>> get a
> > > >>> >>>>>>>> >> > chance to review it):
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> > SolidFireHostListener is registered in
> > > >>> >>>>>>>> SolidfirePrimaryDataStoreProvider.
> > > >>> >>>>>>>> >> > Its hostConnect method is invoked when a host
> connects
> > > with
> > > >>> the
> > > >>> >>>>>>>> CS MS. If
> > > >>> >>>>>>>> >> > the host is running KVM, the listener sends a
> > > >>> >>>>>>>> ModifyStoragePoolCommand to
> > > >>> >>>>>>>> >> > the host. This logic was based off of
> > > DefaultHostListener.
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> > The handling of ModifyStoragePoolCommand is
> unchanged.
> > It
> > > >>> >>>>>>>> invokes
> > > >>> >>>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
> > > >>> >>>>>>>> KVMStoragePoolManager
> > > >>> >>>>>>>> >> > asks for an adaptor and finds my new one:
> > > >>> >>>>>>>> iScsiAdmStorageAdaptor (which
> > > >>> >>>>>>>> >> was
> > > >>> >>>>>>>> >> > registered in the constructor for
> KVMStoragePoolManager
> > > >>> under
> > > >>> >>>>>>>> the key of
> > > >>> >>>>>>>> >> > StoragePoolType.Iscsi.toString()).
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes
> an
> > > >>> instance
> > > >>> >>>>>>>> of
> > > >>> >>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns
> the
> > > >>> pointer
> > > >>> >>>>>>>> to the
> > > >>> >>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the
> > > UUID
> > > >>> of
> > > >>> >>>>>>>> the storage
> > > >>> >>>>>>>> >> > pool.
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> > When a volume is attached, createPhysicalDisk is
> > invoked
> > > for
> > > >>> >>>>>>>> managed
> > > >>> >>>>>>>> >> > storage rather than getPhysicalDisk.
> createPhysicalDisk
> > > uses
> > > >>> >>>>>>>> iscsiadm to
> > > >>> >>>>>>>> >> > establish the iSCSI connection to the volume on the
> SAN
> > > and
> > > >>> a
> > > >>> >>>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach
> > > logic
> > > >>> that
> > > >>> >>>>>>>> follows.
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked
> > > with
> > > >>> the
> > > >>> >>>>>>>> IQN of the
> > > >>> >>>>>>>> >> > volume if the storage pool in question is managed
> > > storage.
> > > >>> >>>>>>>> Otherwise, the
> > > >>> >>>>>>>> >> > normal vol.getPath() is used.
> > > >>> >>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
> > > >>> >>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used
> in
> > > the
> > > >>> >>>>>>>> detach logic.
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> > Once the volume has been detached,
> > > >>> >>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
> > > >>> >>>>>>>> >> > is invoked if the storage pool is managed.
> > > >>> deletePhysicalDisk
> > > >>> >>>>>>>> removes the
> > > >>> >>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
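(For anyone following along, the iscsiadm calls behind createPhysicalDisk and
deletePhysicalDisk boil down to roughly the following. The portal and IQN are
made-up examples, and the exact flags the adaptor ends up using may differ:

  iscsiadm -m discovery -t sendtargets -p 192.168.233.20:3260
  iscsiadm -m node -T iqn.2010-01.com.solidfire:vol-1 -p 192.168.233.20:3260 --login
  # the LUN then shows up as /dev/disk/by-path/ip-<portal>-iscsi-<iqn>-lun-0,
  # which is what gets handed to the VM as a raw disk
  iscsiadm -m node -T iqn.2010-01.com.solidfire:vol-1 -p 192.168.233.20:3260 --logout

)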
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
> > > >>> >>>>>>>> shadowsor@gmail.com
> > > >>> >>>>>>>> >> >wrote:
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> >> Its the log4j properties file in
> /etc/cloudstack/agent
> > > >>> change
> > > >>> >>>>>>>> all INFO
> > > >>> >>>>>>>> >> to
> > > >>> >>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you
> > can
> > > >>> tail
> > > >>> >>>>>>>> the log
> > > >>> >>>>>>>> >> when
> > > >>> >>>>>>>> >> >> you try to start the service, or maybe it will spit
> > > >>> something
> > > >>> >>>>>>>> out into
> > > >>> >>>>>>>> >> one
> > > >>> >>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
> > > >>> >>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
> > > >>> >>>>>>>> mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> >> wrote:
> > > >>> >>>>>>>> >> >>
> > > >>> >>>>>>>> >> >> > This is how I've been trying to query for the
> status
> > > of
> > > >>> the
> > > >>> >>>>>>>> service (I
> > > >>> >>>>>>>> >> >> > assume it could be started this way, as well, by
> > > changing
> > > >>> >>>>>>>> "status" to
> > > >>> >>>>>>>> >> >> > "start" or "restart"?):
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> > > >>> >>>>>>>> /usr/sbin/service
> > > >>> >>>>>>>> >> >> > cloudstack-agent status
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> > I get this back:
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> > Failed to execute: * could not access PID file for
> > > >>> >>>>>>>> cloudstack-agent
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> > I've made a bunch of code changes recently,
> though,
> > > so I
> > > >>> >>>>>>>> think I'm
> > > >>> >>>>>>>> >> going
> > > >>> >>>>>>>> >> >> to
> > > >>> >>>>>>>> >> >> > rebuild and redeploy everything.
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> > The debug info sounds helpful. Where can I set
> > > >>> enable.debug?
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> > Thanks, Marcus!
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
> > > >>> >>>>>>>> shadowsor@gmail.com
> > > >>> >>>>>>>> >> >> > >wrote:
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> > > OK, will check it out in the next few days. As
> > > >>> mentioned,
> > > >>> >>>>>>>> you can
> > > >>> >>>>>>>> >> set
> > > >>> >>>>>>>> >> >> up
> > > >>> >>>>>>>> >> >> > > your Ubuntu vm as the management server as well
> if
> > > all
> > > >>> >>>>>>>> else fails.
> > > >>> >>>>>>>> >>  If
> > > >>> >>>>>>>> >> >> > you
> > > >>> >>>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM
> > > host,
> > > >>> then
> > > >>> >>>>>>>> you need
> > > >>> >>>>>>>> >> to
> > > >>> >>>>>>>> >> >> > > enable.debug on the agent. It won't run without
> > > >>> >>>>>>>> complaining loudly
> > > >>> >>>>>>>> >> if
> > > >>> >>>>>>>> >> >> it
> > > >>> >>>>>>>> >> >> > > can't get to the mgmt server, and I didn't see
> > that
> > > in
> > > >>> >>>>>>>> your agent
> > > >>> >>>>>>>> >> log,
> > > >>> >>>>>>>> >> >> so
> > > >>> >>>>>>>> >> >> > > perhaps it's not running. I assume you know how to
> > > >>> >>>>>>>> >> >> > > stop/start the agent on KVM via 'service
> > > >>> >>>>>>>> >> >> > > cloudstack-agent'.
> > > >>> >>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
> > > >>> >>>>>>>> >> >> mike.tutkowski@solidfire.com>
> > > >>> >>>>>>>> >> >> > > wrote:
> > > >>> >>>>>>>> >> >> > >
> > > >>> >>>>>>>> >> >> > > > Hey Marcus,
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > > I haven't yet been able to test my new code,
> > but I
> > > >>> >>>>>>>> thought you
> > > >>> >>>>>>>> >> would
> > > >>> >>>>>>>> >> >> > be a
> > > >>> >>>>>>>> >> >> > > > good person to ask to review it:
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > >
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >>
> > > >>> >>>>>>>> >>
> > > >>> >>>>>>>>
> > > >>>
> > >
> >
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > > All it is supposed to do is attach and detach
> a
> > > data
> > > >>> >>>>>>>> disk (that
> > > >>> >>>>>>>> >> has
> > > >>> >>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor.
> The
> > > data
> > > >>> >>>>>>>> disk
> > > >>> >>>>>>>> >> happens to
> > > >>> >>>>>>>> >> >> > be
> > > >>> >>>>>>>> >> >> > > > from SolidFire-backed storage - where we have
> a
> > > 1:1
> > > >>> >>>>>>>> mapping
> > > >>> >>>>>>>> >> between a
> > > >>> >>>>>>>> >> >> > > > CloudStack volume and a data disk.
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > > There is no support for hypervisor snapshots
> or
> > > stuff
> > > >>> >>>>>>>> like that
> > > >>> >>>>>>>> >> >> > (likely a
> > > >>> >>>>>>>> >> >> > > > future release)...just attaching and
> detaching a
> > > data
> > > >>> >>>>>>>> disk in 4.3.
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > > Thanks!
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike
> > Tutkowski <
> > > >>> >>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
> > > >>> >>>>>>>> cloudstack-agent
> > > >>> >>>>>>>> >> >> first.
> > > >>> >>>>>>>> >> >> > > > Would
> > > >>> >>>>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get
> > > >>> install
> > > >>> >>>>>>>> >> >> > cloudstack-agent.
> > > >>> >>>>>>>> >> >> > > > >
> > > >>> >>>>>>>> >> >> > > > >
> > > >>> >>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike
> > > Tutkowski <
> > > >>> >>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
> > > >>> >>>>>>>> >> >> > > > >
> > > >>> >>>>>>>> >> >> > > > >> I get the same error running the command
> > > manually:
> > > >>> >>>>>>>> >> >> > > > >>
> > > >>> >>>>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$
> > sudo
> > > >>> >>>>>>>> >> /usr/sbin/service
> > > >>> >>>>>>>> >> >> > > > >> cloudstack-agent status
> > > >>> >>>>>>>> >> >> > > > >>  * could not access PID file for
> > > cloudstack-agent
> > > >>> >>>>>>>> >> >> > > > >>
> > > >>> >>>>>>>> >> >> > > > >>
> > > >>> >>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike
> > > Tutkowski <
> > > >>> >>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
> > > >>> >>>>>>>> >> >> > > > >>
> > > >>> >>>>>>>> >> >> > > > >>> agent.log looks OK to me:
> > > >>> >>>>>>>> >> >> > > > >>>
> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> > > >>> >>>>>>>> >> >> (main:null)
> > > >>> >>>>>>>> >> >> > > > Agent
> > > >>> >>>>>>>> >> >> > > > >>> started
> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> > > >>> >>>>>>>> >> >> (main:null)
> > > >>> >>>>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> > > >>> >>>>>>>> >> >> (main:null)
> > > >>> >>>>>>>> >> >> > > > >>> agent.properties found at
> > > >>> >>>>>>>> >> /etc/cloudstack/agent/agent.properties
> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> > > >>> >>>>>>>> >> >> (main:null)
> > > >>> >>>>>>>> >> >> > > > >>> Defaulting to using properties file for
> > > storage
> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
> > > >>> >>>>>>>>  [cloud.agent.AgentShell]
> > > >>> >>>>>>>> >> >> (main:null)
> > > >>> >>>>>>>> >> >> > > > >>> Defaulting to the constant time backoff
> > > algorithm
> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO
> > > >>>  [cloud.utils.LogUtils]
> > > >>> >>>>>>>> >> (main:null)
> > > >>> >>>>>>>> >> >> > > log4j
> > > >>> >>>>>>>> >> >> > > > >>> configuration found at
> > > >>> >>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO
> > >  [cloud.agent.Agent]
> > > >>> >>>>>>>> (main:null)
> > > >>> >>>>>>>> >> id
> > > >>> >>>>>>>> >> >> > is 3
> > > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
> > > >>> >>>>>>>> >> >> > > > >>>
> > >  [resource.virtualnetwork.VirtualRoutingResource]
> > > >>> >>>>>>>> (main:null)
> > > >>> >>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
> > > >>> >>>>>>>> >> >> scripts/network/domr/kvm
> > > >>> >>>>>>>> >> >> > > > >>>
> > > >>> >>>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was
> > > >>> >>>>>>>> important. This
> > > >>> >>>>>>>> >> seems
> > > >>> >>>>>>>> >> >> to
> > > >>> >>>>>>>> >> >> > > be
> > > >>> >>>>>>>> >> >> > > > a
> > > >>> >>>>>>>> >> >> > > > >>> problem, but I'm not sure what it might
> > > indicate:
> > > >>> >>>>>>>> >> >> > > > >>>
> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
> > > >>> >>>>>>>> cloudstack-agent
> > > >>> >>>>>>>> >> status
> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not
> > > access
> > > >>> PID
> > > >>> >>>>>>>> file for
> > > >>> >>>>>>>> >> >> > > > >>> cloudstack-agent
> > > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
> > > >>> >>>>>>>> cloudstack-agent
> > > >>> >>>>>>>> >> start
> > > >>> >>>>>>>> >> >> > > > >>>
> > > >>> >>>>>>>> >> >> > > > >>>
> > > >>> >>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus
> > > >>> Sorensen <
> > > >>> >>>>>>>> >> >> > > shadowsor@gmail.com
> > > >>> >>>>>>>> >> >> > > > >wrote:
> > > >>> >>>>>>>> >> >> > > > >>>
> > > >>> >>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought
> it
> > > was
> > > >>> the
> > > >>> >>>>>>>> agent log
> > > >>> >>>>>>>> >> for
> > > >>> >>>>>>>> >> >> > > some
> > > >>> >>>>>>>> >> >> > > > >>>> reason. Is the agent started? That might
> be
> > > the
> > > >>> >>>>>>>> place to
> > > >>> >>>>>>>> >> look.
> > > >>> >>>>>>>> >> >> > There
> > > >>> >>>>>>>> >> >> > > > is
> > > >>> >>>>>>>> >> >> > > > >>>> an
> > > >>> >>>>>>>> >> >> > > > >>>> agent log for the agent and one for the
> > setup
> > > >>> when
> > > >>> >>>>>>>> it adds
> > > >>> >>>>>>>> >> the
> > > >>> >>>>>>>> >> >> > host,
> > > >>> >>>>>>>> >> >> > > > >>>> both
> > > >>> >>>>>>>> >> >> > > > >>>> in /var/log
> > > >>> >>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike
> Tutkowski"
> > <
> > > >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
> > > >>> >>>>>>>> >> >> > > > >>>> wrote:
> > > >>> >>>>>>>> >> >> > > > >>>>
> > > >>> >>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP
> > > address
> > > >>> or
> > > >>> >>>>>>>> the KVM
> > > >>> >>>>>>>> >> host?
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
> > > >>> >>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>> > I see this for my host Global Settings
> > > >>> parameter:
> > > >>> >>>>>>>> >> >> > > > >>>> > hostThe ip address of management
> > > >>> >>>>>>>> server192.168.233.1
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties
> > has
> > > a
> > > >>> >>>>>>>> >> >> host=192.168.233.1
> > > >>> >>>>>>>> >> >> > > > value.
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM,
> Marcus
> > > >>> Sorensen
> > > >>> >>>>>>>> <
> > > >>> >>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
> > > >>> >>>>>>>> >> >> > > > >>>> > >wrote:
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>> > > The log says your mgmt server is
> > > >>> >>>>>>>> 192.168.233.10? But you
> > > >>> >>>>>>>> >> >> tried
> > > >>> >>>>>>>> >> >> > > to
> > > >>> >>>>>>>> >> >> > > > >>>> telnet
> > > >>> >>>>>>>> >> >> > > > >>>> > to
> > > >>> >>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to
> > > change
> > > >>> >>>>>>>> that in
> > > >>> >>>>>>>> >> >> > > > >>>> > >
> /etc/cloudstack/agent/agent.properties,
> > > but
> > > >>> you
> > > >>> >>>>>>>> may want
> > > >>> >>>>>>>> >> to
> > > >>> >>>>>>>> >> >> > edit
> > > >>> >>>>>>>> >> >> > > > the
> > > >>> >>>>>>>> >> >> > > > >>>> > config
> > > >>> >>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
> > > >>> >>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike
> > > Tutkowski" <
> > > >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> >> >> > > > >>>> > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > wrote:
> > > >>> >>>>>>>> >> >> > > > >>>> > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > Here's what my
> > /etc/network/interfaces
> > > >>> file
> > > >>> >>>>>>>> looks
> > > >>> >>>>>>>> >> like, if
> > > >>> >>>>>>>> >> >> > > that
> > > >>> >>>>>>>> >> >> > > > >>>> is of
> > > >>> >>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network
> > is
> > > the
> > > >>> >>>>>>>> NAT network
> > > >>> >>>>>>>> >> >> > VMware
> > > >>> >>>>>>>> >> >> > > > >>>> Fusion
> > > >>> >>>>>>>> >> >> > > > >>>> > set
> > > >>> >>>>>>>> >> >> > > > >>>> > > > up):
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto lo
> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto eth0
> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
> > > >>> >>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     post-up route add default gw
> > > >>> >>>>>>>> 192.168.233.2 metric 1
> > > >>> >>>>>>>> >> >> > > > >>>> > > >     pre-down route del default gw
> > > >>> >>>>>>>> 192.168.233.2
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM,
> > Mike
> > > >>> >>>>>>>> Tutkowski <
> > > >>> >>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com>
> wrote:
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is
> > > from
> > > >>> the
> > > >>> >>>>>>>> MS log
> > > >>> >>>>>>>> >> >> (below).
> > > >>> >>>>>>>> >> >> > > > >>>> Discovery
> > > >>> >>>>>>>> >> >> > > > >>>> > > > timed
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > out.
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be.
> My
> > > >>> network
> > > >>> >>>>>>>> settings
> > > >>> >>>>>>>> >> >> > > shouldn't
> > > >>> >>>>>>>> >> >> > > > >>>> have
> > > >>> >>>>>>>> >> >> > > > >>>> > > > changed
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host
> from
> > > the
> > > >>> MS
> > > >>> >>>>>>>> host and
> > > >>> >>>>>>>> >> vice
> > > >>> >>>>>>>> >> >> > > > versa.
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick
> off
> > a
> > > VM
> > > >>> on
> > > >>> >>>>>>>> the KVM
> > > >>> >>>>>>>> >> host
> > > >>> >>>>>>>> >> >> > and
> > > >>> >>>>>>>> >> >> > > > >>>> ping from
> > > >>> >>>>>>>> >> >> > > > >>>> > > it
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > to the MS host.
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X
> > host
> > > >>> (also
> > > >>> >>>>>>>> running
> > > >>> >>>>>>>> >> the
> > > >>> >>>>>>>> >> >> CS
> > > >>> >>>>>>>> >> >> > > MS)
> > > >>> >>>>>>>> >> >> > > > >>>> to the
> > > >>> >>>>>>>> >> >> > > > >>>> > VM
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> > > >>> >>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> > > >>> :ctx-6b28dc48)
> > > >>> >>>>>>>> Timeout,
> > > >>> >>>>>>>> >> to
> > > >>> >>>>>>>> >> >> > wait
> > > >>> >>>>>>>> >> >> > > > for
> > > >>> >>>>>>>> >> >> > > > >>>> the
> > > >>> >>>>>>>> >> >> > > > >>>> > > host
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming
> it
> > is
> > > >>> failed
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
> > > >>> >>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> > > >>> :ctx-6b28dc48)
> > > >>> >>>>>>>> Unable to
> > > >>> >>>>>>>> >> >> find
> > > >>> >>>>>>>> >> >> > > the
> > > >>> >>>>>>>> >> >> > > > >>>> server
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > resources at
> http://192.168.233.10
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
> > > >>> >>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> > > >>> :ctx-6b28dc48)
> > > >>> >>>>>>>> Could not
> > > >>> >>>>>>>> >> >> find
> > > >>> >>>>>>>> >> >> > > > >>>> exception:
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > com.cloud.exception.DiscoveryException
> > > >>> in
> > > >>> >>>>>>>> error code
> > > >>> >>>>>>>> >> >> list
> > > >>> >>>>>>>> >> >> > > for
> > > >>> >>>>>>>> >> >> > > > >>>> > > exceptions
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
> > > >>> >>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> > > >>> :ctx-6b28dc48)
> > > >>> >>>>>>>> Exception:
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > com.cloud.exception.DiscoveryException:
> > > >>> >>>>>>>> Unable to add
> > > >>> >>>>>>>> >> >> the
> > > >>> >>>>>>>> >> >> > > host
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > at
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > >
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>>
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > >
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >>
> > > >>> >>>>>>>> >>
> > > >>> >>>>>>>>
> > > >>>
> > >
> >
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in
> > > from
> > > >>> my
> > > >>> >>>>>>>> KVM host to
> > > >>> >>>>>>>> >> >> the
> > > >>> >>>>>>>> >> >> > MS
> > > >>> >>>>>>>> >> >> > > > >>>> host's
> > > >>> >>>>>>>> >> >> > > > >>>> > > 8250
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > port:
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet
> > > >>> 192.168.233.1
> > > >>> >>>>>>>> 8250
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
> > > >>> >>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
> > > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > > > --
> > > >>> >>>>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
> > > >>> >>>>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer,
> > SolidFire
> > > >>> Inc.*
> > > >>> >>>>>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> >> >> > > > >>>> > > > o: 303.746.7302
> > > >>> >>>>>>>> >> >> > > > >>>> > > > Advancing the way the world uses
> the
> > > >>> >>>>>>>> >> >> > > > >>>> > > > cloud<
> > > >>> >>>>>>>> >> http://solidfire.com/solution/overview/?video=play>
> > > >>> >>>>>>>> >> >> > > > >>>> > > > *™*
> > > >>> >>>>>>>> >> >> > > > >>>> > > >
> > > >>> >>>>>>>> >> >> > > > >>>> > >
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>> > --
> > > >>> >>>>>>>> >> >> > > > >>>> > *Mike Tutkowski*
> > > >>> >>>>>>>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire
> > > Inc.*
> > > >>> >>>>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> >> >> > > > >>>> > o: 303.746.7302
> > > >>> >>>>>>>> >> >> > > > >>>> > Advancing the way the world uses the
> > > >>> >>>>>>>> >> >> > > > >>>> > cloud<
> > > >>> >>>>>>>> http://solidfire.com/solution/overview/?video=play>
> > > >>> >>>>>>>> >> >> > > > >>>> > *™*
> > > >>> >>>>>>>> >> >> > > > >>>> >
> > > >>> >>>>>>>> >> >> > > > >>>>
> > > >>> >>>>>>>> >> >> > > > >>>
> > > >>> >>>>>>>> >> >> > > > >>>
> > > >>> >>>>>>>> >> >> > > > >>>
> > > >>> >>>>>>>> >> >> > > > >>> --
> > > >>> >>>>>>>> >> >> > > > >>> *Mike Tutkowski*
> > > >>> >>>>>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire
> > Inc.*
> > > >>> >>>>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> >> >> > > > >>> o: 303.746.7302
> > > >>> >>>>>>>> >> >> > > > >>> Advancing the way the world uses the
> cloud<
> > > >>> >>>>>>>> >> >> > > >
> > > http://solidfire.com/solution/overview/?video=play>
> > > >>> >>>>>>>> >> >> > > > >>> *™*
> > > >>> >>>>>>>> >> >> > > > >>>
> > > >>> >>>>>>>> >> >> > > > >>
> > > >>> >>>>>>>> >> >> > > > >>
> > > >>> >>>>>>>> >> >> > > > >>
> > > >>> >>>>>>>> >> >> > > > >> --
> > > >>> >>>>>>>> >> >> > > > >> *Mike Tutkowski*
> > > >>> >>>>>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire
> Inc.*
> > > >>> >>>>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> >> >> > > > >> o: 303.746.7302
> > > >>> >>>>>>>> >> >> > > > >> Advancing the way the world uses the cloud<
> > > >>> >>>>>>>> >> >> > > >
> > > http://solidfire.com/solution/overview/?video=play>
> > > >>> >>>>>>>> >> >> > > > >> *™*
> > > >>> >>>>>>>> >> >> > > > >>
> > > >>> >>>>>>>> >> >> > > > >
> > > >>> >>>>>>>> >> >> > > > >
> > > >>> >>>>>>>> >> >> > > > >
> > > >>> >>>>>>>> >> >> > > > > --
> > > >>> >>>>>>>> >> >> > > > > *Mike Tutkowski*
> > > >>> >>>>>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire
> Inc.*
> > > >>> >>>>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> >> >> > > > > o: 303.746.7302
> > > >>> >>>>>>>> >> >> > > > > Advancing the way the world uses the cloud<
> > > >>> >>>>>>>> >> >> > > >
> > > http://solidfire.com/solution/overview/?video=play>
> > > >>> >>>>>>>> >> >> > > > > *™*
> > > >>> >>>>>>>> >> >> > > > >
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > > > --
> > > >>> >>>>>>>> >> >> > > > *Mike Tutkowski*
> > > >>> >>>>>>>> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> >>>>>>>> >> >> > > > e: mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> >> >> > > > o: 303.746.7302
> > > >>> >>>>>>>> >> >> > > > Advancing the way the world uses the
> > > >>> >>>>>>>> >> >> > > > cloud<
> > > >>> http://solidfire.com/solution/overview/?video=play
> > > >>> >>>>>>>> >
> > > >>> >>>>>>>> >> >> > > > *™*
> > > >>> >>>>>>>> >> >> > > >
> > > >>> >>>>>>>> >> >> > >
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >> > --
> > > >>> >>>>>>>> >> >> > *Mike Tutkowski*
> > > >>> >>>>>>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> >>>>>>>> >> >> > e: mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> >> >> > o: 303.746.7302
> > > >>> >>>>>>>> >> >> > Advancing the way the world uses the
> > > >>> >>>>>>>> >> >> > cloud<
> > > http://solidfire.com/solution/overview/?video=play
> > > >>> >
> > > >>> >>>>>>>> >> >> > *™*
> > > >>> >>>>>>>> >> >> >
> > > >>> >>>>>>>> >> >>
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> >
> > > >>> >>>>>>>> >> > --
> > > >>> >>>>>>>> >> > *Mike Tutkowski*
> > > >>> >>>>>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> >>>>>>>> >> > e: mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> >> > o: 303.746.7302
> > > >>> >>>>>>>> >> > Advancing the way the world uses the
> > > >>> >>>>>>>> >> > cloud<
> > http://solidfire.com/solution/overview/?video=play
> > > >
> > > >>> >>>>>>>> >> > *™*
> > > >>> >>>>>>>> >>
> > > >>> >>>>>>>> >
> > > >>> >>>>>>>> >
> > > >>> >>>>>>>> >
> > > >>> >>>>>>>> > --
> > > >>> >>>>>>>> > *Mike Tutkowski*
> > > >>> >>>>>>>> > *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> >>>>>>>> > e: mike.tutkowski@solidfire.com
> > > >>> >>>>>>>> > o: 303.746.7302
> > > >>> >>>>>>>> > Advancing the way the world uses the
> > > >>> >>>>>>>> > cloud<
> http://solidfire.com/solution/overview/?video=play>
> > > >>> >>>>>>>> > *™*
> > > >>> >>>>>>>>
> > > >>> >>>>>>>
> > > >>> >>>>>>>
> > > >>> >>>>>>>
> > > >>> >>>>>>> --
> > > >>> >>>>>>> *Mike Tutkowski*
> > > >>> >>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> >>>>>>> e: mike.tutkowski@solidfire.com
> > > >>> >>>>>>> o: 303.746.7302
> > > >>> >>>>>>> Advancing the way the world uses the cloud<
> > > >>> http://solidfire.com/solution/overview/?video=play>
> > > >>> >>>>>>> *™*
> > > >>> >>>>>>>
> > > >>> >>>>>>
> > > >>> >>>>>>
> > > >>> >>>>>>
> > > >>> >>>>>> --
> > > >>> >>>>>> *Mike Tutkowski*
> > > >>> >>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> >>>>>> e: mike.tutkowski@solidfire.com
> > > >>> >>>>>> o: 303.746.7302
> > > >>> >>>>>> Advancing the way the world uses the cloud<
> > > >>> http://solidfire.com/solution/overview/?video=play>
> > > >>> >>>>>> *™*
> > > >>> >>>>>>
> > > >>> >>>>>
> > > >>> >>>>>
> > > >>> >>>>>
> > > >>> >>>>> --
> > > >>> >>>>> *Mike Tutkowski*
> > > >>> >>>>> *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> >>>>> e: mike.tutkowski@solidfire.com
> > > >>> >>>>> o: 303.746.7302
> > > >>> >>>>> Advancing the way the world uses the cloud<
> > > >>> http://solidfire.com/solution/overview/?video=play>
> > > >>> >>>>> *™*
> > > >>> >>>>>
> > > >>> >>>>
> > > >>> >>>>
> > > >>> >>>>
> > > >>> >>>> --
> > > >>> >>>> *Mike Tutkowski*
> > > >>> >>>> *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> >>>> e: mike.tutkowski@solidfire.com
> > > >>> >>>> o: 303.746.7302
> > > >>> >>>> Advancing the way the world uses the cloud<
> > > >>> http://solidfire.com/solution/overview/?video=play>
> > > >>> >>>> *™*
> > > >>> >>>>
> > > >>> >>>
> > > >>> >>>
> > > >>> >>>
> > > >>> >>> --
> > > >>> >>> *Mike Tutkowski*
> > > >>> >>> *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> >>> e: mike.tutkowski@solidfire.com
> > > >>> >>> o: 303.746.7302
> > > >>> >>> Advancing the way the world uses the cloud<
> > > >>> http://solidfire.com/solution/overview/?video=play>
> > > >>> >>> *™*
> > > >>> >>>
> > > >>> >>
> > > >>> >>
> > > >>> >>
> > > >>> >> --
> > > >>> >> *Mike Tutkowski*
> > > >>> >> *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> >> e: mike.tutkowski@solidfire.com
> > > >>> >> o: 303.746.7302
> > > >>> >> Advancing the way the world uses the cloud<
> > > >>> http://solidfire.com/solution/overview/?video=play>
> > > >>> >> *™*
> > > >>> >>
> > > >>> >
> > > >>> >
> > > >>> >
> > > >>> > --
> > > >>> > *Mike Tutkowski*
> > > >>> > *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> > e: mike.tutkowski@solidfire.com
> > > >>> > o: 303.746.7302
> > > >>> > Advancing the way the world uses the
> > > >>> > cloud<http://solidfire.com/solution/overview/?video=play>
> > > >>> > *™*
> > > >>>
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> *Mike Tutkowski*
> > > >> *Senior CloudStack Developer, SolidFire Inc.*
> > > >> e: mike.tutkowski@solidfire.com
> > > >> o: 303.746.7302
> > > >> Advancing the way the world uses the cloud<
> > > http://solidfire.com/solution/overview/?video=play>
> > > >> *™*
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > *Mike Tutkowski*
> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > > e: mike.tutkowski@solidfire.com
> > > > o: 303.746.7302
> > > > Advancing the way the world uses the
> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > *™*
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
It might be a CentOS-specific thing. These are created by the init scripts.
Check your agent init script on Ubuntu and see if you can decipher where
it sends stdout.
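For example, something like this is usually enough to see where stdout goes
(a rough sketch; the init script and log directory below are the usual
package locations, so adjust them if your install differs):

    grep -nE 'out|log|exec' /etc/init.d/cloudstack-agent
    ls -l /var/log/cloudstack/agent/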
On Sep 23, 2013 5:21 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Weird...no such file exists.
>
>
> On Mon, Sep 23, 2013 at 4:54 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
>
> > maybe cloudstack-agent.out
> >
> > On Mon, Sep 23, 2013 at 4:44 PM, Mike Tutkowski
> > <mi...@solidfire.com> wrote:
> > > OK, so, nothing is screaming out in the logs. I did notice the
> following:
> > >
> > > From setup.log:
> > >
> > > DEBUG:root:execute:apparmor_status |grep libvirt
> > >
> > > DEBUG:root:Failed to execute:
> > >
> > >
> > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
> > >
> > > DEBUG:root:Failed to execute: * could not access PID file for
> > > cloudstack-agent
> > >
> > >
> > > This is the final line in this log file:
> > >
> > > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
> > >
> > >
> > > This is from agent.log:
> > >
> > > 2013-09-23 15:30:55,549 DEBUG [cloud.agent.AgentShell] (main:null)
> > Checking
> > > to see if agent.pid exists.
> > >
> > > 2013-09-23 15:30:55,655 DEBUG [cloud.utils.ProcessUtil] (main:null)
> > > Executing: bash -c echo $PPID
> > >
> > > 2013-09-23 15:30:55,742 DEBUG [cloud.utils.ProcessUtil] (main:null)
> > > Execution is successful.
> > >
> > > 2013-09-23 15:30:56,000 INFO  [cloud.agent.Agent] (main:null) id is
> > >
> > > 2013-09-23 15:30:56,000 DEBUG [cloud.resource.ServerResourceBase]
> > > (main:null) Retrieving network interface: cloudbr0
> > >
> > > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
> > > (main:null) Retrieving network interface: cloudbr0
> > >
> > > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
> > > (main:null) Retrieving network interface: null
> > >
> > > 2013-09-23 15:30:56,017 DEBUG [cloud.resource.ServerResourceBase]
> > > (main:null) Retrieving network interface: null
> > >
> > >
> > > The following kinds of lines are repeated for a bunch of different .sh
> > > files. I think they often end up being found here:
> > > /usr/share/cloudstack-common/scripts/network/domr, so this is probably
> > not
> > > an issue.
> > >
> > >
> > > 2013-09-23 15:30:56,111 DEBUG [utils.script.Script] (main:null) Looking
> > for
> > > call_firewall.sh in the classpath
> > >
> > > 2013-09-23 15:30:56,112 DEBUG [utils.script.Script] (main:null) System
> > > resource: null
> > >
> > > 2013-09-23 15:30:56,113 DEBUG [utils.script.Script] (main:null)
> Classpath
> > > resource: null
> > >
> > > 2013-09-23 15:30:56,123 DEBUG [utils.script.Script] (main:null) Looking
> > for
> > > call_firewall.sh
> > >
> > >
> > > Is there a log file for the Java code that I could write stuff out to
> and
> > > see how far we get?
> > >
> > >
> > > On Mon, Sep 23, 2013 at 3:17 PM, Mike Tutkowski <
> > > mike.tutkowski@solidfire.com> wrote:
> > >
> > >> Thanks, Marcus
> > >>
> > >> I've been developing on Windows for most of my time, so a bunch of
> these
> > >> Linux-type commands are new to me and I don't always interpret the
> > output
> > >> correctly. Getting there. :)
> > >>
> > >>
> > >> On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> > >>
> > >>> Nope, not running. That's just your grep process. It would look like:
> > >>>
> > >>> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
> > >>>
> > >>>
> >
> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
> > >>>
> > >>> Your agent log should tell you why it failed to start if you set it
> in
> > >>> debug and try to start... or maybe cloudstack-agent.out if it doesn't
> > >>> get far enough (say it's missing a class or something and can't
> > >>> start).
> > >>>
> > >>> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
> > >>> <mi...@solidfire.com> wrote:
> > >>> > Looks like it's running, though:
> > >>> >
> > >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> > >>> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep --color=auto
> > jsvc
> > >>> >
> > >>> >
> > >>> >
> > >>> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
> > >>> > mike.tutkowski@solidfire.com> wrote:
> > >>> >
> > >>> >> Hey Marcus,
> > >>> >>
> > >>> >> Maybe you could give me a better idea of what the "flow" is when
> > >>> adding a
> > >>> >> KVM host.
> > >>> >>
> > >>> >> It looks like we SSH into the potential KVM host and execute a
> > startup
> > >>> >> script (giving it necessary info about the cloud and the
> management
> > >>> server
> > >>> >> it should talk to).
> > >>> >>
> > >>> >> After this, is the Java VM started?
> > >>> >>
> > >>> >> After a reboot, I assume the JVM is started automatically?
> > >>> >>
> > >>> >> How do you debug your KVM-side Java code?
> > >>> >>
> > >>> >> Been looking through the logs and nothing obvious sticks out. I
> will
> > >>> have
> > >>> >> another look.
> > >>> >>
> > >>> >> Thanks
> > >>> >>
> > >>> >>
> > >>> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
> > >>> >> mike.tutkowski@solidfire.com> wrote:
> > >>> >>
> > >>> >>> Hey Marcus,
> > >>> >>>
> > >>> >>> I've been investigating my issue with not being able to add a KVM
> > >>> host to
> > >>> >>> CS.
> > >>> >>>
> > >>> >>> For what it's worth, this comes back successful:
> > >>> >>>
> > >>> >>> SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent
> > " +
> > >>> >>> parameters, 3);
> > >>> >>>
> > >>> >>> This is what the command looks like:
> > >>> >>>
> > >>> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
> > >>> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> > >>> --prvNic=cloudbr0
> > >>> >>> --guestNic=cloudbr0
> > >>> >>>
> > >>> >>> The problem is this method in LibvirtServerDiscoverer never
> finds a
> > >>> >>> matching host in the DB:
> > >>> >>>
> > >>> >>> waitForHostConnect(long dcId, long podId, long clusterId, String
> > guid)
> > >>> >>>
> > >>> >>> I assume once the KVM host is up and running that it's supposed
> to
> > >>> call
> > >>> >>> into the CS MS so the DB can be updated as such?
> > >>> >>>
> > >>> >>> If so, the problem must be on the KVM side.
> > >>> >>>
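For reference, one way to check whether that registration ever happened is to
look for the host row in the database (a sketch only, assuming the default
"cloud" MySQL database and user):

    mysql -u cloud -p -e "select id, name, status, mgmt_server_id from cloud.host where private_ip_address = '192.168.233.10'\G"

If no row appears, or it never leaves a disconnected state, the agent never
registered with the management server.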
> > >>> >>> I did run this again (from the KVM host) to see if the connection
> > was
> > >>> in
> > >>> >>> place:
> > >>> >>>
> > >>> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> > >>> >>>
> > >>> >>> Trying 192.168.233.1...
> > >>> >>>
> > >>> >>> Connected to 192.168.233.1.
> > >>> >>>
> > >>> >>> Escape character is '^]'.
> > >>> >>> So that looks good.
> > >>> >>>
> > >>> >>> I turned on more info in the debug log, but nothing obvious jumps
> > out
> > >>> as
> > >>> >>> of yet.
> > >>> >>>
> > >>> >>> If you have any thoughts on this, please shoot them my way. :)
> > >>> >>>
> > >>> >>> Thanks!
> > >>> >>>
> > >>> >>>
> > >>> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
> > >>> >>> mike.tutkowski@solidfire.com> wrote:
> > >>> >>>
> > >>> >>>> First step is for me to get this working for KVM, though. :)
> > >>> >>>>
> > >>> >>>> Once I do that, I can perhaps make modifications to the storage
> > >>> >>>> framework and hypervisor plug-ins to refactor the logic and
> such.
> > >>> >>>>
> > >>> >>>>
> > >>> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
> > >>> >>>> mike.tutkowski@solidfire.com> wrote:
> > >>> >>>>
> > >>> >>>>> Same would work for KVM.
> > >>> >>>>>
> > >>> >>>>> If CreateCommand and DestroyCommand were called at the
> > appropriate
> > >>> >>>>> times by the storage framework, I could move my connect and
> > >>> disconnect
> > >>> >>>>> logic out of the attach/detach logic.
> > >>> >>>>>
> > >>> >>>>>
> > >>> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
> > >>> >>>>> mike.tutkowski@solidfire.com> wrote:
> > >>> >>>>>
> > >>> >>>>>> Conversely, if the storage framework called the DestroyCommand
> > for
> > >>> >>>>>> managed storage after the DetachCommand, then I could have had
> > my
> > >>> remove
> > >>> >>>>>> SR/datastore logic placed in the DestroyCommand handling
> rather
> > >>> than in the
> > >>> >>>>>> DetachCommand handling.
> > >>> >>>>>>
> > >>> >>>>>>
> > >>> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
> > >>> >>>>>> mike.tutkowski@solidfire.com> wrote:
> > >>> >>>>>>
> > >>> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
> > >>> >>>>>>>
> > >>> >>>>>>> The initial approach that was discussed during 4.2 was for me
> > to
> > >>> >>>>>>> modify the attach/detach logic only in the XenServer and
> VMware
> > >>> hypervisor
> > >>> >>>>>>> plug-ins.
> > >>> >>>>>>>
> > >>> >>>>>>> Now that I think about it more, though, I kind of would have
> > >>> liked to
> > >>> >>>>>>> have the storage framework send a CreateCommand to the
> > hypervisor
> > >>> before
> > >>> >>>>>>> sending the AttachCommand if the storage in question was
> > managed.
> > >>> >>>>>>>
> > >>> >>>>>>> Then I could have created my SR/datastore in the
> CreateCommand
> > and
> > >>> >>>>>>> the AttachCommand would have had the SR/datastore that it was
> > >>> always
> > >>> >>>>>>> expecting (and I wouldn't have had to create the SR/datastore
> > in
> > >>> the
> > >>> >>>>>>> AttachCommand).
> > >>> >>>>>>>
> > >>> >>>>>>>
> > >>> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
> > >>> >>>>>>> shadowsor@gmail.com> wrote:
> > >>> >>>>>>>
> > >>> >>>>>>>> Yeah, I think it probably is as well, but I figured you'd be
> > in a
> > >>> >>>>>>>> better position to tell.
> > >>> >>>>>>>>
> > >>> >>>>>>>> I see that copyAsync is unsupported in your current 4.2
> > driver,
> > >>> does
> > >>> >>>>>>>> that mean that there's no template support? Or is it some
> > other
> > >>> call
> > >>> >>>>>>>> that does templating now? I'm still getting up to speed on
> all
> > >>> of the
> > >>> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
> > >>> >>>>>>>> LibvirtComputingResource, since that's the only place
> > >>> >>>>>>>> createPhysicalDisk is called, and it occurred to me that
> > >>> >>>>>>>> CreateCommand
> > >>> >>>>>>>> might be skipped altogether when utilizing storage plugins.
> > >>> >>>>>>>>
> > >>> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
> > >>> >>>>>>>> <mi...@solidfire.com> wrote:
> > >>> >>>>>>>> > That's an interesting comment, Marcus.
> > >>> >>>>>>>> >
> > >>> >>>>>>>> > It was my intent that it should work with any CloudStack
> > >>> "managed"
> > >>> >>>>>>>> storage
> > >>> >>>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I
> > wrote
> > >>> the
> > >>> >>>>>>>> code so
> > >>> >>>>>>>> > CHAP didn't have to be used.
> > >>> >>>>>>>> >
> > >>> >>>>>>>> > As I'm doing my testing, I can try to think about whether
> > it is
> > >>> >>>>>>>> generic
> > >>> >>>>>>>> > enough to keep those names or not.
> > >>> >>>>>>>> >
> > >>> >>>>>>>> > My expectation is that it is generic enough.
> > >>> >>>>>>>> >
> > >>> >>>>>>>> >
> > >>> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
> > >>> >>>>>>>> shadowsor@gmail.com>wrote:
> > >>> >>>>>>>> >
> > >>> >>>>>>>> >> I added a comment to your diff. In general I think it
> looks
> > >>> good,
> > >>> >>>>>>>> >> though I obviously can't vouch for whether or not it will
> > >>> work.
> > >>> >>>>>>>> One
> > >>> >>>>>>>> >> thing I do have reservations about is the adaptor/pool
> > >>> naming. If
> > >>> >>>>>>>> you
> > >>> >>>>>>>> >> think the code is generic enough that it will work for
> > anyone
> > >>> who
> > >>> >>>>>>>> does
> > >>> >>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if
> > there's
> > >>> >>>>>>>> anything
> > >>> >>>>>>>> >> about it that's specific to YOUR iscsi target or how it
> > likes
> > >>> to
> > >>> >>>>>>>> be
> > >>> >>>>>>>> >> treated then I'd say that they should be named something
> > less
> > >>> >>>>>>>> generic
> > >>> >>>>>>>> >> than iScsiAdmStorage.
> > >>> >>>>>>>> >>
> > >>> >>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
> > >>> >>>>>>>> >> <mi...@solidfire.com> wrote:
> > >>> >>>>>>>> >> > Great - thanks!
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> > Just to give you an overview of what my code does (for
> > when
> > >>> you
> > >>> >>>>>>>> get a
> > >>> >>>>>>>> >> > chance to review it):
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> > SolidFireHostListener is registered in
> > >>> >>>>>>>> SolidfirePrimaryDataStoreProvider.
> > >>> >>>>>>>> >> > Its hostConnect method is invoked when a host connects
> > with
> > >>> the
> > >>> >>>>>>>> CS MS. If
> > >>> >>>>>>>> >> > the host is running KVM, the listener sends a
> > >>> >>>>>>>> ModifyStoragePoolCommand to
> > >>> >>>>>>>> >> > the host. This logic was based off of
> > DefaultHostListener.
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> > The handling of ModifyStoragePoolCommand is unchanged.
> It
> > >>> >>>>>>>> invokes
> > >>> >>>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
> > >>> >>>>>>>> KVMStoragePoolManager
> > >>> >>>>>>>> >> > asks for an adaptor and finds my new one:
> > >>> >>>>>>>> iScsiAdmStorageAdaptor (which
> > >>> >>>>>>>> >> was
> > >>> >>>>>>>> >> > registered in the constructor for KVMStoragePoolManager
> > >>> under
> > >>> >>>>>>>> the key of
> > >>> >>>>>>>> >> > StoragePoolType.Iscsi.toString()).
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an
> > >>> instance
> > >>> >>>>>>>> of
> > >>> >>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the
> > >>> pointer
> > >>> >>>>>>>> to the
> > >>> >>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the
> > UUID
> > >>> of
> > >>> >>>>>>>> the storage
> > >>> >>>>>>>> >> > pool.
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> > When a volume is attached, createPhysicalDisk is
> invoked
> > for
> > >>> >>>>>>>> managed
> > >>> >>>>>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk
> > uses
> > >>> >>>>>>>> iscsiadm to
> > >>> >>>>>>>> >> > establish the iSCSI connection to the volume on the SAN
> > and
> > >>> a
> > >>> >>>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach
> > logic
> > >>> that
> > >>> >>>>>>>> follows.
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked
> > with
> > >>> the
> > >>> >>>>>>>> IQN of the
> > >>> >>>>>>>> >> > volume if the storage pool in question is managed
> > storage.
> > >>> >>>>>>>> Otherwise, the
> > >>> >>>>>>>> >> > normal vol.getPath() is used.
> > >>> >>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
> > >>> >>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used in
> > the
> > >>> >>>>>>>> detach logic.
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> > Once the volume has been detached,
> > >>> >>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
> > >>> >>>>>>>> >> > is invoked if the storage pool is managed.
> > >>> deletePhysicalDisk
> > >>> >>>>>>>> removes the
> > >>> >>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> >
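For anyone not familiar with Open-iSCSI, the attach/detach sequence described
above corresponds roughly to the following iscsiadm calls (a sketch only; the
portal address and IQN are placeholders, not values from this setup):

    iscsiadm -m discovery -t sendtargets -p <san-portal-ip>:3260
    iscsiadm -m node -T <volume-iqn> -p <san-portal-ip>:3260 --login
    # the /dev/disk/by-path/ip-<portal>-iscsi-<iqn>-lun-0 device that shows up is what gets attached to the VM
    iscsiadm -m node -T <volume-iqn> -p <san-portal-ip>:3260 --logout
    iscsiadm -m node -T <volume-iqn> -p <san-portal-ip>:3260 -o delete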
> > >>> >>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
> > >>> >>>>>>>> shadowsor@gmail.com
> > >>> >>>>>>>> >> >wrote:
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> >> Its the log4j properties file in /etc/cloudstack/agent
> > >>> change
> > >>> >>>>>>>> all INFO
> > >>> >>>>>>>> >> to
> > >>> >>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you
> can
> > >>> tail
> > >>> >>>>>>>> the log
> > >>> >>>>>>>> >> when
> > >>> >>>>>>>> >> >> you try to start the service, or maybe it will spit
> > >>> something
> > >>> >>>>>>>> out into
> > >>> >>>>>>>> >> one
> > >>> >>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
> > >>> >>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
> > >>> >>>>>>>> mike.tutkowski@solidfire.com
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> >> wrote:
> > >>> >>>>>>>> >> >>
> > >>> >>>>>>>> >> >> > This is how I've been trying to query for the status
> > of
> > >>> the
> > >>> >>>>>>>> service (I
> > >>> >>>>>>>> >> >> > assume it could be started this way, as well, by
> > changing
> > >>> >>>>>>>> "status" to
> > >>> >>>>>>>> >> >> > "start" or "restart"?):
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> > >>> >>>>>>>> /usr/sbin/service
> > >>> >>>>>>>> >> >> > cloudstack-agent status
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> > I get this back:
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> > Failed to execute: * could not access PID file for
> > >>> >>>>>>>> cloudstack-agent
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> > I've made a bunch of code changes recently, though,
> > so I
> > >>> >>>>>>>> think I'm
> > >>> >>>>>>>> >> going
> > >>> >>>>>>>> >> >> to
> > >>> >>>>>>>> >> >> > rebuild and redeploy everything.
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> > The debug info sounds helpful. Where can I set
> > >>> enable.debug?
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> > Thanks, Marcus!
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
> > >>> >>>>>>>> shadowsor@gmail.com
> > >>> >>>>>>>> >> >> > >wrote:
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> > > OK, will check it out in the next few days. As
> > >>> mentioned,
> > >>> >>>>>>>> you can
> > >>> >>>>>>>> >> set
> > >>> >>>>>>>> >> >> up
> > >>> >>>>>>>> >> >> > > your Ubuntu vm as the management server as well if
> > all
> > >>> >>>>>>>> else fails.
> > >>> >>>>>>>> >>  If
> > >>> >>>>>>>> >> >> > you
> > >>> >>>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM
> > host,
> > >>> then
> > >>> >>>>>>>> you need
> > >>> >>>>>>>> >> to
> > >>> >>>>>>>> >> >> > > enable.debug on the agent. It won't run without
> > >>> >>>>>>>> complaining loudly
> > >>> >>>>>>>> >> if
> > >>> >>>>>>>> >> >> it
> > >>> >>>>>>>> >> >> > > can't get to the mgmt server, and I didn't see
> that
> > in
> > >>> >>>>>>>> your agent
> > >>> >>>>>>>> >> log,
> > >>> >>>>>>>> >> >> so
> > >>> >>>>>>>> >> >> > > perhaps its not running. I assume you know how to
> > >>> >>>>>>>> stop/start the
> > >>> >>>>>>>> >> agent
> > >>> >>>>>>>> >> >> on
> > >>> >>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
> > >>> >>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
> > >>> >>>>>>>> >> >> mike.tutkowski@solidfire.com>
> > >>> >>>>>>>> >> >> > > wrote:
> > >>> >>>>>>>> >> >> > >
> > >>> >>>>>>>> >> >> > > > Hey Marcus,
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > > I haven't yet been able to test my new code,
> but I
> > >>> >>>>>>>> thought you
> > >>> >>>>>>>> >> would
> > >>> >>>>>>>> >> >> > be a
> > >>> >>>>>>>> >> >> > > > good person to ask to review it:
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > >
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >>
> > >>> >>>>>>>> >>
> > >>> >>>>>>>>
> > >>>
> >
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > > All it is supposed to do is attach and detach a
> > data
> > >>> >>>>>>>> disk (that
> > >>> >>>>>>>> >> has
> > >>> >>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The
> > data
> > >>> >>>>>>>> disk
> > >>> >>>>>>>> >> happens to
> > >>> >>>>>>>> >> >> > be
> > >>> >>>>>>>> >> >> > > > from SolidFire-backed storage - where we have a
> > 1:1
> > >>> >>>>>>>> mapping
> > >>> >>>>>>>> >> between a
> > >>> >>>>>>>> >> >> > > > CloudStack volume and a data disk.
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > > There is no support for hypervisor snapshots or
> > stuff
> > >>> >>>>>>>> like that
> > >>> >>>>>>>> >> >> > (likely a
> > >>> >>>>>>>> >> >> > > > future release)...just attaching and detaching a
> > data
> > >>> >>>>>>>> disk in 4.3.
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > > Thanks!
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike
> Tutkowski <
> > >>> >>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
> > >>> >>>>>>>> cloudstack-agent
> > >>> >>>>>>>> >> >> first.
> > >>> >>>>>>>> >> >> > > > Would
> > >>> >>>>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get
> > >>> install
> > >>> >>>>>>>> >> >> > cloudstack-agent.
> > >>> >>>>>>>> >> >> > > > >
> > >>> >>>>>>>> >> >> > > > >
> > >>> >>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike
> > Tutkowski <
> > >>> >>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
> > >>> >>>>>>>> >> >> > > > >
> > >>> >>>>>>>> >> >> > > > >> I get the same error running the command
> > manually:
> > >>> >>>>>>>> >> >> > > > >>
> > >>> >>>>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$
> sudo
> > >>> >>>>>>>> >> /usr/sbin/service
> > >>> >>>>>>>> >> >> > > > >> cloudstack-agent status
> > >>> >>>>>>>> >> >> > > > >>  * could not access PID file for
> > cloudstack-agent
> > >>> >>>>>>>> >> >> > > > >>
> > >>> >>>>>>>> >> >> > > > >>
> > >>> >>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike
> > Tutkowski <
> > >>> >>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
> > >>> >>>>>>>> >> >> > > > >>
> > >>> >>>>>>>> >> >> > > > >>> agent.log looks OK to me:
> > >>> >>>>>>>> >> >> > > > >>>
> > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
> > >>> >>>>>>>>  [cloud.agent.AgentShell]
> > >>> >>>>>>>> >> >> (main:null)
> > >>> >>>>>>>> >> >> > > > Agent
> > >>> >>>>>>>> >> >> > > > >>> started
> > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
> > >>> >>>>>>>>  [cloud.agent.AgentShell]
> > >>> >>>>>>>> >> >> (main:null)
> > >>> >>>>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
> > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
> > >>> >>>>>>>>  [cloud.agent.AgentShell]
> > >>> >>>>>>>> >> >> (main:null)
> > >>> >>>>>>>> >> >> > > > >>> agent.properties found at
> > >>> >>>>>>>> >> /etc/cloudstack/agent/agent.properties
> > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
> > >>> >>>>>>>>  [cloud.agent.AgentShell]
> > >>> >>>>>>>> >> >> (main:null)
> > >>> >>>>>>>> >> >> > > > >>> Defaulting to using properties file for
> > storage
> > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
> > >>> >>>>>>>>  [cloud.agent.AgentShell]
> > >>> >>>>>>>> >> >> (main:null)
> > >>> >>>>>>>> >> >> > > > >>> Defaulting to the constant time backoff
> > algorithm
> > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO
> > >>>  [cloud.utils.LogUtils]
> > >>> >>>>>>>> >> (main:null)
> > >>> >>>>>>>> >> >> > > log4j
> > >>> >>>>>>>> >> >> > > > >>> configuration found at
> > >>> >>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
> > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO
> >  [cloud.agent.Agent]
> > >>> >>>>>>>> (main:null)
> > >>> >>>>>>>> >> id
> > >>> >>>>>>>> >> >> > is 3
> > >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
> > >>> >>>>>>>> >> >> > > > >>>
> >  [resource.virtualnetwork.VirtualRoutingResource]
> > >>> >>>>>>>> (main:null)
> > >>> >>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
> > >>> >>>>>>>> >> >> scripts/network/domr/kvm
> > >>> >>>>>>>> >> >> > > > >>>
> > >>> >>>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was
> > >>> >>>>>>>> important. This
> > >>> >>>>>>>> >> seems
> > >>> >>>>>>>> >> >> to
> > >>> >>>>>>>> >> >> > > be
> > >>> >>>>>>>> >> >> > > > a
> > >>> >>>>>>>> >> >> > > > >>> problem, but I'm not sure what it might
> > indicate:
> > >>> >>>>>>>> >> >> > > > >>>
> > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
> > >>> >>>>>>>> cloudstack-agent
> > >>> >>>>>>>> >> status
> > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not
> > access
> > >>> PID
> > >>> >>>>>>>> file for
> > >>> >>>>>>>> >> >> > > > >>> cloudstack-agent
> > >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
> > >>> >>>>>>>> cloudstack-agent
> > >>> >>>>>>>> >> start
> > >>> >>>>>>>> >> >> > > > >>>
> > >>> >>>>>>>> >> >> > > > >>>
> > >>> >>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus
> > >>> Sorensen <
> > >>> >>>>>>>> >> >> > > shadowsor@gmail.com
> > >>> >>>>>>>> >> >> > > > >wrote:
> > >>> >>>>>>>> >> >> > > > >>>
> > >>> >>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it
> > was
> > >>> the
> > >>> >>>>>>>> agent log
> > >>> >>>>>>>> >> for
> > >>> >>>>>>>> >> >> > > some
> > >>> >>>>>>>> >> >> > > > >>>> reason. Is the agent started? That might be
> > the
> > >>> >>>>>>>> place to
> > >>> >>>>>>>> >> look.
> > >>> >>>>>>>> >> >> > There
> > >>> >>>>>>>> >> >> > > > is
> > >>> >>>>>>>> >> >> > > > >>>> an
> > >>> >>>>>>>> >> >> > > > >>>> agent log for the agent and one for the
> setup
> > >>> when
> > >>> >>>>>>>> it adds
> > >>> >>>>>>>> >> the
> > >>> >>>>>>>> >> >> > host,
> > >>> >>>>>>>> >> >> > > > >>>> both
> > >>> >>>>>>>> >> >> > > > >>>> in /var/log
> > >>> >>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski"
> <
> > >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
> > >>> >>>>>>>> >> >> > > > >>>> wrote:
> > >>> >>>>>>>> >> >> > > > >>>>
> > >>> >>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP
> > address
> > >>> or
> > >>> >>>>>>>> the KVM
> > >>> >>>>>>>> >> host?
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
> > >>> >>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>> > I see this for my host Global Settings
> > >>> parameter:
> > >>> >>>>>>>> >> >> > > > >>>> > hostThe ip address of management
> > >>> >>>>>>>> server192.168.233.1
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties
> has
> > a
> > >>> >>>>>>>> >> >> host=192.168.233.1
> > >>> >>>>>>>> >> >> > > > value.
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus
> > >>> Sorensen
> > >>> >>>>>>>> <
> > >>> >>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
> > >>> >>>>>>>> >> >> > > > >>>> > >wrote:
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>> > > The log says your mgmt server is
> > >>> >>>>>>>> 192.168.233.10? But you
> > >>> >>>>>>>> >> >> tried
> > >>> >>>>>>>> >> >> > > to
> > >>> >>>>>>>> >> >> > > > >>>> telnet
> > >>> >>>>>>>> >> >> > > > >>>> > to
> > >>> >>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to
> > change
> > >>> >>>>>>>> that in
> > >>> >>>>>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties,
> > but
> > >>> you
> > >>> >>>>>>>> may want
> > >>> >>>>>>>> >> to
> > >>> >>>>>>>> >> >> > edit
> > >>> >>>>>>>> >> >> > > > the
> > >>> >>>>>>>> >> >> > > > >>>> > config
> > >>> >>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
> > >>> >>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike
> > Tutkowski" <
> > >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
> > >>> >>>>>>>> >> >> > > > >>>> > >
> > >>> >>>>>>>> >> >> > > > >>>> > > wrote:
> > >>> >>>>>>>> >> >> > > > >>>> > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > Here's what my
> /etc/network/interfaces
> > >>> file
> > >>> >>>>>>>> looks
> > >>> >>>>>>>> >> like, if
> > >>> >>>>>>>> >> >> > > that
> > >>> >>>>>>>> >> >> > > > >>>> is of
> > >>> >>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network
> is
> > the
> > >>> >>>>>>>> NAT network
> > >>> >>>>>>>> >> >> > VMware
> > >>> >>>>>>>> >> >> > > > >>>> Fusion
> > >>> >>>>>>>> >> >> > > > >>>> > set
> > >>> >>>>>>>> >> >> > > > >>>> > > > up):
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > auto lo
> > >>> >>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > auto eth0
> > >>> >>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
> > >>> >>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
> > >>> >>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
> > >>> >>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
> > >>> >>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
> > >>> >>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
> > >>> >>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
> > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
> > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
> > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
> > >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
> > >>> >>>>>>>> >> >> > > > >>>> > > >     post-up route add default gw
> > >>> >>>>>>>> 192.168.233.2 metric 1
> > >>> >>>>>>>> >> >> > > > >>>> > > >     pre-down route del default gw
> > >>> >>>>>>>> 192.168.233.2
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM,
> Mike
> > >>> >>>>>>>> Tutkowski <
> > >>> >>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is
> > from
> > >>> the
> > >>> >>>>>>>> MS log
> > >>> >>>>>>>> >> >> (below).
> > >>> >>>>>>>> >> >> > > > >>>> Discovery
> > >>> >>>>>>>> >> >> > > > >>>> > > > timed
> > >>> >>>>>>>> >> >> > > > >>>> > > > > out.
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My
> > >>> network
> > >>> >>>>>>>> settings
> > >>> >>>>>>>> >> >> > > shouldn't
> > >>> >>>>>>>> >> >> > > > >>>> have
> > >>> >>>>>>>> >> >> > > > >>>> > > > changed
> > >>> >>>>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from
> > the
> > >>> MS
> > >>> >>>>>>>> host and
> > >>> >>>>>>>> >> vice
> > >>> >>>>>>>> >> >> > > > versa.
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off
> a
> > VM
> > >>> on
> > >>> >>>>>>>> the KVM
> > >>> >>>>>>>> >> host
> > >>> >>>>>>>> >> >> > and
> > >>> >>>>>>>> >> >> > > > >>>> ping from
> > >>> >>>>>>>> >> >> > > > >>>> > > it
> > >>> >>>>>>>> >> >> > > > >>>> > > > > to the MS host.
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X
> host
> > >>> (also
> > >>> >>>>>>>> running
> > >>> >>>>>>>> >> the
> > >>> >>>>>>>> >> >> CS
> > >>> >>>>>>>> >> >> > > MS)
> > >>> >>>>>>>> >> >> > > > >>>> to the
> > >>> >>>>>>>> >> >> > > > >>>> > VM
> > >>> >>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> > >>> >>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> > >>> :ctx-6b28dc48)
> > >>> >>>>>>>> Timeout,
> > >>> >>>>>>>> >> to
> > >>> >>>>>>>> >> >> > wait
> > >>> >>>>>>>> >> >> > > > for
> > >>> >>>>>>>> >> >> > > > >>>> the
> > >>> >>>>>>>> >> >> > > > >>>> > > host
> > >>> >>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it
> is
> > >>> failed
> > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
> > >>> >>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
> > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> > >>> :ctx-6b28dc48)
> > >>> >>>>>>>> Unable to
> > >>> >>>>>>>> >> >> find
> > >>> >>>>>>>> >> >> > > the
> > >>> >>>>>>>> >> >> > > > >>>> server
> > >>> >>>>>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
> > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
> > >>> >>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
> > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> > >>> :ctx-6b28dc48)
> > >>> >>>>>>>> Could not
> > >>> >>>>>>>> >> >> find
> > >>> >>>>>>>> >> >> > > > >>>> exception:
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > com.cloud.exception.DiscoveryException
> > >>> in
> > >>> >>>>>>>> error code
> > >>> >>>>>>>> >> >> list
> > >>> >>>>>>>> >> >> > > for
> > >>> >>>>>>>> >> >> > > > >>>> > > exceptions
> > >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
> > >>> >>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
> > >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> > >>> :ctx-6b28dc48)
> > >>> >>>>>>>> Exception:
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > com.cloud.exception.DiscoveryException:
> > >>> >>>>>>>> Unable to add
> > >>> >>>>>>>> >> >> the
> > >>> >>>>>>>> >> >> > > host
> > >>> >>>>>>>> >> >> > > > >>>> > > > > at
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > >
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>>
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > >
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >>
> > >>> >>>>>>>> >>
> > >>> >>>>>>>>
> > >>>
> >
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in
> > from
> > >>> my
> > >>> >>>>>>>> KVM host to
> > >>> >>>>>>>> >> >> the
> > >>> >>>>>>>> >> >> > MS
> > >>> >>>>>>>> >> >> > > > >>>> host's
> > >>> >>>>>>>> >> >> > > > >>>> > > 8250
> > >>> >>>>>>>> >> >> > > > >>>> > > > > port:
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet
> > >>> 192.168.233.1
> > >>> >>>>>>>> 8250
> > >>> >>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
> > >>> >>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
> > >>> >>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
> > >>> >>>>>>>> >> >> > > > >>>> > > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > > > --
> > >>> >>>>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
> > >>> >>>>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer,
> SolidFire
> > >>> Inc.*
> > >>> >>>>>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
> > >>> >>>>>>>> >> >> > > > >>>> > > > o: 303.746.7302
> > >>> >>>>>>>> >> >> > > > >>>> > > > Advancing the way the world uses the
> > >>> >>>>>>>> >> >> > > > >>>> > > > cloud<
> > >>> >>>>>>>> >> http://solidfire.com/solution/overview/?video=play>
> > >>> >>>>>>>> >> >> > > > >>>> > > > *™*
> > >>> >>>>>>>> >> >> > > > >>>> > > >
> > >>> >>>>>>>> >> >> > > > >>>> > >
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>> > --
> > >>> >>>>>>>> >> >> > > > >>>> > *Mike Tutkowski*
> > >>> >>>>>>>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire
> > Inc.*
> > >>> >>>>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
> > >>> >>>>>>>> >> >> > > > >>>> > o: 303.746.7302
> > >>> >>>>>>>> >> >> > > > >>>> > Advancing the way the world uses the
> > >>> >>>>>>>> >> >> > > > >>>> > cloud<
> > >>> >>>>>>>> http://solidfire.com/solution/overview/?video=play>
> > >>> >>>>>>>> >> >> > > > >>>> > *™*
> > >>> >>>>>>>> >> >> > > > >>>> >
> > >>> >>>>>>>> >> >> > > > >>>>
> > >>> >>>>>>>> >> >> > > > >>>
> > >>> >>>>>>>> >> >> > > > >>>
> > >>> >>>>>>>> >> >> > > > >>>
> > >>> >>>>>>>> >> >> > > > >>> --
> > >>> >>>>>>>> >> >> > > > >>> *Mike Tutkowski*
> > >>> >>>>>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire
> Inc.*
> > >>> >>>>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
> > >>> >>>>>>>> >> >> > > > >>> o: 303.746.7302
> > >>> >>>>>>>> >> >> > > > >>> Advancing the way the world uses the cloud<
> > >>> >>>>>>>> >> >> > > >
> > http://solidfire.com/solution/overview/?video=play>
> > >>> >>>>>>>> >> >> > > > >>> *™*
> > >>> >>>>>>>> >> >> > > > >>>
> > >>> >>>>>>>> >> >> > > > >>
> > >>> >>>>>>>> >> >> > > > >>
> > >>> >>>>>>>> >> >> > > > >>
> > >>> >>>>>>>> >> >> > > > >> --
> > >>> >>>>>>>> >> >> > > > >> *Mike Tutkowski*
> > >>> >>>>>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>>>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
> > >>> >>>>>>>> >> >> > > > >> o: 303.746.7302
> > >>> >>>>>>>> >> >> > > > >> Advancing the way the world uses the cloud<
> > >>> >>>>>>>> >> >> > > >
> > http://solidfire.com/solution/overview/?video=play>
> > >>> >>>>>>>> >> >> > > > >> *™*
> > >>> >>>>>>>> >> >> > > > >>
> > >>> >>>>>>>> >> >> > > > >
> > >>> >>>>>>>> >> >> > > > >
> > >>> >>>>>>>> >> >> > > > >
> > >>> >>>>>>>> >> >> > > > > --
> > >>> >>>>>>>> >> >> > > > > *Mike Tutkowski*
> > >>> >>>>>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>>>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
> > >>> >>>>>>>> >> >> > > > > o: 303.746.7302
> > >>> >>>>>>>> >> >> > > > > Advancing the way the world uses the cloud<
> > >>> >>>>>>>> >> >> > > >
> > http://solidfire.com/solution/overview/?video=play>
> > >>> >>>>>>>> >> >> > > > > *™*
> > >>> >>>>>>>> >> >> > > > >
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > > > --
> > >>> >>>>>>>> >> >> > > > *Mike Tutkowski*
> > >>> >>>>>>>> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>>>>>>> >> >> > > > e: mike.tutkowski@solidfire.com
> > >>> >>>>>>>> >> >> > > > o: 303.746.7302
> > >>> >>>>>>>> >> >> > > > Advancing the way the world uses the
> > >>> >>>>>>>> >> >> > > > cloud<
> > >>> http://solidfire.com/solution/overview/?video=play
> > >>> >>>>>>>> >
> > >>> >>>>>>>> >> >> > > > *™*
> > >>> >>>>>>>> >> >> > > >
> > >>> >>>>>>>> >> >> > >
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >> > --
> > >>> >>>>>>>> >> >> > *Mike Tutkowski*
> > >>> >>>>>>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>>>>>>> >> >> > e: mike.tutkowski@solidfire.com
> > >>> >>>>>>>> >> >> > o: 303.746.7302
> > >>> >>>>>>>> >> >> > Advancing the way the world uses the
> > >>> >>>>>>>> >> >> > cloud<
> > http://solidfire.com/solution/overview/?video=play
> > >>> >
> > >>> >>>>>>>> >> >> > *™*
> > >>> >>>>>>>> >> >> >
> > >>> >>>>>>>> >> >>
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> >
> > >>> >>>>>>>> >> > --
> > >>> >>>>>>>> >> > *Mike Tutkowski*
> > >>> >>>>>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>>>>>>> >> > e: mike.tutkowski@solidfire.com
> > >>> >>>>>>>> >> > o: 303.746.7302
> > >>> >>>>>>>> >> > Advancing the way the world uses the
> > >>> >>>>>>>> >> > cloud<
> http://solidfire.com/solution/overview/?video=play
> > >
> > >>> >>>>>>>> >> > *™*
> > >>> >>>>>>>> >>
> > >>> >>>>>>>> >
> > >>> >>>>>>>> >
> > >>> >>>>>>>> >
> > >>> >>>>>>>> > --
> > >>> >>>>>>>> > *Mike Tutkowski*
> > >>> >>>>>>>> > *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>>>>>>> > e: mike.tutkowski@solidfire.com
> > >>> >>>>>>>> > o: 303.746.7302
> > >>> >>>>>>>> > Advancing the way the world uses the
> > >>> >>>>>>>> > cloud<http://solidfire.com/solution/overview/?video=play>
> > >>> >>>>>>>> > *™*
> > >>> >>>>>>>>
> > >>> >>>>>>>
> > >>> >>>>>>>
> > >>> >>>>>>>
> > >>> >>>>>>> --
> > >>> >>>>>>> *Mike Tutkowski*
> > >>> >>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>>>>>> e: mike.tutkowski@solidfire.com
> > >>> >>>>>>> o: 303.746.7302
> > >>> >>>>>>> Advancing the way the world uses the cloud<
> > >>> http://solidfire.com/solution/overview/?video=play>
> > >>> >>>>>>> *™*
> > >>> >>>>>>>
> > >>> >>>>>>
> > >>> >>>>>>
> > >>> >>>>>>
> > >>> >>>>>> --
> > >>> >>>>>> *Mike Tutkowski*
> > >>> >>>>>> *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>>>>> e: mike.tutkowski@solidfire.com
> > >>> >>>>>> o: 303.746.7302
> > >>> >>>>>> Advancing the way the world uses the cloud<
> > >>> http://solidfire.com/solution/overview/?video=play>
> > >>> >>>>>> *™*
> > >>> >>>>>>
> > >>> >>>>>
> > >>> >>>>>
> > >>> >>>>>
> > >>> >>>>> --
> > >>> >>>>> *Mike Tutkowski*
> > >>> >>>>> *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>>>> e: mike.tutkowski@solidfire.com
> > >>> >>>>> o: 303.746.7302
> > >>> >>>>> Advancing the way the world uses the cloud<
> > >>> http://solidfire.com/solution/overview/?video=play>
> > >>> >>>>> *™*
> > >>> >>>>>
> > >>> >>>>
> > >>> >>>>
> > >>> >>>>
> > >>> >>>> --
> > >>> >>>> *Mike Tutkowski*
> > >>> >>>> *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>>> e: mike.tutkowski@solidfire.com
> > >>> >>>> o: 303.746.7302
> > >>> >>>> Advancing the way the world uses the cloud<
> > >>> http://solidfire.com/solution/overview/?video=play>
> > >>> >>>> *™*
> > >>> >>>>
> > >>> >>>
> > >>> >>>
> > >>> >>>
> > >>> >>> --
> > >>> >>> *Mike Tutkowski*
> > >>> >>> *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >>> e: mike.tutkowski@solidfire.com
> > >>> >>> o: 303.746.7302
> > >>> >>> Advancing the way the world uses the cloud<
> > >>> http://solidfire.com/solution/overview/?video=play>
> > >>> >>> *™*
> > >>> >>>
> > >>> >>
> > >>> >>
> > >>> >>
> > >>> >> --
> > >>> >> *Mike Tutkowski*
> > >>> >> *Senior CloudStack Developer, SolidFire Inc.*
> > >>> >> e: mike.tutkowski@solidfire.com
> > >>> >> o: 303.746.7302
> > >>> >> Advancing the way the world uses the cloud<
> > >>> http://solidfire.com/solution/overview/?video=play>
> > >>> >> *™*
> > >>> >>
> > >>> >
> > >>> >
> > >>> >
> > >>> > --
> > >>> > *Mike Tutkowski*
> > >>> > *Senior CloudStack Developer, SolidFire Inc.*
> > >>> > e: mike.tutkowski@solidfire.com
> > >>> > o: 303.746.7302
> > >>> > Advancing the way the world uses the
> > >>> > cloud<http://solidfire.com/solution/overview/?video=play>
> > >>> > *™*
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >> *Mike Tutkowski*
> > >> *Senior CloudStack Developer, SolidFire Inc.*
> > >> e: mike.tutkowski@solidfire.com
> > >> o: 303.746.7302
> > >> Advancing the way the world uses the cloud<
> > http://solidfire.com/solution/overview/?video=play>
> > >> *™*
> > >>
> > >
> > >
> > >
> > > --
> > > *Mike Tutkowski*
> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > e: mike.tutkowski@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the
> > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > *™*
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Weird...no such file exists.


On Mon, Sep 23, 2013 at 4:54 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> maybe cloudstack-agent.out
>
> On Mon, Sep 23, 2013 at 4:44 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > OK, so, nothing is screaming out in the logs. I did notice the following:
> >
> > From setup.log:
> >
> > DEBUG:root:execute:apparmor_status |grep libvirt
> >
> > DEBUG:root:Failed to execute:
> >
> >
> > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
> >
> > DEBUG:root:Failed to execute: * could not access PID file for
> > cloudstack-agent
> >
> >
> > This is the final line in this log file:
> >
> > DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
> >
> >
> > This is from agent.log:
> >
> > 2013-09-23 15:30:55,549 DEBUG [cloud.agent.AgentShell] (main:null)
> Checking
> > to see if agent.pid exists.
> >
> > 2013-09-23 15:30:55,655 DEBUG [cloud.utils.ProcessUtil] (main:null)
> > Executing: bash -c echo $PPID
> >
> > 2013-09-23 15:30:55,742 DEBUG [cloud.utils.ProcessUtil] (main:null)
> > Execution is successful.
> >
> > 2013-09-23 15:30:56,000 INFO  [cloud.agent.Agent] (main:null) id is
> >
> > 2013-09-23 15:30:56,000 DEBUG [cloud.resource.ServerResourceBase]
> > (main:null) Retrieving network interface: cloudbr0
> >
> > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
> > (main:null) Retrieving network interface: cloudbr0
> >
> > 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
> > (main:null) Retrieving network interface: null
> >
> > 2013-09-23 15:30:56,017 DEBUG [cloud.resource.ServerResourceBase]
> > (main:null) Retrieving network interface: null
> >
> >
> > The following kinds of lines are repeated for a bunch of different .sh
> > files. I think they often end up being found here:
> > /usr/share/cloudstack-common/scripts/network/domr, so this is probably
> not
> > an issue.
> >
> >
> > 2013-09-23 15:30:56,111 DEBUG [utils.script.Script] (main:null) Looking
> for
> > call_firewall.sh in the classpath
> >
> > 2013-09-23 15:30:56,112 DEBUG [utils.script.Script] (main:null) System
> > resource: null
> >
> > 2013-09-23 15:30:56,113 DEBUG [utils.script.Script] (main:null) Classpath
> > resource: null
> >
> > 2013-09-23 15:30:56,123 DEBUG [utils.script.Script] (main:null) Looking
> for
> > call_firewall.sh
> >
> >
> > Is there a log file for the Java code that I could write stuff out to and
> > see how far we get?
> >
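
For what it's worth, the agent-side Java classes log through log4j, so anything written via a class-level logger ends up in /var/log/cloudstack/agent/agent.log once the log level allows it. A minimal sketch of the kind of trace lines that could be dropped into the code path being debugged (the class and method names here are illustrative, not from the actual patch):

    import org.apache.log4j.Logger;

    public class IscsiAttachTrace {
        // Agent classes conventionally declare a static log4j logger like this;
        // its output is routed by /etc/cloudstack/agent/log4j-cloud.xml into agent.log.
        private static final Logger s_logger = Logger.getLogger(IscsiAttachTrace.class);

        public void connectVolume(String iqn) {
            s_logger.debug("About to log in to iSCSI target for IQN " + iqn);
            // ... the real connect work would go here ...
            s_logger.debug("Finished iSCSI login for IQN " + iqn);
        }
    }

DEBUG-level messages only show up after the log level is raised, which is what the log4j change discussed further down does.
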
> >
> > On Mon, Sep 23, 2013 at 3:17 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> >> Thanks, Marcus
> >>
> >> I've been developing on Windows for most of my time, so a bunch of these
> >> Linux-type commands are new to me and I don't always interpret the
> output
> >> correctly. Getting there. :)
> >>
> >>
> >> On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >>
> >>> Nope, not running. That's just your grep process. It would look like:
> >>>
> >>> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
> >>>
> >>>
> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
> >>>
> >>> Your agent log should tell you why it failed to start if you set it in
> >>> debug and try to start... or maybe cloudstack-agent.out if it doesn't
> >>> get far enough (say it's missing a class or something and can't
> >>> start).
> >>>
> >>> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
> >>> <mi...@solidfire.com> wrote:
> >>> > Looks like it's running, though:
> >>> >
> >>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> >>> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep --color=auto
> jsvc
> >>> >
> >>> >
> >>> >
> >>> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
> >>> > mike.tutkowski@solidfire.com> wrote:
> >>> >
> >>> >> Hey Marcus,
> >>> >>
> >>> >> Maybe you could give me a better idea of what the "flow" is when
> >>> adding a
> >>> >> KVM host.
> >>> >>
> >>> >> It looks like we SSH into the potential KVM host and execute a
> startup
> >>> >> script (giving it necessary info about the cloud and the management
> >>> server
> >>> >> it should talk to).
> >>> >>
> >>> >> After this, is the Java VM started?
> >>> >>
> >>> >> After a reboot, I assume the JVM is started automatically?
> >>> >>
> >>> >> How do you debug your KVM-side Java code?
> >>> >>
> >>> >> Been looking through the logs and nothing obvious sticks out. I will
> >>> have
> >>> >> another look.
> >>> >>
> >>> >> Thanks
> >>> >>
> >>> >>
> >>> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
> >>> >> mike.tutkowski@solidfire.com> wrote:
> >>> >>
> >>> >>> Hey Marcus,
> >>> >>>
> >>> >>> I've been investigating my issue with not being able to add a KVM
> >>> host to
> >>> >>> CS.
> >>> >>>
> >>> >>> For what it's worth, this comes back successful:
> >>> >>>
> >>> >>> SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent
> " +
> >>> >>> parameters, 3);
> >>> >>>
> >>> >>> This is what the command looks like:
> >>> >>>
> >>> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
> >>> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> >>> --prvNic=cloudbr0
> >>> >>> --guestNic=cloudbr0
> >>> >>>
> >>> >>> The problem is this method in LibvirtServerDiscoverer never finds a
> >>> >>> matching host in the DB:
> >>> >>>
> >>> >>> waitForHostConnect(long dcId, long podId, long clusterId, String
> guid)
> >>> >>>
> >>> >>> I assume once the KVM host is up and running that it's supposed to
> >>> call
> >>> >>> into the CS MS so the DB can be updated as such?
> >>> >>>
> >>> >>> If so, the problem must be on the KVM side.
> >>> >>>
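
Put another way, waitForHostConnect is the management server polling until the freshly set-up agent has connected back on 8250 and registered under the expected GUID; if the agent never starts (or can't reach the MS), the poll times out and AddHostCmd fails with the DiscoveryException seen further down. A rough sketch of that wait, with the actual host-table lookup replaced by a placeholder predicate:

    import java.util.function.BooleanSupplier;

    public class WaitForHostSketch {
        // Illustrative only: the real discoverer checks the host table for a host
        // with the given GUID that has come up; here that check is a passed-in predicate.
        static boolean waitForHostConnect(BooleanSupplier hostRegistered, long timeoutMs)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (System.currentTimeMillis() < deadline) {
                if (hostRegistered.getAsBoolean()) {
                    return true;   // agent connected; discovery can proceed
                }
                Thread.sleep(5000L);
            }
            return false;          // "Timeout, to wait for the host connecting to mgt svr"
        }
    }
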
> >>> >>> I did run this again (from the KVM host) to see if the connection
> was
> >>> in
> >>> >>> place:
> >>> >>>
> >>> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> >>> >>>
> >>> >>> Trying 192.168.233.1...
> >>> >>>
> >>> >>> Connected to 192.168.233.1.
> >>> >>>
> >>> >>> Escape character is '^]'.
> >>> >>> So that looks good.
> >>> >>>
> >>> >>> I turned on more info in the debug log, but nothing obvious jumps
> out
> >>> as
> >>> >>> of yet.
> >>> >>>
> >>> >>> If you have any thoughts on this, please shoot them my way. :)
> >>> >>>
> >>> >>> Thanks!
> >>> >>>
> >>> >>>
> >>> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
> >>> >>> mike.tutkowski@solidfire.com> wrote:
> >>> >>>
> >>> >>>> First step is for me to get this working for KVM, though. :)
> >>> >>>>
> >>> >>>> Once I do that, I can perhaps make modifications to the storage
> >>> >>>> framework and hypervisor plug-ins to refactor the logic and such.
> >>> >>>>
> >>> >>>>
> >>> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
> >>> >>>> mike.tutkowski@solidfire.com> wrote:
> >>> >>>>
> >>> >>>>> Same would work for KVM.
> >>> >>>>>
> >>> >>>>> If CreateCommand and DestroyCommand were called at the
> appropriate
> >>> >>>>> times by the storage framework, I could move my connect and
> >>> disconnect
> >>> >>>>> logic out of the attach/detach logic.
> >>> >>>>>
> >>> >>>>>
> >>> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
> >>> >>>>> mike.tutkowski@solidfire.com> wrote:
> >>> >>>>>
> >>> >>>>>> Conversely, if the storage framework called the DestroyCommand
> for
> >>> >>>>>> managed storage after the DetachCommand, then I could have had
> my
> >>> remove
> >>> >>>>>> SR/datastore logic placed in the DestroyCommand handling rather
> >>> than in the
> >>> >>>>>> DetachCommand handling.
> >>> >>>>>>
> >>> >>>>>>
> >>> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
> >>> >>>>>> mike.tutkowski@solidfire.com> wrote:
> >>> >>>>>>
> >>> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
> >>> >>>>>>>
> >>> >>>>>>> The initial approach that was discussed during 4.2 was for me
> to
> >>> >>>>>>> modify the attach/detach logic only in the XenServer and VMware
> >>> hypervisor
> >>> >>>>>>> plug-ins.
> >>> >>>>>>>
> >>> >>>>>>> Now that I think about it more, though, I kind of would have
> >>> liked to
> >>> >>>>>>> have the storage framework send a CreateCommand to the
> hypervisor
> >>> before
> >>> >>>>>>> sending the AttachCommand if the storage in question was
> managed.
> >>> >>>>>>>
> >>> >>>>>>> Then I could have created my SR/datastore in the CreateCommand
> and
> >>> >>>>>>> the AttachCommand would have had the SR/datastore that it was
> >>> always
> >>> >>>>>>> expecting (and I wouldn't have had to create the SR/datastore
> in
> >>> the
> >>> >>>>>>> AttachCommand).
> >>> >>>>>>>
> >>> >>>>>>>
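
Roughly, the ordering being asked for on managed storage is create-then-attach on the way in and detach-then-destroy on the way out, so the connect/disconnect work (SR/datastore setup, or iscsiadm login/logout on KVM) has a natural home outside the attach/detach handlers. A small sketch of that sequencing (the interface and method names are illustrative, not CloudStack's actual classes):

    public class ManagedStorageOrderingSketch {
        // Stand-in for the hypervisor-side handlers discussed above.
        interface HypervisorHandlers {
            void create(String volumeIqn);   // would hold connect / SR-datastore setup
            void attach(String volumeIqn);   // then only attaches an already-present disk
            void detach(String volumeIqn);
            void destroy(String volumeIqn);  // would hold disconnect / SR-datastore teardown
        }

        static void attachManagedVolume(HypervisorHandlers h, String iqn) {
            h.create(iqn);   // CreateCommand first for managed storage ...
            h.attach(iqn);   // ... so AttachCommand finds the SR/datastore it expects
        }

        static void detachManagedVolume(HypervisorHandlers h, String iqn) {
            h.detach(iqn);   // DetachCommand first ...
            h.destroy(iqn);  // ... then DestroyCommand tears the SR/datastore down
        }
    }
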
> >>> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
> >>> >>>>>>> shadowsor@gmail.com> wrote:
> >>> >>>>>>>
> >>> >>>>>>>> Yeah, I think it probably is as well, but I figured you'd be
> in a
> >>> >>>>>>>> better position to tell.
> >>> >>>>>>>>
> >>> >>>>>>>> I see that copyAsync is unsupported in your current 4.2
> driver,
> >>> does
> >>> >>>>>>>> that mean that there's no template support? Or is it some
> other
> >>> call
> >>> >>>>>>>> that does templating now? I'm still getting up to speed on all
> >>> of the
> >>> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
> >>> >>>>>>>> LibvirtComputingResource, since that's the only place
> >>> >>>>>>>> createPhysicalDisk is called, and it occurred to me that
> >>> >>>>>>>> CreateCommand
> >>> >>>>>>>> might be skipped altogether when utilizing storage plugins.
> >>> >>>>>>>>
> >>> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
> >>> >>>>>>>> <mi...@solidfire.com> wrote:
> >>> >>>>>>>> > That's an interesting comment, Marcus.
> >>> >>>>>>>> >
> >>> >>>>>>>> > It was my intent that it should work with any CloudStack
> >>> "managed"
> >>> >>>>>>>> storage
> >>> >>>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I
> wrote
> >>> the
> >>> >>>>>>>> code so
> >>> >>>>>>>> > CHAP didn't have to be used.
> >>> >>>>>>>> >
> >>> >>>>>>>> > As I'm doing my testing, I can try to think about whether
> it is
> >>> >>>>>>>> generic
> >>> >>>>>>>> > enough to keep those names or not.
> >>> >>>>>>>> >
> >>> >>>>>>>> > My expectation is that it is generic enough.
> >>> >>>>>>>> >
> >>> >>>>>>>> >
> >>> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
> >>> >>>>>>>> shadowsor@gmail.com>wrote:
> >>> >>>>>>>> >
> >>> >>>>>>>> >> I added a comment to your diff. In general I think it looks
> >>> good,
> >>> >>>>>>>> >> though I obviously can't vouch for whether or not it will
> >>> work.
> >>> >>>>>>>> One
> >>> >>>>>>>> >> thing I do have reservations about is the adaptor/pool
> >>> naming. If
> >>> >>>>>>>> you
> >>> >>>>>>>> >> think the code is generic enough that it will work for
> anyone
> >>> who
> >>> >>>>>>>> does
> >>> >>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if
> there's
> >>> >>>>>>>> anything
> >>> >>>>>>>> >> about it that's specific to YOUR iscsi target or how it
> likes
> >>> to
> >>> >>>>>>>> be
> >>> >>>>>>>> >> treated then I'd say that they should be named something
> less
> >>> >>>>>>>> generic
> >>> >>>>>>>> >> than iScsiAdmStorage.
> >>> >>>>>>>> >>
> >>> >>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
> >>> >>>>>>>> >> <mi...@solidfire.com> wrote:
> >>> >>>>>>>> >> > Great - thanks!
> >>> >>>>>>>> >> >
> >>> >>>>>>>> >> > Just to give you an overview of what my code does (for
> when
> >>> you
> >>> >>>>>>>> get a
> >>> >>>>>>>> >> > chance to review it):
> >>> >>>>>>>> >> >
> >>> >>>>>>>> >> > SolidFireHostListener is registered in
> >>> >>>>>>>> SolidfirePrimaryDataStoreProvider.
> >>> >>>>>>>> >> > Its hostConnect method is invoked when a host connects
> with
> >>> the
> >>> >>>>>>>> CS MS. If
> >>> >>>>>>>> >> > the host is running KVM, the listener sends a
> >>> >>>>>>>> ModifyStoragePoolCommand to
> >>> >>>>>>>> >> > the host. This logic was based off of
> DefaultHostListener.
> >>> >>>>>>>> >> >
> >>> >>>>>>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It
> >>> >>>>>>>> invokes
> >>> >>>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
> >>> >>>>>>>> KVMStoragePoolManager
> >>> >>>>>>>> >> > asks for an adaptor and finds my new one:
> >>> >>>>>>>> iScsiAdmStorageAdaptor (which
> >>> >>>>>>>> >> was
> >>> >>>>>>>> >> > registered in the constructor for KVMStoragePoolManager
> >>> under
> >>> >>>>>>>> the key of
> >>> >>>>>>>> >> > StoragePoolType.Iscsi.toString()).
> >>> >>>>>>>> >> >
> >>> >>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an
> >>> instance
> >>> >>>>>>>> of
> >>> >>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the
> >>> pointer
> >>> >>>>>>>> to the
> >>> >>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the
> UUID
> >>> of
> >>> >>>>>>>> the storage
> >>> >>>>>>>> >> > pool.
> >>> >>>>>>>> >> >
> >>> >>>>>>>> >> > When a volume is attached, createPhysicalDisk is invoked
> for
> >>> >>>>>>>> managed
> >>> >>>>>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk
> uses
> >>> >>>>>>>> iscsiadm to
> >>> >>>>>>>> >> > establish the iSCSI connection to the volume on the SAN
> and
> >>> a
> >>> >>>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach
> logic
> >>> that
> >>> >>>>>>>> follows.
> >>> >>>>>>>> >> >
> >>> >>>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked
> with
> >>> the
> >>> >>>>>>>> IQN of the
> >>> >>>>>>>> >> > volume if the storage pool in question is managed
> storage.
> >>> >>>>>>>> Otherwise, the
> >>> >>>>>>>> >> > normal vol.getPath() is used.
> >>> >>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
> >>> >>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used in
> the
> >>> >>>>>>>> detach logic.
> >>> >>>>>>>> >> >
> >>> >>>>>>>> >> > Once the volume has been detached,
> >>> >>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
> >>> >>>>>>>> >> > is invoked if the storage pool is managed.
> >>> deletePhysicalDisk
> >>> >>>>>>>> removes the
> >>> >>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
> >>> >>>>>>>> >> >
> >>> >>>>>>>> >> >
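
To make the attach/detach part of that flow concrete, here is a rough sketch of what the iscsiadm calls behind createPhysicalDisk and deletePhysicalDisk amount to. It only shows the shape of the open-iscsi interaction; the helper names are made up, and the real adaptor would run these through the agent's script utilities rather than ProcessBuilder:

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    public class IscsiAdmCallsSketch {

        // Attach path: register the target backing one CloudStack volume and log in.
        public void connect(String targetIqn, String storageHost, int port)
                throws IOException, InterruptedException {
            run(Arrays.asList("iscsiadm", "-m", "node", "-T", targetIqn,
                    "-p", storageHost + ":" + port, "-o", "new"));
            run(Arrays.asList("iscsiadm", "-m", "node", "-T", targetIqn,
                    "-p", storageHost + ":" + port, "--login"));
            // The block device that appears under /dev/disk/by-path/... is what the
            // returned KVMPhysicalDisk would point the attach logic at.
        }

        // Detach path: log out and drop the node record once the disk is detached.
        public void disconnect(String targetIqn, String storageHost, int port)
                throws IOException, InterruptedException {
            run(Arrays.asList("iscsiadm", "-m", "node", "-T", targetIqn,
                    "-p", storageHost + ":" + port, "--logout"));
            run(Arrays.asList("iscsiadm", "-m", "node", "-T", targetIqn,
                    "-p", storageHost + ":" + port, "-o", "delete"));
        }

        private void run(List<String> cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + cmd);
            }
        }
    }
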
> >>> >>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
> >>> >>>>>>>> shadowsor@gmail.com
> >>> >>>>>>>> >> >wrote:
> >>> >>>>>>>> >> >
> >>> >>>>>>>> >> >> It's the log4j properties file in /etc/cloudstack/agent;
> >>> change
> >>> >>>>>>>> all INFO
> >>> >>>>>>>> >> to
> >>> >>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can
> >>> tail
> >>> >>>>>>>> the log
> >>> >>>>>>>> >> when
> >>> >>>>>>>> >> >> you try to start the service, or maybe it will spit
> >>> something
> >>> >>>>>>>> out into
> >>> >>>>>>>> >> one
> >>> >>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
> >>> >>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
> >>> >>>>>>>> mike.tutkowski@solidfire.com
> >>> >>>>>>>> >> >
> >>> >>>>>>>> >> >> wrote:
> >>> >>>>>>>> >> >>
> >>> >>>>>>>> >> >> > This is how I've been trying to query for the status
> of
> >>> the
> >>> >>>>>>>> service (I
> >>> >>>>>>>> >> >> > assume it could be started this way, as well, by
> changing
> >>> >>>>>>>> "status" to
> >>> >>>>>>>> >> >> > "start" or "restart"?):
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> >>> >>>>>>>> /usr/sbin/service
> >>> >>>>>>>> >> >> > cloudstack-agent status
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >> > I get this back:
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >> > Failed to execute: * could not access PID file for
> >>> >>>>>>>> cloudstack-agent
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >> > I've made a bunch of code changes recently, though,
> so I
> >>> >>>>>>>> think I'm
> >>> >>>>>>>> >> going
> >>> >>>>>>>> >> >> to
> >>> >>>>>>>> >> >> > rebuild and redeploy everything.
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >> > The debug info sounds helpful. Where can I set
> >>> enable.debug?
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >> > Thanks, Marcus!
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
> >>> >>>>>>>> shadowsor@gmail.com
> >>> >>>>>>>> >> >> > >wrote:
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >> > > OK, will check it out in the next few days. As
> >>> mentioned,
> >>> >>>>>>>> you can
> >>> >>>>>>>> >> set
> >>> >>>>>>>> >> >> up
> >>> >>>>>>>> >> >> > > your Ubuntu vm as the management server as well if
> all
> >>> >>>>>>>> else fails.
> >>> >>>>>>>> >>  If
> >>> >>>>>>>> >> >> > you
> >>> >>>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM
> host,
> >>> then
> >>> >>>>>>>> you need
> >>> >>>>>>>> >> to
> >>> >>>>>>>> >> >> > > enable.debug on the agent. It won't run without
> >>> >>>>>>>> complaining loudly
> >>> >>>>>>>> >> if
> >>> >>>>>>>> >> >> it
> >>> >>>>>>>> >> >> > > can't get to the mgmt server, and I didn't see that
> in
> >>> >>>>>>>> your agent
> >>> >>>>>>>> >> log,
> >>> >>>>>>>> >> >> so
> >>> >>>>>>>> >> >> > > perhaps it's not running. I assume you know how to
> >>> >>>>>>>> stop/start the
> >>> >>>>>>>> >> agent
> >>> >>>>>>>> >> >> on
> >>> >>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
> >>> >>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
> >>> >>>>>>>> >> >> mike.tutkowski@solidfire.com>
> >>> >>>>>>>> >> >> > > wrote:
> >>> >>>>>>>> >> >> > >
> >>> >>>>>>>> >> >> > > > Hey Marcus,
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > > > I haven't yet been able to test my new code, but I
> >>> >>>>>>>> thought you
> >>> >>>>>>>> >> would
> >>> >>>>>>>> >> >> > be a
> >>> >>>>>>>> >> >> > > > good person to ask to review it:
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > >
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >>
> >>> >>>>>>>> >>
> >>> >>>>>>>>
> >>>
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > > > All it is supposed to do is attach and detach a
> data
> >>> >>>>>>>> disk (that
> >>> >>>>>>>> >> has
> >>> >>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The
> data
> >>> >>>>>>>> disk
> >>> >>>>>>>> >> happens to
> >>> >>>>>>>> >> >> > be
> >>> >>>>>>>> >> >> > > > from SolidFire-backed storage - where we have a
> 1:1
> >>> >>>>>>>> mapping
> >>> >>>>>>>> >> between a
> >>> >>>>>>>> >> >> > > > CloudStack volume and a data disk.
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > > > There is no support for hypervisor snapshots or
> stuff
> >>> >>>>>>>> like that
> >>> >>>>>>>> >> >> > (likely a
> >>> >>>>>>>> >> >> > > > future release)...just attaching and detaching a
> data
> >>> >>>>>>>> disk in 4.3.
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > > > Thanks!
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
> >>> >>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
> >>> >>>>>>>> cloudstack-agent
> >>> >>>>>>>> >> >> first.
> >>> >>>>>>>> >> >> > > > Would
> >>> >>>>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get
> >>> install
> >>> >>>>>>>> >> >> > cloudstack-agent.
> >>> >>>>>>>> >> >> > > > >
> >>> >>>>>>>> >> >> > > > >
> >>> >>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike
> Tutkowski <
> >>> >>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
> >>> >>>>>>>> >> >> > > > >
> >>> >>>>>>>> >> >> > > > >> I get the same error running the command
> manually:
> >>> >>>>>>>> >> >> > > > >>
> >>> >>>>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> >>> >>>>>>>> >> /usr/sbin/service
> >>> >>>>>>>> >> >> > > > >> cloudstack-agent status
> >>> >>>>>>>> >> >> > > > >>  * could not access PID file for
> cloudstack-agent
> >>> >>>>>>>> >> >> > > > >>
> >>> >>>>>>>> >> >> > > > >>
> >>> >>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike
> Tutkowski <
> >>> >>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
> >>> >>>>>>>> >> >> > > > >>
> >>> >>>>>>>> >> >> > > > >>> agent.log looks OK to me:
> >>> >>>>>>>> >> >> > > > >>>
> >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
> >>> >>>>>>>>  [cloud.agent.AgentShell]
> >>> >>>>>>>> >> >> (main:null)
> >>> >>>>>>>> >> >> > > > Agent
> >>> >>>>>>>> >> >> > > > >>> started
> >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
> >>> >>>>>>>>  [cloud.agent.AgentShell]
> >>> >>>>>>>> >> >> (main:null)
> >>> >>>>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
> >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
> >>> >>>>>>>>  [cloud.agent.AgentShell]
> >>> >>>>>>>> >> >> (main:null)
> >>> >>>>>>>> >> >> > > > >>> agent.properties found at
> >>> >>>>>>>> >> /etc/cloudstack/agent/agent.properties
> >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
> >>> >>>>>>>>  [cloud.agent.AgentShell]
> >>> >>>>>>>> >> >> (main:null)
> >>> >>>>>>>> >> >> > > > >>> Defaulting to using properties file for
> storage
> >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
> >>> >>>>>>>>  [cloud.agent.AgentShell]
> >>> >>>>>>>> >> >> (main:null)
> >>> >>>>>>>> >> >> > > > >>> Defaulting to the constant time backoff
> algorithm
> >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO
> >>>  [cloud.utils.LogUtils]
> >>> >>>>>>>> >> (main:null)
> >>> >>>>>>>> >> >> > > log4j
> >>> >>>>>>>> >> >> > > > >>> configuration found at
> >>> >>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
> >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO
>  [cloud.agent.Agent]
> >>> >>>>>>>> (main:null)
> >>> >>>>>>>> >> id
> >>> >>>>>>>> >> >> > is 3
> >>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
> >>> >>>>>>>> >> >> > > > >>>
>  [resource.virtualnetwork.VirtualRoutingResource]
> >>> >>>>>>>> (main:null)
> >>> >>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
> >>> >>>>>>>> >> >> scripts/network/domr/kvm
> >>> >>>>>>>> >> >> > > > >>>
> >>> >>>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was
> >>> >>>>>>>> important. This
> >>> >>>>>>>> >> seems
> >>> >>>>>>>> >> >> to
> >>> >>>>>>>> >> >> > > be
> >>> >>>>>>>> >> >> > > > a
> >>> >>>>>>>> >> >> > > > >>> problem, but I'm not sure what it might
> indicate:
> >>> >>>>>>>> >> >> > > > >>>
> >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
> >>> >>>>>>>> cloudstack-agent
> >>> >>>>>>>> >> status
> >>> >>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not
> access
> >>> PID
> >>> >>>>>>>> file for
> >>> >>>>>>>> >> >> > > > >>> cloudstack-agent
> >>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
> >>> >>>>>>>> cloudstack-agent
> >>> >>>>>>>> >> start
> >>> >>>>>>>> >> >> > > > >>>
> >>> >>>>>>>> >> >> > > > >>>
> >>> >>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus
> >>> Sorensen <
> >>> >>>>>>>> >> >> > > shadowsor@gmail.com
> >>> >>>>>>>> >> >> > > > >wrote:
> >>> >>>>>>>> >> >> > > > >>>
> >>> >>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it
> was
> >>> the
> >>> >>>>>>>> agent log
> >>> >>>>>>>> >> for
> >>> >>>>>>>> >> >> > > some
> >>> >>>>>>>> >> >> > > > >>>> reason. Is the agent started? That might be
> the
> >>> >>>>>>>> place to
> >>> >>>>>>>> >> look.
> >>> >>>>>>>> >> >> > There
> >>> >>>>>>>> >> >> > > > is
> >>> >>>>>>>> >> >> > > > >>>> an
> >>> >>>>>>>> >> >> > > > >>>> agent log for the agent and one for the setup
> >>> when
> >>> >>>>>>>> it adds
> >>> >>>>>>>> >> the
> >>> >>>>>>>> >> >> > host,
> >>> >>>>>>>> >> >> > > > >>>> both
> >>> >>>>>>>> >> >> > > > >>>> in /var/log
> >>> >>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
> >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
> >>> >>>>>>>> >> >> > > > >>>> wrote:
> >>> >>>>>>>> >> >> > > > >>>>
> >>> >>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP
> address
> >>> or
> >>> >>>>>>>> the KVM
> >>> >>>>>>>> >> host?
> >>> >>>>>>>> >> >> > > > >>>> >
> >>> >>>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
> >>> >>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
> >>> >>>>>>>> >> >> > > > >>>> >
> >>> >>>>>>>> >> >> > > > >>>> > I see this for my host Global Settings
> >>> parameter:
> >>> >>>>>>>> >> >> > > > >>>> > host | The ip address of management server | 192.168.233.1
> >>> >>>>>>>> >> >> > > > >>>> >
> >>> >>>>>>>> >> >> > > > >>>> >
> >>> >>>>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has
> a
> >>> >>>>>>>> >> >> host=192.168.233.1
> >>> >>>>>>>> >> >> > > > value.
> >>> >>>>>>>> >> >> > > > >>>> >
> >>> >>>>>>>> >> >> > > > >>>> >
> >>> >>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus
> >>> Sorensen
> >>> >>>>>>>> <
> >>> >>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
> >>> >>>>>>>> >> >> > > > >>>> > >wrote:
> >>> >>>>>>>> >> >> > > > >>>> >
> >>> >>>>>>>> >> >> > > > >>>> > > The log says your mgmt server is
> >>> >>>>>>>> 192.168.233.10? But you
> >>> >>>>>>>> >> >> tried
> >>> >>>>>>>> >> >> > > to
> >>> >>>>>>>> >> >> > > > >>>> telnet
> >>> >>>>>>>> >> >> > > > >>>> > to
> >>> >>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to
> change
> >>> >>>>>>>> that in
> >>> >>>>>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties,
> but
> >>> you
> >>> >>>>>>>> may want
> >>> >>>>>>>> >> to
> >>> >>>>>>>> >> >> > edit
> >>> >>>>>>>> >> >> > > > the
> >>> >>>>>>>> >> >> > > > >>>> > config
> >>> >>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
> >>> >>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike
> Tutkowski" <
> >>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
> >>> >>>>>>>> >> >> > > > >>>> > >
> >>> >>>>>>>> >> >> > > > >>>> > > wrote:
> >>> >>>>>>>> >> >> > > > >>>> > >
> >>> >>>>>>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces
> >>> file
> >>> >>>>>>>> looks
> >>> >>>>>>>> >> like, if
> >>> >>>>>>>> >> >> > > that
> >>> >>>>>>>> >> >> > > > >>>> is of
> >>> >>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is
> the
> >>> >>>>>>>> NAT network
> >>> >>>>>>>> >> >> > VMware
> >>> >>>>>>>> >> >> > > > >>>> Fusion
> >>> >>>>>>>> >> >> > > > >>>> > set
> >>> >>>>>>>> >> >> > > > >>>> > > > up):
> >>> >>>>>>>> >> >> > > > >>>> > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > auto lo
> >>> >>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
> >>> >>>>>>>> >> >> > > > >>>> > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > auto eth0
> >>> >>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
> >>> >>>>>>>> >> >> > > > >>>> > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
> >>> >>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
> >>> >>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
> >>> >>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
> >>> >>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
> >>> >>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
> >>> >>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
> >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
> >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
> >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
> >>> >>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
> >>> >>>>>>>> >> >> > > > >>>> > > >     post-up route add default gw
> >>> >>>>>>>> 192.168.233.2 metric 1
> >>> >>>>>>>> >> >> > > > >>>> > > >     pre-down route del default gw
> >>> >>>>>>>> 192.168.233.2
> >>> >>>>>>>> >> >> > > > >>>> > > >
> >>> >>>>>>>> >> >> > > > >>>> > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike
> >>> >>>>>>>> Tutkowski <
> >>> >>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
> >>> >>>>>>>> >> >> > > > >>>> > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is
> from
> >>> the
> >>> >>>>>>>> MS log
> >>> >>>>>>>> >> >> (below).
> >>> >>>>>>>> >> >> > > > >>>> Discovery
> >>> >>>>>>>> >> >> > > > >>>> > > > timed
> >>> >>>>>>>> >> >> > > > >>>> > > > > out.
> >>> >>>>>>>> >> >> > > > >>>> > > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My
> >>> network
> >>> >>>>>>>> settings
> >>> >>>>>>>> >> >> > > shouldn't
> >>> >>>>>>>> >> >> > > > >>>> have
> >>> >>>>>>>> >> >> > > > >>>> > > > changed
> >>> >>>>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
> >>> >>>>>>>> >> >> > > > >>>> > > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from
> the
> >>> MS
> >>> >>>>>>>> host and
> >>> >>>>>>>> >> vice
> >>> >>>>>>>> >> >> > > > versa.
> >>> >>>>>>>> >> >> > > > >>>> > > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a
> VM
> >>> on
> >>> >>>>>>>> the KVM
> >>> >>>>>>>> >> host
> >>> >>>>>>>> >> >> > and
> >>> >>>>>>>> >> >> > > > >>>> ping from
> >>> >>>>>>>> >> >> > > > >>>> > > it
> >>> >>>>>>>> >> >> > > > >>>> > > > > to the MS host.
> >>> >>>>>>>> >> >> > > > >>>> > > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host
> >>> (also
> >>> >>>>>>>> running
> >>> >>>>>>>> >> the
> >>> >>>>>>>> >> >> CS
> >>> >>>>>>>> >> >> > > MS)
> >>> >>>>>>>> >> >> > > > >>>> to the
> >>> >>>>>>>> >> >> > > > >>>> > VM
> >>> >>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
> >>> >>>>>>>> >> >> > > > >>>> > > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> >>> >>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >>> :ctx-6b28dc48)
> >>> >>>>>>>> Timeout,
> >>> >>>>>>>> >> to
> >>> >>>>>>>> >> >> > wait
> >>> >>>>>>>> >> >> > > > for
> >>> >>>>>>>> >> >> > > > >>>> the
> >>> >>>>>>>> >> >> > > > >>>> > > host
> >>> >>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is
> >>> failed
> >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
> >>> >>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
> >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >>> :ctx-6b28dc48)
> >>> >>>>>>>> Unable to
> >>> >>>>>>>> >> >> find
> >>> >>>>>>>> >> >> > > the
> >>> >>>>>>>> >> >> > > > >>>> server
> >>> >>>>>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
> >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
> >>> >>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
> >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >>> :ctx-6b28dc48)
> >>> >>>>>>>> Could not
> >>> >>>>>>>> >> >> find
> >>> >>>>>>>> >> >> > > > >>>> exception:
> >>> >>>>>>>> >> >> > > > >>>> > > > >
> com.cloud.exception.DiscoveryException
> >>> in
> >>> >>>>>>>> error code
> >>> >>>>>>>> >> >> list
> >>> >>>>>>>> >> >> > > for
> >>> >>>>>>>> >> >> > > > >>>> > > exceptions
> >>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
> >>> >>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
> >>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> >>> :ctx-6b28dc48)
> >>> >>>>>>>> Exception:
> >>> >>>>>>>> >> >> > > > >>>> > > > >
> com.cloud.exception.DiscoveryException:
> >>> >>>>>>>> Unable to add
> >>> >>>>>>>> >> >> the
> >>> >>>>>>>> >> >> > > host
> >>> >>>>>>>> >> >> > > > >>>> > > > > at
> >>> >>>>>>>> >> >> > > > >>>> > > > >
> >>> >>>>>>>> >> >> > > > >>>> > > >
> >>> >>>>>>>> >> >> > > > >>>> > >
> >>> >>>>>>>> >> >> > > > >>>> >
> >>> >>>>>>>> >> >> > > > >>>>
> >>> >>>>>>>> >> >> > > >
> >>> >>>>>>>> >> >> > >
> >>> >>>>>>>> >> >> >
> >>> >>>>>>>> >> >>
> >>> >>>>>>>> >>
> >>> >>>>>>>>
> >>>
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> >>> >>>>>>>> >> >> > > > >>>> > > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in
> from
> >>> my
> >>> >>>>>>>> KVM host to
> >>> >>>>>>>> >> >> the
> >>> >>>>>>>> >> >> > MS
> >>> >>>>>>>> >> >> > > > >>>> host's
> >>> >>>>>>>> >> >> > > > >>>> > > 8250
> >>> >>>>>>>> >> >> > > > >>>> > > > > port:
> >>> >>>>>>>> >> >> > > > >>>> > > > >
> >>> >>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet
> >>> 192.168.233.1
> >>> >>>>>>>> 8250
> >>> >>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
> >>> >>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
> >>> >>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
> >>> >>>>>>>> >> >> > > > >>>> > > > >



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
maybe cloudstack-agent.out

On Mon, Sep 23, 2013 at 4:44 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> OK, so, nothing is screaming out in the logs. I did notice the following:
>
> From setup.log:
>
> DEBUG:root:execute:apparmor_status |grep libvirt
>
> DEBUG:root:Failed to execute:
>
>
> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
>
> DEBUG:root:Failed to execute: * could not access PID file for
> cloudstack-agent
>
>
> This is the final line in this log file:
>
> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
>
>
> This is from agent.log:
>
> 2013-09-23 15:30:55,549 DEBUG [cloud.agent.AgentShell] (main:null) Checking
> to see if agent.pid exists.
>
> 2013-09-23 15:30:55,655 DEBUG [cloud.utils.ProcessUtil] (main:null)
> Executing: bash -c echo $PPID
>
> 2013-09-23 15:30:55,742 DEBUG [cloud.utils.ProcessUtil] (main:null)
> Execution is successful.
>
> 2013-09-23 15:30:56,000 INFO  [cloud.agent.Agent] (main:null) id is
>
> 2013-09-23 15:30:56,000 DEBUG [cloud.resource.ServerResourceBase]
> (main:null) Retrieving network interface: cloudbr0
>
> 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
> (main:null) Retrieving network interface: cloudbr0
>
> 2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
> (main:null) Retrieving network interface: null
>
> 2013-09-23 15:30:56,017 DEBUG [cloud.resource.ServerResourceBase]
> (main:null) Retrieving network interface: null
>
>
> The following kinds of lines are repeated for a bunch of different .sh
> files. I think they often end up being found here:
> /usr/share/cloudstack-common/scripts/network/domr, so this is probably not
> an issue.
>
>
> 2013-09-23 15:30:56,111 DEBUG [utils.script.Script] (main:null) Looking for
> call_firewall.sh in the classpath
>
> 2013-09-23 15:30:56,112 DEBUG [utils.script.Script] (main:null) System
> resource: null
>
> 2013-09-23 15:30:56,113 DEBUG [utils.script.Script] (main:null) Classpath
> resource: null
>
> 2013-09-23 15:30:56,123 DEBUG [utils.script.Script] (main:null) Looking for
> call_firewall.sh
>
>
> Is there a log file for the Java code that I could write stuff out to and
> see how far we get?
>
>
> On Mon, Sep 23, 2013 at 3:17 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Thanks, Marcus
>>
>> I've been developing on Windows for most of my time, so a bunch of these
>> Linux-type commands are new to me and I don't always interpret the output
>> correctly. Getting there. :)
>>
>>
>> On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> Nope, not running. That's just your grep process. It would look like:
>>>
>>> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
>>>
>>> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
>>>
>>> Your agent log should tell you why it failed to start if you set it in
>>> debug and try to start... or maybe cloudstack-agent.out if it doesn't
>>> get far enough (say it's missing a class or something and can't
>>> start).
>>>
>>> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
>>> <mi...@solidfire.com> wrote:
>>> > Looks like it's running, though:
>>> >
>>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>>> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep --color=auto jsvc
>>> >
>>> >
>>> >
>>> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
>>> > mike.tutkowski@solidfire.com> wrote:
>>> >
>>> >> Hey Marcus,
>>> >>
>>> >> Maybe you could give me a better idea of what the "flow" is when
>>> adding a
>>> >> KVM host.
>>> >>
>>> >> It looks like we SSH into the potential KVM host and execute a startup
>>> >> script (giving it necessary info about the cloud and the management
>>> server
>>> >> it should talk to).
>>> >>
>>> >> After this, is the Java VM started?
>>> >>
>>> >> After a reboot, I assume the JVM is started automatically?
>>> >>
>>> >> How do you debug your KVM-side Java code?
>>> >>
>>> >> Been looking through the logs and nothing obvious sticks out. I will
>>> have
>>> >> another look.
>>> >>
>>> >> Thanks
>>> >>
>>> >>
>>> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
>>> >> mike.tutkowski@solidfire.com> wrote:
>>> >>
>>> >>> Hey Marcus,
>>> >>>
>>> >>> I've been investigating my issue with not being able to add a KVM
>>> host to
>>> >>> CS.
>>> >>>
>>> >>> For what it's worth, this comes back successful:
>>> >>>
>>> >>> SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent " +
>>> >>> parameters, 3);
>>> >>>
>>> >>> This is what the command looks like:
>>> >>>
>>> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
>>> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>>> --prvNic=cloudbr0
>>> >>> --guestNic=cloudbr0
>>> >>>
>>> >>> The problem is this method in LibvirtServerDiscoverer never finds a
>>> >>> matching host in the DB:
>>> >>>
>>> >>> waitForHostConnect(long dcId, long podId, long clusterId, String guid)
>>> >>>
>>> >>> I assume once the KVM host is up and running that it's supposed to
>>> call
>>> >>> into the CS MS so the DB can be updated as such?
>>> >>>
>>> >>> If so, the problem must be on the KVM side.
>>> >>>
>>> >>> I did run this again (from the KVM host) to see if the connection was
>>> in
>>> >>> place:
>>> >>>
>>> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>>> >>>
>>> >>> Trying 192.168.233.1...
>>> >>>
>>> >>> Connected to 192.168.233.1.
>>> >>>
>>> >>> Escape character is '^]'.
>>> >>> So that looks good.
>>> >>>
>>> >>> I turned on more info in the debug log, but nothing obvious jumps out
>>> as
>>> >>> of yet.
>>> >>>
>>> >>> If you have any thoughts on this, please shoot them my way. :)
>>> >>>
>>> >>> Thanks!
>>> >>>
>>> >>>
>>> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
>>> >>> mike.tutkowski@solidfire.com> wrote:
>>> >>>
>>> >>>> First step is for me to get this working for KVM, though. :)
>>> >>>>
>>> >>>> Once I do that, I can perhaps make modifications to the storage
>>> >>>> framework and hypervisor plug-ins to refactor the logic and such.
>>> >>>>
>>> >>>>
>>> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
>>> >>>> mike.tutkowski@solidfire.com> wrote:
>>> >>>>
>>> >>>>> Same would work for KVM.
>>> >>>>>
>>> >>>>> If CreateCommand and DestroyCommand were called at the appropriate
>>> >>>>> times by the storage framework, I could move my connect and
>>> disconnect
>>> >>>>> logic out of the attach/detach logic.
>>> >>>>>
>>> >>>>>
>>> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
>>> >>>>> mike.tutkowski@solidfire.com> wrote:
>>> >>>>>
>>> >>>>>> Conversely, if the storage framework called the DestroyCommand for
>>> >>>>>> managed storage after the DetachCommand, then I could have had my
>>> remove
>>> >>>>>> SR/datastore logic placed in the DestroyCommand handling rather
>>> than in the
>>> >>>>>> DetachCommand handling.
>>> >>>>>>
>>> >>>>>>
>>> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>>> >>>>>> mike.tutkowski@solidfire.com> wrote:
>>> >>>>>>
>>> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
>>> >>>>>>>
>>> >>>>>>> The initial approach that was discussed during 4.2 was for me to
>>> >>>>>>> modify the attach/detach logic only in the XenServer and VMware
>>> hypervisor
>>> >>>>>>> plug-ins.
>>> >>>>>>>
>>> >>>>>>> Now that I think about it more, though, I kind of would have
>>> liked to
>>> >>>>>>> have the storage framework send a CreateCommand to the hypervisor
>>> before
>>> >>>>>>> sending the AttachCommand if the storage in question was managed.
>>> >>>>>>>
>>> >>>>>>> Then I could have created my SR/datastore in the CreateCommand and
>>> >>>>>>> the AttachCommand would have had the SR/datastore that it was
>>> always
>>> >>>>>>> expecting (and I wouldn't have had to create the SR/datastore in
>>> the
>>> >>>>>>> AttachCommand).
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
>>> >>>>>>> shadowsor@gmail.com> wrote:
>>> >>>>>>>
>>> >>>>>>>> Yeah, I think it probably is as well, but I figured you'd be in a
>>> >>>>>>>> better position to tell.
>>> >>>>>>>>
>>> >>>>>>>> I see that copyAsync is unsupported in your current 4.2 driver,
>>> does
>>> >>>>>>>> that mean that there's no template support? Or is it some other
>>> call
>>> >>>>>>>> that does templating now? I'm still getting up to speed on all
>>> of the
>>> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
>>> >>>>>>>> LibvirtComputingResource, since that's the only place
>>> >>>>>>>> createPhysicalDisk is called, and it occurred to me that
>>> >>>>>>>> CreateCommand
>>> >>>>>>>> might be skipped altogether when utilizing storage plugins.
>>> >>>>>>>>
>>> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>>> >>>>>>>> <mi...@solidfire.com> wrote:
>>> >>>>>>>> > That's an interesting comment, Marcus.
>>> >>>>>>>> >
>>> >>>>>>>> > It was my intent that it should work with any CloudStack
>>> "managed"
>>> >>>>>>>> storage
>>> >>>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote
>>> the
>>> >>>>>>>> code so
>>> >>>>>>>> > CHAP didn't have to be used.
>>> >>>>>>>> >
>>> >>>>>>>> > As I'm doing my testing, I can try to think about whether it is
>>> >>>>>>>> generic
>>> >>>>>>>> > enough to keep those names or not.
>>> >>>>>>>> >
>>> >>>>>>>> > My expectation is that it is generic enough.
>>> >>>>>>>> >
>>> >>>>>>>> >
>>> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
>>> >>>>>>>> shadowsor@gmail.com>wrote:
>>> >>>>>>>> >
>>> >>>>>>>> >> I added a comment to your diff. In general I think it looks
>>> good,
>>> >>>>>>>> >> though I obviously can't vouch for whether or not it will
>>> work.
>>> >>>>>>>> One
>>> >>>>>>>> >> thing I do have reservations about is the adaptor/pool
>>> naming. If
>>> >>>>>>>> you
>>> >>>>>>>> >> think the code is generic enough that it will work for anyone
>>> who
>>> >>>>>>>> does
>>> >>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's
>>> >>>>>>>> anything
>>> >>>>>>>> >> about it that's specific to YOUR iscsi target or how it likes
>>> to
>>> >>>>>>>> be
>>> >>>>>>>> >> treated then I'd say that they should be named something less
>>> >>>>>>>> generic
>>> >>>>>>>> >> than iScsiAdmStorage.
>>> >>>>>>>> >>
>>> >>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>>> >>>>>>>> >> <mi...@solidfire.com> wrote:
>>> >>>>>>>> >> > Great - thanks!
>>> >>>>>>>> >> >
>>> >>>>>>>> >> > Just to give you an overview of what my code does (for when
>>> you
>>> >>>>>>>> get a
>>> >>>>>>>> >> > chance to review it):
>>> >>>>>>>> >> >
>>> >>>>>>>> >> > SolidFireHostListener is registered in
>>> >>>>>>>> SolidfirePrimaryDataStoreProvider.
>>> >>>>>>>> >> > Its hostConnect method is invoked when a host connects with
>>> the
>>> >>>>>>>> CS MS. If
>>> >>>>>>>> >> > the host is running KVM, the listener sends a
>>> >>>>>>>> ModifyStoragePoolCommand to
>>> >>>>>>>> >> > the host. This logic was based off of DefaultHostListener.
>>> >>>>>>>> >> >
>>> >>>>>>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It
>>> >>>>>>>> invokes
>>> >>>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
>>> >>>>>>>> KVMStoragePoolManager
>>> >>>>>>>> >> > asks for an adaptor and finds my new one:
>>> >>>>>>>> iScsiAdmStorageAdaptor (which
>>> >>>>>>>> >> was
>>> >>>>>>>> >> > registered in the constructor for KVMStoragePoolManager
>>> under
>>> >>>>>>>> the key of
>>> >>>>>>>> >> > StoragePoolType.Iscsi.toString()).
>>> >>>>>>>> >> >
>>> >>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an
>>> instance
>>> >>>>>>>> of
>>> >>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the
>>> pointer
>>> >>>>>>>> to the
>>> >>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the UUID
>>> of
>>> >>>>>>>> the storage
>>> >>>>>>>> >> > pool.
>>> >>>>>>>> >> >
>>> >>>>>>>> >> > When a volume is attached, createPhysicalDisk is invoked for
>>> >>>>>>>> managed
>>> >>>>>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
>>> >>>>>>>> iscsiadm to
>>> >>>>>>>> >> > establish the iSCSI connection to the volume on the SAN and
>>> a
>>> >>>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach logic
>>> that
>>> >>>>>>>> follows.
>>> >>>>>>>> >> >
>>> >>>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked with
>>> the
>>> >>>>>>>> IQN of the
>>> >>>>>>>> >> > volume if the storage pool in question is managed storage.
>>> >>>>>>>> Otherwise, the
>>> >>>>>>>> >> > normal vol.getPath() is used.
>>> >>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>>> >>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used in the
>>> >>>>>>>> detach logic.
>>> >>>>>>>> >> >
>>> >>>>>>>> >> > Once the volume has been detached,
>>> >>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
>>> >>>>>>>> >> > is invoked if the storage pool is managed.
>>> deletePhysicalDisk
>>> >>>>>>>> removes the
>>> >>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
>>> >>>>>>>> >> >
>>> >>>>>>>> >> >
>>> >>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>>> >>>>>>>> shadowsor@gmail.com
>>> >>>>>>>> >> >wrote:
>>> >>>>>>>> >> >
>>> >>>>>>>> >> >> It's the log4j properties file in /etc/cloudstack/agent;
>>> change
>>> >>>>>>>> all INFO
>>> >>>>>>>> >> to
>>> >>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can
>>> tail
>>> >>>>>>>> the log
>>> >>>>>>>> >> when
>>> >>>>>>>> >> >> you try to start the service, or maybe it will spit
>>> something
>>> >>>>>>>> out into
>>> >>>>>>>> >> one
>>> >>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
>>> >>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>>> >>>>>>>> mike.tutkowski@solidfire.com
>>> >>>>>>>> >> >
>>> >>>>>>>> >> >> wrote:
>>> >>>>>>>> >> >>
>>> >>>>>>>> >> >> > This is how I've been trying to query for the status of
>>> the
>>> >>>>>>>> service (I
>>> >>>>>>>> >> >> > assume it could be started this way, as well, by changing
>>> >>>>>>>> "status" to
>>> >>>>>>>> >> >> > "start" or "restart"?):
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>> >>>>>>>> /usr/sbin/service
>>> >>>>>>>> >> >> > cloudstack-agent status
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> > I get this back:
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> > Failed to execute: * could not access PID file for
>>> >>>>>>>> cloudstack-agent
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> > I've made a bunch of code changes recently, though, so I
>>> >>>>>>>> think I'm
>>> >>>>>>>> >> going
>>> >>>>>>>> >> >> to
>>> >>>>>>>> >> >> > rebuild and redeploy everything.
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> > The debug info sounds helpful. Where can I set
>>> enable.debug?
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> > Thanks, Marcus!
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>>> >>>>>>>> shadowsor@gmail.com
>>> >>>>>>>> >> >> > >wrote:
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> > > OK, will check it out in the next few days. As
>>> mentioned,
>>> >>>>>>>> you can
>>> >>>>>>>> >> set
>>> >>>>>>>> >> >> up
>>> >>>>>>>> >> >> > > your Ubuntu vm as the management server as well if all
>>> >>>>>>>> else fails.
>>> >>>>>>>> >>  If
>>> >>>>>>>> >> >> > you
>>> >>>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM host,
>>> then
>>> >>>>>>>> you need
>>> >>>>>>>> >> to
>>> >>>>>>>> >> >> > > enable.debug on the agent. It won't run without
>>> >>>>>>>> complaining loudly
>>> >>>>>>>> >> if
>>> >>>>>>>> >> >> it
>>> >>>>>>>> >> >> > > can't get to the mgmt server, and I didn't see that in
>>> >>>>>>>> your agent
>>> >>>>>>>> >> log,
>>> >>>>>>>> >> >> so
>>> >>>>>>>> >> >> > > perhaps it's not running. I assume you know how to
>>> >>>>>>>> stop/start the
>>> >>>>>>>> >> agent
>>> >>>>>>>> >> >> on
>>> >>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
>>> >>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>>> >>>>>>>> >> >> mike.tutkowski@solidfire.com>
>>> >>>>>>>> >> >> > > wrote:
>>> >>>>>>>> >> >> > >
>>> >>>>>>>> >> >> > > > Hey Marcus,
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > > I haven't yet been able to test my new code, but I
>>> >>>>>>>> thought you
>>> >>>>>>>> >> would
>>> >>>>>>>> >> >> > be a
>>> >>>>>>>> >> >> > > > good person to ask to review it:
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > >
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >>
>>> >>>>>>>> >>
>>> >>>>>>>>
>>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > > All it is supposed to do is attach and detach a data
>>> >>>>>>>> disk (that
>>> >>>>>>>> >> has
>>> >>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data
>>> >>>>>>>> disk
>>> >>>>>>>> >> happens to
>>> >>>>>>>> >> >> > be
>>> >>>>>>>> >> >> > > > from SolidFire-backed storage - where we have a 1:1
>>> >>>>>>>> mapping
>>> >>>>>>>> >> between a
>>> >>>>>>>> >> >> > > > CloudStack volume and a data disk.
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > > There is no support for hypervisor snapshots or stuff
>>> >>>>>>>> like that
>>> >>>>>>>> >> >> > (likely a
>>> >>>>>>>> >> >> > > > future release)...just attaching and detaching a data
>>> >>>>>>>> disk in 4.3.
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > > Thanks!
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>>> >>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
>>> >>>>>>>> cloudstack-agent
>>> >>>>>>>> >> >> first.
>>> >>>>>>>> >> >> > > > Would
>>> >>>>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get
>>> install
>>> >>>>>>>> >> >> > cloudstack-agent.
>>> >>>>>>>> >> >> > > > >
>>> >>>>>>>> >> >> > > > >
>>> >>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>>> >>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>>> >>>>>>>> >> >> > > > >
>>> >>>>>>>> >> >> > > > >> I get the same error running the command manually:
>>> >>>>>>>> >> >> > > > >>
>>> >>>>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>> >>>>>>>> >> /usr/sbin/service
>>> >>>>>>>> >> >> > > > >> cloudstack-agent status
>>> >>>>>>>> >> >> > > > >>  * could not access PID file for cloudstack-agent
>>> >>>>>>>> >> >> > > > >>
>>> >>>>>>>> >> >> > > > >>
>>> >>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>>> >>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>>> >>>>>>>> >> >> > > > >>
>>> >>>>>>>> >> >> > > > >>> agent.log looks OK to me:
>>> >>>>>>>> >> >> > > > >>>
>>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
>>> >>>>>>>>  [cloud.agent.AgentShell]
>>> >>>>>>>> >> >> (main:null)
>>> >>>>>>>> >> >> > > > Agent
>>> >>>>>>>> >> >> > > > >>> started
>>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
>>> >>>>>>>>  [cloud.agent.AgentShell]
>>> >>>>>>>> >> >> (main:null)
>>> >>>>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
>>> >>>>>>>>  [cloud.agent.AgentShell]
>>> >>>>>>>> >> >> (main:null)
>>> >>>>>>>> >> >> > > > >>> agent.properties found at
>>> >>>>>>>> >> /etc/cloudstack/agent/agent.properties
>>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
>>> >>>>>>>>  [cloud.agent.AgentShell]
>>> >>>>>>>> >> >> (main:null)
>>> >>>>>>>> >> >> > > > >>> Defaulting to using properties file for storage
>>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
>>> >>>>>>>>  [cloud.agent.AgentShell]
>>> >>>>>>>> >> >> (main:null)
>>> >>>>>>>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
>>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO
>>>  [cloud.utils.LogUtils]
>>> >>>>>>>> >> (main:null)
>>> >>>>>>>> >> >> > > log4j
>>> >>>>>>>> >> >> > > > >>> configuration found at
>>> >>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
>>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>>> >>>>>>>> (main:null)
>>> >>>>>>>> >> id
>>> >>>>>>>> >> >> > is 3
>>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>>> >>>>>>>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
>>> >>>>>>>> (main:null)
>>> >>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>>> >>>>>>>> >> >> scripts/network/domr/kvm
>>> >>>>>>>> >> >> > > > >>>
>>> >>>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was
>>> >>>>>>>> important. This
>>> >>>>>>>> >> seems
>>> >>>>>>>> >> >> to
>>> >>>>>>>> >> >> > > be
>>> >>>>>>>> >> >> > > > a
>>> >>>>>>>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
>>> >>>>>>>> >> >> > > > >>>
>>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>> >>>>>>>> cloudstack-agent
>>> >>>>>>>> >> status
>>> >>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access
>>> PID
>>> >>>>>>>> file for
>>> >>>>>>>> >> >> > > > >>> cloudstack-agent
>>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>> >>>>>>>> cloudstack-agent
>>> >>>>>>>> >> start
>>> >>>>>>>> >> >> > > > >>>
>>> >>>>>>>> >> >> > > > >>>
>>> >>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus
>>> Sorensen <
>>> >>>>>>>> >> >> > > shadowsor@gmail.com
>>> >>>>>>>> >> >> > > > >wrote:
>>> >>>>>>>> >> >> > > > >>>
>>> >>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was
>>> the
>>> >>>>>>>> agent log
>>> >>>>>>>> >> for
>>> >>>>>>>> >> >> > > some
>>> >>>>>>>> >> >> > > > >>>> reason. Is the agent started? That might be the
>>> >>>>>>>> place to
>>> >>>>>>>> >> look.
>>> >>>>>>>> >> >> > There
>>> >>>>>>>> >> >> > > > is
>>> >>>>>>>> >> >> > > > >>>> an
>>> >>>>>>>> >> >> > > > >>>> agent log for the agent and one for the setup
>>> when
>>> >>>>>>>> it adds
>>> >>>>>>>> >> the
>>> >>>>>>>> >> >> > host,
>>> >>>>>>>> >> >> > > > >>>> both
>>> >>>>>>>> >> >> > > > >>>> in /var/log
>>> >>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>>> >>>>>>>> >> >> > > > >>>> wrote:
>>> >>>>>>>> >> >> > > > >>>>
>>> >>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP address
>>> or
>>> >>>>>>>> the KVM
>>> >>>>>>>> >> host?
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>>> >>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>> > I see this for my host Global Settings
>>> parameter:
>>> >>>>>>>> >> >> > > > >>>> > host | The ip address of management server | 192.168.233.1
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>>> >>>>>>>> >> >> host=192.168.233.1
>>> >>>>>>>> >> >> > > > value.
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus
>>> Sorensen
>>> >>>>>>>> <
>>> >>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
>>> >>>>>>>> >> >> > > > >>>> > >wrote:
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>> > > The log says your mgmt server is
>>> >>>>>>>> 192.168.233.10? But you
>>> >>>>>>>> >> >> tried
>>> >>>>>>>> >> >> > > to
>>> >>>>>>>> >> >> > > > >>>> telnet
>>> >>>>>>>> >> >> > > > >>>> > to
>>> >>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change
>>> >>>>>>>> that in
>>> >>>>>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but
>>> you
>>> >>>>>>>> may want
>>> >>>>>>>> >> to
>>> >>>>>>>> >> >> > edit
>>> >>>>>>>> >> >> > > > the
>>> >>>>>>>> >> >> > > > >>>> > config
>>> >>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>>> >>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>>> >>>>>>>> >> >> > > > >>>> > >
>>> >>>>>>>> >> >> > > > >>>> > > wrote:
>>> >>>>>>>> >> >> > > > >>>> > >
>>> >>>>>>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces
>>> file
>>> >>>>>>>> looks
>>> >>>>>>>> >> like, if
>>> >>>>>>>> >> >> > > that
>>> >>>>>>>> >> >> > > > >>>> is of
>>> >>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the
>>> >>>>>>>> NAT network
>>> >>>>>>>> >> >> > VMware
>>> >>>>>>>> >> >> > > > >>>> Fusion
>>> >>>>>>>> >> >> > > > >>>> > set
>>> >>>>>>>> >> >> > > > >>>> > > > up):
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > > > auto lo
>>> >>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > > > auto eth0
>>> >>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
>>> >>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>>> >>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
>>> >>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>>> >>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
>>> >>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>>> >>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>>> >>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
>>> >>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
>>> >>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
>>> >>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
>>> >>>>>>>> >> >> > > > >>>> > > >     post-up route add default gw
>>> >>>>>>>> 192.168.233.2 metric 1
>>> >>>>>>>> >> >> > > > >>>> > > >     pre-down route del default gw
>>> >>>>>>>> 192.168.233.2
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike
>>> >>>>>>>> Tutkowski <
>>> >>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is from
>>> the
>>> >>>>>>>> MS log
>>> >>>>>>>> >> >> (below).
>>> >>>>>>>> >> >> > > > >>>> Discovery
>>> >>>>>>>> >> >> > > > >>>> > > > timed
>>> >>>>>>>> >> >> > > > >>>> > > > > out.
>>> >>>>>>>> >> >> > > > >>>> > > > >
>>> >>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My
>>> network
>>> >>>>>>>> settings
>>> >>>>>>>> >> >> > > shouldn't
>>> >>>>>>>> >> >> > > > >>>> have
>>> >>>>>>>> >> >> > > > >>>> > > > changed
>>> >>>>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
>>> >>>>>>>> >> >> > > > >>>> > > > >
>>> >>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the
>>> MS
>>> >>>>>>>> host and
>>> >>>>>>>> >> vice
>>> >>>>>>>> >> >> > > > versa.
>>> >>>>>>>> >> >> > > > >>>> > > > >
>>> >>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM
>>> on
>>> >>>>>>>> the KVM
>>> >>>>>>>> >> host
>>> >>>>>>>> >> >> > and
>>> >>>>>>>> >> >> > > > >>>> ping from
>>> >>>>>>>> >> >> > > > >>>> > > it
>>> >>>>>>>> >> >> > > > >>>> > > > > to the MS host.
>>> >>>>>>>> >> >> > > > >>>> > > > >
>>> >>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host
>>> (also
>>> >>>>>>>> running
>>> >>>>>>>> >> the
>>> >>>>>>>> >> >> CS
>>> >>>>>>>> >> >> > > MS)
>>> >>>>>>>> >> >> > > > >>>> to the
>>> >>>>>>>> >> >> > > > >>>> > VM
>>> >>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>>> >>>>>>>> >> >> > > > >>>> > > > >
>>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>>> >>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>>> :ctx-6b28dc48)
>>> >>>>>>>> Timeout,
>>> >>>>>>>> >> to
>>> >>>>>>>> >> >> > wait
>>> >>>>>>>> >> >> > > > for
>>> >>>>>>>> >> >> > > > >>>> the
>>> >>>>>>>> >> >> > > > >>>> > > host
>>> >>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is
>>> failed
>>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>>> >>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
>>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>>> :ctx-6b28dc48)
>>> >>>>>>>> Unable to
>>> >>>>>>>> >> >> find
>>> >>>>>>>> >> >> > > the
>>> >>>>>>>> >> >> > > > >>>> server
>>> >>>>>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
>>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>>> >>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>>> :ctx-6b28dc48)
>>> >>>>>>>> Could not
>>> >>>>>>>> >> >> find
>>> >>>>>>>> >> >> > > > >>>> exception:
>>> >>>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException
>>> in
>>> >>>>>>>> error code
>>> >>>>>>>> >> >> list
>>> >>>>>>>> >> >> > > for
>>> >>>>>>>> >> >> > > > >>>> > > exceptions
>>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>>> >>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>>> :ctx-6b28dc48)
>>> >>>>>>>> Exception:
>>> >>>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException:
>>> >>>>>>>> Unable to add
>>> >>>>>>>> >> >> the
>>> >>>>>>>> >> >> > > host
>>> >>>>>>>> >> >> > > > >>>> > > > > at
>>> >>>>>>>> >> >> > > > >>>> > > > >
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > >
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>>
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > >
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >>
>>> >>>>>>>> >>
>>> >>>>>>>>
>>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>>> >>>>>>>> >> >> > > > >>>> > > > >
>>> >>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from
>>> my
>>> >>>>>>>> KVM host to
>>> >>>>>>>> >> >> the
>>> >>>>>>>> >> >> > MS
>>> >>>>>>>> >> >> > > > >>>> host's
>>> >>>>>>>> >> >> > > > >>>> > > 8250
>>> >>>>>>>> >> >> > > > >>>> > > > > port:
>>> >>>>>>>> >> >> > > > >>>> > > > >
>>> >>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet
>>> 192.168.233.1
>>> >>>>>>>> 8250
>>> >>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>>> >>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>>> >>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
>>> >>>>>>>> >> >> > > > >>>> > > > >
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > > > --
>>> >>>>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
>>> >>>>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire
>>> Inc.*
>>> >>>>>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>>> >>>>>>>> >> >> > > > >>>> > > > o: 303.746.7302
>>> >>>>>>>> >> >> > > > >>>> > > > Advancing the way the world uses the
>>> >>>>>>>> >> >> > > > >>>> > > > cloud<
>>> >>>>>>>> >> http://solidfire.com/solution/overview/?video=play>
>>> >>>>>>>> >> >> > > > >>>> > > > *™*
>>> >>>>>>>> >> >> > > > >>>> > > >
>>> >>>>>>>> >> >> > > > >>>> > >
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>> > --
>>> >>>>>>>> >> >> > > > >>>> > *Mike Tutkowski*
>>> >>>>>>>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>>> >>>>>>>> >> >> > > > >>>> > o: 303.746.7302
>>> >>>>>>>> >> >> > > > >>>> > Advancing the way the world uses the
>>> >>>>>>>> >> >> > > > >>>> > cloud<
>>> >>>>>>>> http://solidfire.com/solution/overview/?video=play>
>>> >>>>>>>> >> >> > > > >>>> > *™*
>>> >>>>>>>> >> >> > > > >>>> >
>>> >>>>>>>> >> >> > > > >>>>
>>> >>>>>>>> >> >> > > > >>>
>>> >>>>>>>> >> >> > > > >>>
>>> >>>>>>>> >> >> > > > >>>
>>> >>>>>>>> >> >> > > > >>> --
>>> >>>>>>>> >> >> > > > >>> *Mike Tutkowski*
>>> >>>>>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
>>> >>>>>>>> >> >> > > > >>> o: 303.746.7302
>>> >>>>>>>> >> >> > > > >>> Advancing the way the world uses the cloud<
>>> >>>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>> >>>>>>>> >> >> > > > >>> *™*
>>> >>>>>>>> >> >> > > > >>>
>>> >>>>>>>> >> >> > > > >>
>>> >>>>>>>> >> >> > > > >>
>>> >>>>>>>> >> >> > > > >>
>>> >>>>>>>> >> >> > > > >> --
>>> >>>>>>>> >> >> > > > >> *Mike Tutkowski*
>>> >>>>>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
>>> >>>>>>>> >> >> > > > >> o: 303.746.7302
>>> >>>>>>>> >> >> > > > >> Advancing the way the world uses the cloud<
>>> >>>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>> >>>>>>>> >> >> > > > >> *™*
>>> >>>>>>>> >> >> > > > >>
>>> >>>>>>>> >> >> > > > >
>>> >>>>>>>> >> >> > > > >
>>> >>>>>>>> >> >> > > > >
>>> >>>>>>>> >> >> > > > > --
>>> >>>>>>>> >> >> > > > > *Mike Tutkowski*
>>> >>>>>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
>>> >>>>>>>> >> >> > > > > o: 303.746.7302
>>> >>>>>>>> >> >> > > > > Advancing the way the world uses the cloud<
>>> >>>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>> >>>>>>>> >> >> > > > > *™*
>>> >>>>>>>> >> >> > > > >
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > > > --
>>> >>>>>>>> >> >> > > > *Mike Tutkowski*
>>> >>>>>>>> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>>>>> >> >> > > > e: mike.tutkowski@solidfire.com
>>> >>>>>>>> >> >> > > > o: 303.746.7302
>>> >>>>>>>> >> >> > > > Advancing the way the world uses the
>>> >>>>>>>> >> >> > > > cloud<
>>> http://solidfire.com/solution/overview/?video=play
>>> >>>>>>>> >
>>> >>>>>>>> >> >> > > > *™*
>>> >>>>>>>> >> >> > > >
>>> >>>>>>>> >> >> > >
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >> > --
>>> >>>>>>>> >> >> > *Mike Tutkowski*
>>> >>>>>>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>>>>> >> >> > e: mike.tutkowski@solidfire.com
>>> >>>>>>>> >> >> > o: 303.746.7302
>>> >>>>>>>> >> >> > Advancing the way the world uses the
>>> >>>>>>>> >> >> > cloud<http://solidfire.com/solution/overview/?video=play
>>> >
>>> >>>>>>>> >> >> > *™*
>>> >>>>>>>> >> >> >
>>> >>>>>>>> >> >>
>>> >>>>>>>> >> >
>>> >>>>>>>> >> >
>>> >>>>>>>> >> >
>>> >>>>>>>> >> > --
>>> >>>>>>>> >> > *Mike Tutkowski*
>>> >>>>>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>>>>> >> > e: mike.tutkowski@solidfire.com
>>> >>>>>>>> >> > o: 303.746.7302
>>> >>>>>>>> >> > Advancing the way the world uses the
>>> >>>>>>>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >>>>>>>> >> > *™*
>>> >>>>>>>> >>
>>> >>>>>>>> >
>>> >>>>>>>> >
>>> >>>>>>>> >
>>> >>>>>>>> > --
>>> >>>>>>>> > *Mike Tutkowski*
>>> >>>>>>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>>>>> > e: mike.tutkowski@solidfire.com
>>> >>>>>>>> > o: 303.746.7302
>>> >>>>>>>> > Advancing the way the world uses the
>>> >>>>>>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >>>>>>>> > *™*
>>> >>>>>>>>
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>> --
>>> >>>>>>> *Mike Tutkowski*
>>> >>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>>>> e: mike.tutkowski@solidfire.com
>>> >>>>>>> o: 303.746.7302
>>> >>>>>>> Advancing the way the world uses the cloud<
>>> http://solidfire.com/solution/overview/?video=play>
>>> >>>>>>> *™*
>>> >>>>>>>
>>> >>>>>>
>>> >>>>>>
>>> >>>>>>
>>> >>>>>> --
>>> >>>>>> *Mike Tutkowski*
>>> >>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>>> e: mike.tutkowski@solidfire.com
>>> >>>>>> o: 303.746.7302
>>> >>>>>> Advancing the way the world uses the cloud<
>>> http://solidfire.com/solution/overview/?video=play>
>>> >>>>>> *™*
>>> >>>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>> --
>>> >>>>> *Mike Tutkowski*
>>> >>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>>> e: mike.tutkowski@solidfire.com
>>> >>>>> o: 303.746.7302
>>> >>>>> Advancing the way the world uses the cloud<
>>> http://solidfire.com/solution/overview/?video=play>
>>> >>>>> *™*
>>> >>>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> --
>>> >>>> *Mike Tutkowski*
>>> >>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>>> e: mike.tutkowski@solidfire.com
>>> >>>> o: 303.746.7302
>>> >>>> Advancing the way the world uses the cloud<
>>> http://solidfire.com/solution/overview/?video=play>
>>> >>>> *™*
>>> >>>>
>>> >>>
>>> >>>
>>> >>>
>>> >>> --
>>> >>> *Mike Tutkowski*
>>> >>> *Senior CloudStack Developer, SolidFire Inc.*
>>> >>> e: mike.tutkowski@solidfire.com
>>> >>> o: 303.746.7302
>>> >>> Advancing the way the world uses the cloud<
>>> http://solidfire.com/solution/overview/?video=play>
>>> >>> *™*
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> *Mike Tutkowski*
>>> >> *Senior CloudStack Developer, SolidFire Inc.*
>>> >> e: mike.tutkowski@solidfire.com
>>> >> o: 303.746.7302
>>> >> Advancing the way the world uses the cloud<
>>> http://solidfire.com/solution/overview/?video=play>
>>> >> *™*
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > *Mike Tutkowski*
>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> > e: mike.tutkowski@solidfire.com
>>> > o: 303.746.7302
>>> > Advancing the way the world uses the
>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> > *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
OK, so, nothing is screaming out in the logs. I did notice the following:

From setup.log:

DEBUG:root:execute:apparmor_status |grep libvirt

DEBUG:root:Failed to execute:


DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status

DEBUG:root:Failed to execute: * could not access PID file for
cloudstack-agent


This is the final line in this log file:

DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start


This is from agent.log:

2013-09-23 15:30:55,549 DEBUG [cloud.agent.AgentShell] (main:null) Checking
to see if agent.pid exists.

2013-09-23 15:30:55,655 DEBUG [cloud.utils.ProcessUtil] (main:null)
Executing: bash -c echo $PPID

2013-09-23 15:30:55,742 DEBUG [cloud.utils.ProcessUtil] (main:null)
Execution is successful.

2013-09-23 15:30:56,000 INFO  [cloud.agent.Agent] (main:null) id is

2013-09-23 15:30:56,000 DEBUG [cloud.resource.ServerResourceBase]
(main:null) Retrieving network interface: cloudbr0

2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
(main:null) Retrieving network interface: cloudbr0

2013-09-23 15:30:56,016 DEBUG [cloud.resource.ServerResourceBase]
(main:null) Retrieving network interface: null

2013-09-23 15:30:56,017 DEBUG [cloud.resource.ServerResourceBase]
(main:null) Retrieving network interface: null


The following kinds of lines are repeated for a bunch of different .sh
files. I think they often end up being found here:
/usr/share/cloudstack-common/scripts/network/domr, so this is probably not
an issue.


2013-09-23 15:30:56,111 DEBUG [utils.script.Script] (main:null) Looking for
call_firewall.sh in the classpath

2013-09-23 15:30:56,112 DEBUG [utils.script.Script] (main:null) System
resource: null

2013-09-23 15:30:56,113 DEBUG [utils.script.Script] (main:null) Classpath
resource: null

2013-09-23 15:30:56,123 DEBUG [utils.script.Script] (main:null) Looking for
call_firewall.sh
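
My reading of those lines is that the agent's Script utility checks the
classpath first and only then falls back to the packaged script directories.
Something like this sketch (plain Java for illustration; it's my guess at the
lookup order, not the actual utils.script.Script code):

import java.io.File;
import java.net.URL;

public class ScriptLookupSketch {
    // Guess at the lookup order the log suggests: classpath first, then a
    // known install location under /usr/share/cloudstack-common.
    public static String findScript(String name) {
        URL res = ScriptLookupSketch.class.getClassLoader().getResource(name);
        if (res != null) {
            return res.getPath();
        }
        File f = new File("/usr/share/cloudstack-common/scripts/network/domr", name);
        return f.exists() ? f.getAbsolutePath() : null;
    }
}

If that's right, the "System resource: null" and "Classpath resource: null"
lines are just the misses before the filesystem hit.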


Is there a log file for the Java code that I could write stuff out to and
see how far we get?
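
For what it's worth, this is the kind of thing I'm hoping to do. It's just a
sketch (the class below is made up for illustration), but since the agent
picks up /etc/cloudstack/agent/log4j-cloud.xml I assume a log4j logger
declared like this would end up in /var/log/cloudstack/agent/agent.log:

import org.apache.log4j.Logger;

// Illustrative only, not an actual class in the agent source tree.
public class IscsiAdmDebugHelper {
    private static final Logger s_logger = Logger.getLogger(IscsiAdmDebugHelper.class);

    public static void trace(String msg) {
        // With the log4j config switched from INFO to DEBUG, this should
        // show up in agent.log alongside the agent's own output.
        s_logger.debug(msg);
    }
}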


On Mon, Sep 23, 2013 at 3:17 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Thanks, Marcus
>
> I've been developing on Windows for most of my time, so a bunch of these
> Linux-type commands are new to me and I don't always interpret the output
> correctly. Getting there. :)
>
>
> On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Nope, not running. That's just your grep process. It would look like:
>>
>> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
>>
>> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
>>
>> Your agent log should tell you why it failed to start if you set it in
>> debug and try to start... or maybe cloudstack-agent.out if it doesn't
>> get far enough (say it's missing a class or something and can't
>> start).
>>
>> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > Looks like it's running, though:
>> >
>> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
>> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep --color=auto jsvc
>> >
>> >
>> >
>> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
>> > mike.tutkowski@solidfire.com> wrote:
>> >
>> >> Hey Marcus,
>> >>
>> >> Maybe you could give me a better idea of what the "flow" is when
>> adding a
>> >> KVM host.
>> >>
>> >> It looks like we SSH into the potential KVM host and execute a startup
>> >> script (giving it necessary info about the cloud and the management
>> server
>> >> it should talk to).
>> >>
>> >> After this, is the Java VM started?
>> >>
>> >> After a reboot, I assume the JVM is started automatically?
>> >>
>> >> How do you debug your KVM-side Java code?
>> >>
>> >> Been looking through the logs and nothing obvious sticks out. I will
>> have
>> >> another look.
>> >>
>> >> Thanks
>> >>
>> >>
>> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
>> >> mike.tutkowski@solidfire.com> wrote:
>> >>
>> >>> Hey Marcus,
>> >>>
>> >>> I've been investigating my issue with not being able to add a KVM
>> host to
>> >>> CS.
>> >>>
>> >>> For what it's worth, this comes back successful:
>> >>>
>> >>> SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent " +
>> >>> parameters, 3);
>> >>>
>> >>> This is what the command looks like:
>> >>>
>> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
>> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
>> --prvNic=cloudbr0
>> >>> --guestNic=cloudbr0
>> >>>
>> >>> The problem is this method in LibvirtServerDiscoverer never finds a
>> >>> matching host in the DB:
>> >>>
>> >>> waitForHostConnect(long dcId, long podId, long clusterId, String guid)
>> >>>
>> >>> I assume once the KVM host is up and running that it's supposed to
>> call
>> >>> into the CS MS so the DB can be updated as such?
>> >>>
>> >>> If so, the problem must be on the KVM side.
>> >>>
>> >>> I did run this again (from the KVM host) to see if the connection was
>> in
>> >>> place:
>> >>>
>> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>> >>>
>> >>> Trying 192.168.233.1...
>> >>>
>> >>> Connected to 192.168.233.1.
>> >>>
>> >>> Escape character is '^]'.
>> >>> So that looks good.
>> >>>
>> >>> I turned on more info in the debug log, but nothing obvious jumps out
>> as
>> >>> of yet.
>> >>>
>> >>> If you have any thoughts on this, please shoot them my way. :)
>> >>>
>> >>> Thanks!
>> >>>
>> >>>
>> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
>> >>> mike.tutkowski@solidfire.com> wrote:
>> >>>
>> >>>> First step is for me to get this working for KVM, though. :)
>> >>>>
>> >>>> Once I do that, I can perhaps make modifications to the storage
>> >>>> framework and hypervisor plug-ins to refactor the logic and such.
>> >>>>
>> >>>>
>> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
>> >>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>
>> >>>>> Same would work for KVM.
>> >>>>>
>> >>>>> If CreateCommand and DestroyCommand were called at the appropriate
>> >>>>> times by the storage framework, I could move my connect and
>> disconnect
>> >>>>> logic out of the attach/detach logic.
>> >>>>>
>> >>>>>
>> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
>> >>>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>>
>> >>>>>> Conversely, if the storage framework called the DestroyCommand for
>> >>>>>> managed storage after the DetachCommand, then I could have had my
>> remove
>> >>>>>> SR/datastore logic placed in the DestroyCommand handling rather
>> than in the
>> >>>>>> DetachCommand handling.
>> >>>>>>
>> >>>>>>
>> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>> >>>>>> mike.tutkowski@solidfire.com> wrote:
>> >>>>>>
>> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
>> >>>>>>>
>> >>>>>>> The initial approach that was discussed during 4.2 was for me to
>> >>>>>>> modify the attach/detach logic only in the XenServer and VMware
>> hypervisor
>> >>>>>>> plug-ins.
>> >>>>>>>
>> >>>>>>> Now that I think about it more, though, I kind of would have
>> liked to
>> >>>>>>> have the storage framework send a CreateCommand to the hypervisor
>> before
>> >>>>>>> sending the AttachCommand if the storage in question was managed.
>> >>>>>>>
>> >>>>>>> Then I could have created my SR/datastore in the CreateCommand and
>> >>>>>>> the AttachCommand would have had the SR/datastore that it was
>> always
>> >>>>>>> expecting (and I wouldn't have had to create the SR/datastore in
>> the
>> >>>>>>> AttachCommand).
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
>> >>>>>>> shadowsor@gmail.com> wrote:
>> >>>>>>>
>> >>>>>>>> Yeah, I think it probably is as well, but I figured you'd be in a
>> >>>>>>>> better position to tell.
>> >>>>>>>>
>> >>>>>>>> I see that copyAsync is unsupported in your current 4.2 driver,
>> does
>> >>>>>>>> that mean that there's no template support? Or is it some other
>> call
>> >>>>>>>> that does templating now? I'm still getting up to speed on all
>> of the
>> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
>> >>>>>>>> LibvirtComputingResource, since that's the only place
>> >>>>>>>> createPhysicalDisk is called, and it occurred to me that
>> >>>>>>>> CreateCommand
>> >>>>>>>> might be skipped altogether when utilizing storage plugins.
>> >>>>>>>>
>> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>> >>>>>>>> <mi...@solidfire.com> wrote:
>> >>>>>>>> > That's an interesting comment, Marcus.
>> >>>>>>>> >
>> >>>>>>>> > It was my intent that it should work with any CloudStack
>> "managed"
>> >>>>>>>> storage
>> >>>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote
>> the
>> >>>>>>>> code so
>> >>>>>>>> > CHAP didn't have to be used.
>> >>>>>>>> >
>> >>>>>>>> > As I'm doing my testing, I can try to think about whether it is
>> >>>>>>>> generic
>> >>>>>>>> > enough to keep those names or not.
>> >>>>>>>> >
>> >>>>>>>> > My expectation is that it is generic enough.
>> >>>>>>>> >
>> >>>>>>>> >
>> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
>> >>>>>>>> shadowsor@gmail.com>wrote:
>> >>>>>>>> >
>> >>>>>>>> >> I added a comment to your diff. In general I think it looks
>> good,
>> >>>>>>>> >> though I obviously can't vouch for whether or not it will
>> work.
>> >>>>>>>> One
>> >>>>>>>> >> thing I do have reservations about is the adaptor/pool
>> naming. If
>> >>>>>>>> you
>> >>>>>>>> >> think the code is generic enough that it will work for anyone
>> who
>> >>>>>>>> does
>> >>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's
>> >>>>>>>> anything
>> >>>>>>>> >> about it that's specific to YOUR iscsi target or how it likes
>> to
>> >>>>>>>> be
>> >>>>>>>> >> treated then I'd say that they should be named something less
>> >>>>>>>> generic
>> >>>>>>>> >> than iScsiAdmStorage.
>> >>>>>>>> >>
>> >>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>> >>>>>>>> >> <mi...@solidfire.com> wrote:
>> >>>>>>>> >> > Great - thanks!
>> >>>>>>>> >> >
>> >>>>>>>> >> > Just to give you an overview of what my code does (for when
>> you
>> >>>>>>>> get a
>> >>>>>>>> >> > chance to review it):
>> >>>>>>>> >> >
>> >>>>>>>> >> > SolidFireHostListener is registered in
>> >>>>>>>> SolidfirePrimaryDataStoreProvider.
>> >>>>>>>> >> > Its hostConnect method is invoked when a host connects with
>> the
>> >>>>>>>> CS MS. If
>> >>>>>>>> >> > the host is running KVM, the listener sends a
>> >>>>>>>> ModifyStoragePoolCommand to
>> >>>>>>>> >> > the host. This logic was based off of DefaultHostListener.
>> >>>>>>>> >> >
>> >>>>>>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It
>> >>>>>>>> invokes
>> >>>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
>> >>>>>>>> KVMStoragePoolManager
>> >>>>>>>> >> > asks for an adaptor and finds my new one:
>> >>>>>>>> iScsiAdmStorageAdaptor (which
>> >>>>>>>> >> was
>> >>>>>>>> >> > registered in the constructor for KVMStoragePoolManager
>> under
>> >>>>>>>> the key of
>> >>>>>>>> >> > StoragePoolType.Iscsi.toString()).
>> >>>>>>>> >> >
>> >>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an
>> instance
>> >>>>>>>> of
>> >>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the
>> pointer
>> >>>>>>>> to the
>> >>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the UUID
>> of
>> >>>>>>>> the storage
>> >>>>>>>> >> > pool.
>> >>>>>>>> >> >
>> >>>>>>>> >> > When a volume is attached, createPhysicalDisk is invoked for
>> >>>>>>>> managed
>> >>>>>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
>> >>>>>>>> iscsiadm to
>> >>>>>>>> >> > establish the iSCSI connection to the volume on the SAN and
>> a
>> >>>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach logic
>> that
>> >>>>>>>> follows.
>> >>>>>>>> >> >
>> >>>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked with
>> the
>> >>>>>>>> IQN of the
>> >>>>>>>> >> > volume if the storage pool in question is managed storage.
>> >>>>>>>> Otherwise, the
>> >>>>>>>> >> > normal vol.getPath() is used.
>> >>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>> >>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used in the
>> >>>>>>>> detach logic.
>> >>>>>>>> >> >
>> >>>>>>>> >> > Once the volume has been detached,
>> >>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
>> >>>>>>>> >> > is invoked if the storage pool is managed.
>> deletePhysicalDisk
>> >>>>>>>> removes the
>> >>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
>> >>>>>>>> >> >
>> >>>>>>>> >> >
>> >>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>> >>>>>>>> shadowsor@gmail.com
>> >>>>>>>> >> >wrote:
>> >>>>>>>> >> >
>> >>>>>>>> >> >> Its the log4j properties file in /etc/cloudstack/agent
>> change
>> >>>>>>>> all INFO
>> >>>>>>>> >> to
>> >>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can
>> tail
>> >>>>>>>> the log
>> >>>>>>>> >> when
>> >>>>>>>> >> >> you try to start the service, or maybe it will spit
>> something
>> >>>>>>>> out into
>> >>>>>>>> >> one
>> >>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
>> >>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>> >>>>>>>> mike.tutkowski@solidfire.com
>> >>>>>>>> >> >
>> >>>>>>>> >> >> wrote:
>> >>>>>>>> >> >>
>> >>>>>>>> >> >> > This is how I've been trying to query for the status of
>> the
>> >>>>>>>> service (I
>> >>>>>>>> >> >> > assume it could be started this way, as well, by changing
>> >>>>>>>> "status" to
>> >>>>>>>> >> >> > "start" or "restart"?):
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>> >>>>>>>> /usr/sbin/service
>> >>>>>>>> >> >> > cloudstack-agent status
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >> > I get this back:
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >> > Failed to execute: * could not access PID file for
>> >>>>>>>> cloudstack-agent
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >> > I've made a bunch of code changes recently, though, so I
>> >>>>>>>> think I'm
>> >>>>>>>> >> going
>> >>>>>>>> >> >> to
>> >>>>>>>> >> >> > rebuild and redeploy everything.
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >> > The debug info sounds helpful. Where can I set
>> enable.debug?
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >> > Thanks, Marcus!
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>> >>>>>>>> shadowsor@gmail.com
>> >>>>>>>> >> >> > >wrote:
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >> > > OK, will check it out in the next few days. As
>> mentioned,
>> >>>>>>>> you can
>> >>>>>>>> >> set
>> >>>>>>>> >> >> up
>> >>>>>>>> >> >> > > your Ubuntu vm as the management server as well if all
>> >>>>>>>> else fails.
>> >>>>>>>> >>  If
>> >>>>>>>> >> >> > you
>> >>>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM host,
>> then
>> >>>>>>>> you need
>> >>>>>>>> >> to
>> >>>>>>>> >> >> > > enable.debug on the agent. It won't run without
>> >>>>>>>> complaining loudly
>> >>>>>>>> >> if
>> >>>>>>>> >> >> it
>> >>>>>>>> >> >> > > can't get to the mgmt server, and I didn't see that in
>> >>>>>>>> your agent
>> >>>>>>>> >> log,
>> >>>>>>>> >> >> so
>> >>>>>>>> >> >> > > perhaps its not running. I assume you know how to
>> >>>>>>>> stop/start the
>> >>>>>>>> >> agent
>> >>>>>>>> >> >> on
>> >>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
>> >>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>> >>>>>>>> >> >> mike.tutkowski@solidfire.com>
>> >>>>>>>> >> >> > > wrote:
>> >>>>>>>> >> >> > >
>> >>>>>>>> >> >> > > > Hey Marcus,
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > > > I haven't yet been able to test my new code, but I
>> >>>>>>>> thought you
>> >>>>>>>> >> would
>> >>>>>>>> >> >> > be a
>> >>>>>>>> >> >> > > > good person to ask to review it:
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > >
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >>
>> >>>>>>>> >>
>> >>>>>>>>
>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > > > All it is supposed to do is attach and detach a data
>> >>>>>>>> disk (that
>> >>>>>>>> >> has
>> >>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data
>> >>>>>>>> disk
>> >>>>>>>> >> happens to
>> >>>>>>>> >> >> > be
>> >>>>>>>> >> >> > > > from SolidFire-backed storage - where we have a 1:1
>> >>>>>>>> mapping
>> >>>>>>>> >> between a
>> >>>>>>>> >> >> > > > CloudStack volume and a data disk.
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > > > There is no support for hypervisor snapshots or stuff
>> >>>>>>>> like that
>> >>>>>>>> >> >> > (likely a
>> >>>>>>>> >> >> > > > future release)...just attaching and detaching a data
>> >>>>>>>> disk in 4.3.
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > > > Thanks!
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>> >>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
>> >>>>>>>> cloudstack-agent
>> >>>>>>>> >> >> first.
>> >>>>>>>> >> >> > > > Would
>> >>>>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get
>> install
>> >>>>>>>> >> >> > cloudstack-agent.
>> >>>>>>>> >> >> > > > >
>> >>>>>>>> >> >> > > > >
>> >>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>> >>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>> >>>>>>>> >> >> > > > >
>> >>>>>>>> >> >> > > > >> I get the same error running the command manually:
>> >>>>>>>> >> >> > > > >>
>> >>>>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>> >>>>>>>> >> /usr/sbin/service
>> >>>>>>>> >> >> > > > >> cloudstack-agent status
>> >>>>>>>> >> >> > > > >>  * could not access PID file for cloudstack-agent
>> >>>>>>>> >> >> > > > >>
>> >>>>>>>> >> >> > > > >>
>> >>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>> >>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>> >>>>>>>> >> >> > > > >>
>> >>>>>>>> >> >> > > > >>> agent.log looks OK to me:
>> >>>>>>>> >> >> > > > >>>
>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
>> >>>>>>>>  [cloud.agent.AgentShell]
>> >>>>>>>> >> >> (main:null)
>> >>>>>>>> >> >> > > > Agent
>> >>>>>>>> >> >> > > > >>> started
>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
>> >>>>>>>>  [cloud.agent.AgentShell]
>> >>>>>>>> >> >> (main:null)
>> >>>>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
>> >>>>>>>>  [cloud.agent.AgentShell]
>> >>>>>>>> >> >> (main:null)
>> >>>>>>>> >> >> > > > >>> agent.properties found at
>> >>>>>>>> >> /etc/cloudstack/agent/agent.properties
>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
>> >>>>>>>>  [cloud.agent.AgentShell]
>> >>>>>>>> >> >> (main:null)
>> >>>>>>>> >> >> > > > >>> Defaulting to using properties file for storage
>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
>> >>>>>>>>  [cloud.agent.AgentShell]
>> >>>>>>>> >> >> (main:null)
>> >>>>>>>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO
>>  [cloud.utils.LogUtils]
>> >>>>>>>> >> (main:null)
>> >>>>>>>> >> >> > > log4j
>> >>>>>>>> >> >> > > > >>> configuration found at
>> >>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>> >>>>>>>> (main:null)
>> >>>>>>>> >> id
>> >>>>>>>> >> >> > is 3
>> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>> >>>>>>>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
>> >>>>>>>> (main:null)
>> >>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>> >>>>>>>> >> >> scripts/network/domr/kvm
>> >>>>>>>> >> >> > > > >>>
>> >>>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was
>> >>>>>>>> important. This
>> >>>>>>>> >> seems
>> >>>>>>>> >> >> to
>> >>>>>>>> >> >> > > be
>> >>>>>>>> >> >> > > > a
>> >>>>>>>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
>> >>>>>>>> >> >> > > > >>>
>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>> >>>>>>>> cloudstack-agent
>> >>>>>>>> >> status
>> >>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access
>> PID
>> >>>>>>>> file for
>> >>>>>>>> >> >> > > > >>> cloudstack-agent
>> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>> >>>>>>>> cloudstack-agent
>> >>>>>>>> >> start
>> >>>>>>>> >> >> > > > >>>
>> >>>>>>>> >> >> > > > >>>
>> >>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus
>> Sorensen <
>> >>>>>>>> >> >> > > shadowsor@gmail.com
>> >>>>>>>> >> >> > > > >wrote:
>> >>>>>>>> >> >> > > > >>>
>> >>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was
>> the
>> >>>>>>>> agent log
>> >>>>>>>> >> for
>> >>>>>>>> >> >> > > some
>> >>>>>>>> >> >> > > > >>>> reason. Is the agent started? That might be the
>> >>>>>>>> place to
>> >>>>>>>> >> look.
>> >>>>>>>> >> >> > There
>> >>>>>>>> >> >> > > > is
>> >>>>>>>> >> >> > > > >>>> an
>> >>>>>>>> >> >> > > > >>>> agent log for the agent and one for the setup
>> when
>> >>>>>>>> it adds
>> >>>>>>>> >> the
>> >>>>>>>> >> >> > host,
>> >>>>>>>> >> >> > > > >>>> both
>> >>>>>>>> >> >> > > > >>>> in /var/log
>> >>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>> >>>>>>>> >> >> > > > >>>> wrote:
>> >>>>>>>> >> >> > > > >>>>
>> >>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP address
>> or
>> >>>>>>>> the KVM
>> >>>>>>>> >> host?
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>> >>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>> > I see this for my host Global Settings
>> parameter:
>> >>>>>>>> >> >> > > > >>>> > host | The ip address of management server | 192.168.233.1
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>> >>>>>>>> >> >> host=192.168.233.1
>> >>>>>>>> >> >> > > > value.
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus
>> Sorensen
>> >>>>>>>> <
>> >>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
>> >>>>>>>> >> >> > > > >>>> > >wrote:
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>> > > The log says your mgmt server is
>> >>>>>>>> 192.168.233.10? But you
>> >>>>>>>> >> >> tried
>> >>>>>>>> >> >> > > to
>> >>>>>>>> >> >> > > > >>>> telnet
>> >>>>>>>> >> >> > > > >>>> > to
>> >>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change
>> >>>>>>>> that in
>> >>>>>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but
>> you
>> >>>>>>>> may want
>> >>>>>>>> >> to
>> >>>>>>>> >> >> > edit
>> >>>>>>>> >> >> > > > the
>> >>>>>>>> >> >> > > > >>>> > config
>> >>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>> >>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>> >>>>>>>> >> >> > > > >>>> > >
>> >>>>>>>> >> >> > > > >>>> > > wrote:
>> >>>>>>>> >> >> > > > >>>> > >
>> >>>>>>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces
>> file
>> >>>>>>>> looks
>> >>>>>>>> >> like, if
>> >>>>>>>> >> >> > > that
>> >>>>>>>> >> >> > > > >>>> is of
>> >>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the
>> >>>>>>>> NAT network
>> >>>>>>>> >> >> > VMware
>> >>>>>>>> >> >> > > > >>>> Fusion
>> >>>>>>>> >> >> > > > >>>> > set
>> >>>>>>>> >> >> > > > >>>> > > > up):
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > > > auto lo
>> >>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > > > auto eth0
>> >>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
>> >>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>> >>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
>> >>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>> >>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
>> >>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>> >>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>> >>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
>> >>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
>> >>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
>> >>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
>> >>>>>>>> >> >> > > > >>>> > > >     post-up route add default gw
>> >>>>>>>> 192.168.233.2 metric 1
>> >>>>>>>> >> >> > > > >>>> > > >     pre-down route del default gw
>> >>>>>>>> 192.168.233.2
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike
>> >>>>>>>> Tutkowski <
>> >>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is from
>> the
>> >>>>>>>> MS log
>> >>>>>>>> >> >> (below).
>> >>>>>>>> >> >> > > > >>>> Discovery
>> >>>>>>>> >> >> > > > >>>> > > > timed
>> >>>>>>>> >> >> > > > >>>> > > > > out.
>> >>>>>>>> >> >> > > > >>>> > > > >
>> >>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My
>> network
>> >>>>>>>> settings
>> >>>>>>>> >> >> > > shouldn't
>> >>>>>>>> >> >> > > > >>>> have
>> >>>>>>>> >> >> > > > >>>> > > > changed
>> >>>>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
>> >>>>>>>> >> >> > > > >>>> > > > >
>> >>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the
>> MS
>> >>>>>>>> host and
>> >>>>>>>> >> vice
>> >>>>>>>> >> >> > > > versa.
>> >>>>>>>> >> >> > > > >>>> > > > >
>> >>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM
>> on
>> >>>>>>>> the KVM
>> >>>>>>>> >> host
>> >>>>>>>> >> >> > and
>> >>>>>>>> >> >> > > > >>>> ping from
>> >>>>>>>> >> >> > > > >>>> > > it
>> >>>>>>>> >> >> > > > >>>> > > > > to the MS host.
>> >>>>>>>> >> >> > > > >>>> > > > >
>> >>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host
>> (also
>> >>>>>>>> running
>> >>>>>>>> >> the
>> >>>>>>>> >> >> CS
>> >>>>>>>> >> >> > > MS)
>> >>>>>>>> >> >> > > > >>>> to the
>> >>>>>>>> >> >> > > > >>>> > VM
>> >>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>> >>>>>>>> >> >> > > > >>>> > > > >
>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>> >>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>> :ctx-6b28dc48)
>> >>>>>>>> Timeout,
>> >>>>>>>> >> to
>> >>>>>>>> >> >> > wait
>> >>>>>>>> >> >> > > > for
>> >>>>>>>> >> >> > > > >>>> the
>> >>>>>>>> >> >> > > > >>>> > > host
>> >>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is
>> failed
>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>> >>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>> :ctx-6b28dc48)
>> >>>>>>>> Unable to
>> >>>>>>>> >> >> find
>> >>>>>>>> >> >> > > the
>> >>>>>>>> >> >> > > > >>>> server
>> >>>>>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>> >>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>> :ctx-6b28dc48)
>> >>>>>>>> Could not
>> >>>>>>>> >> >> find
>> >>>>>>>> >> >> > > > >>>> exception:
>> >>>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException
>> in
>> >>>>>>>> error code
>> >>>>>>>> >> >> list
>> >>>>>>>> >> >> > > for
>> >>>>>>>> >> >> > > > >>>> > > exceptions
>> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>> >>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
>> :ctx-6b28dc48)
>> >>>>>>>> Exception:
>> >>>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException:
>> >>>>>>>> Unable to add
>> >>>>>>>> >> >> the
>> >>>>>>>> >> >> > > host
>> >>>>>>>> >> >> > > > >>>> > > > > at
>> >>>>>>>> >> >> > > > >>>> > > > >
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > >
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>>
>> >>>>>>>> >> >> > > >
>> >>>>>>>> >> >> > >
>> >>>>>>>> >> >> >
>> >>>>>>>> >> >>
>> >>>>>>>> >>
>> >>>>>>>>
>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>> >>>>>>>> >> >> > > > >>>> > > > >
>> >>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from
>> my
>> >>>>>>>> KVM host to
>> >>>>>>>> >> >> the
>> >>>>>>>> >> >> > MS
>> >>>>>>>> >> >> > > > >>>> host's
>> >>>>>>>> >> >> > > > >>>> > > 8250
>> >>>>>>>> >> >> > > > >>>> > > > > port:
>> >>>>>>>> >> >> > > > >>>> > > > >
>> >>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet
>> 192.168.233.1
>> >>>>>>>> 8250
>> >>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>> >>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>> >>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
>> >>>>>>>> >> >> > > > >>>> > > > >
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > > > --
>> >>>>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
>> >>>>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire
>> Inc.*
>> >>>>>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>> >>>>>>>> >> >> > > > >>>> > > > o: 303.746.7302
>> >>>>>>>> >> >> > > > >>>> > > > Advancing the way the world uses the
>> >>>>>>>> >> >> > > > >>>> > > > cloud<
>> >>>>>>>> >> http://solidfire.com/solution/overview/?video=play>
>> >>>>>>>> >> >> > > > >>>> > > > *™*
>> >>>>>>>> >> >> > > > >>>> > > >
>> >>>>>>>> >> >> > > > >>>> > >
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>> > --
>> >>>>>>>> >> >> > > > >>>> > *Mike Tutkowski*
>> >>>>>>>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>> >>>>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>> >>>>>>>> >> >> > > > >>>> > o: 303.746.7302
>> >>>>>>>> >> >> > > > >>>> > Advancing the way the world uses the
>> >>>>>>>> >> >> > > > >>>> > cloud<
>> >>>>>>>> http://solidfire.com/solution/overview/?video=play>
>> >>>>>>>> >> >> > > > >>>> > *™*
>> >>>>>>>> >> >> > > > >>>> >
>> >>>>>>>> >> >> > > > >>>>
>> >>>>>>>> >> >> > > > >>>
>> >>>>>>>> >> >> > > > >>>
>> >>>>>>>> >> >> > > > >>>
>> >>>>>>>> >> >> > > > >>> --
>> >>>>>>>> >> >> > > > >>> *Mike Tutkowski*
>> >>>>>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>> >>>>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
>> >>>>>>>> >> >> > > > >>> o: 303.746.7302
>> >>>>>>>> >> >> > > > >>> Advancing the way the world uses the cloud<
>> >>>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >>>>>>>> >> >> > > > >>> *™*
>> >>>>>>>> >> >> > > > >>>
>> >>>>>>>> >> >> > > > >>
>> >>>>>>>> >> >> > > > >>
>> >>>>>>>> >> >> > > > >>
>> >>>>>>>> >> >> > > > >> --
>> >>>>>>>> >> >> > > > >> *Mike Tutkowski*
>> >>>>>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>> >>>>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
>> >>>>>>>> >> >> > > > >> o: 303.746.7302
>> >>>>>>>> >> >> > > > >> Advancing the way the world uses the cloud<
>> >>>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >>>>>>>> >> >> > > > >> *™*
>> >>>>>>>> >> >> > > > >>
>> >>>>>>>> >> >> > > > >
>> >>>>>>>> >> >> > > > >
>> >>>>>>>> >> >> > > > >
>> >>>>>>>> >> >> > > > > --
>> >>>>>>>> >> >> > > > > *Mike Tutkowski*
>> >>>>>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >>>>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
>> >>>>>>>> >> >> > > > > o: 303.746.7302



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Thanks, Marcus

I've been developing on Windows for most of my career, so a bunch of these
Linux-type commands are new to me and I don't always interpret the output
correctly. Getting there. :)


On Mon, Sep 23, 2013 at 2:37 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Nope, not running. That's just your grep process. It would look like:
>
> root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
>
> /usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.
>
> Your agent log should tell you why it failed to start if you set it in
> debug and try to start... or maybe cloudstack-agent.out if it doesn't
> get far enough (say it's missing a class or something and can't
> start).
>
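
A quick aside on that ps output: piping ps through grep always matches the grep
command itself, which is easy to misread as the agent being up. A minimal sketch
of two less ambiguous checks, assuming the agent runs under jsvc as described above:

    # the bracket trick: the pattern [j]svc matches "jsvc" but not this grep's own command line
    ps -ef | grep '[j]svc'

    # or ask pgrep directly for any process whose command line mentions jsvc
    pgrep -lf jsvc
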
> On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > Looks like it's running, though:
> >
> > mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> > 1000      7097  7013  0 14:32 pts/1    00:00:00 grep --color=auto jsvc
> >
> >
> >
> > On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> >> Hey Marcus,
> >>
> >> Maybe you could give me a better idea of what the "flow" is when adding
> a
> >> KVM host.
> >>
> >> It looks like we SSH into the potential KVM host and execute a startup
> >> script (giving it necessary info about the cloud and the management
> server
> >> it should talk to).
> >>
> >> After this, is the Java VM started?
> >>
> >> After a reboot, I assume the JVM is started automatically?
> >>
> >> How do you debug your KVM-side Java code?
> >>
> >> Been looking through the logs and nothing obvious sticks out. I will
> have
> >> another look.
> >>
> >> Thanks
> >>
> >>
> >> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
> >> mike.tutkowski@solidfire.com> wrote:
> >>
> >>> Hey Marcus,
> >>>
> >>> I've been investigating my issue with not being able to add a KVM host
> to
> >>> CS.
> >>>
> >>> For what it's worth, this comes back successful:
> >>>
> >>> SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent " +
> >>> parameters, 3);
> >>>
> >>> This is what the command looks like:
> >>>
> >>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
> >>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0
> --prvNic=cloudbr0
> >>> --guestNic=cloudbr0
> >>>
> >>> The problem is this method in LibvirtServerDiscoverer never finds a
> >>> matching host in the DB:
> >>>
> >>> waitForHostConnect(long dcId, long podId, long clusterId, String guid)
> >>>
> >>> I assume once the KVM host is up and running that it's supposed to call
> >>> into the CS MS so the DB can be updated as such?
> >>>
> >>> If so, the problem must be on the KVM side.
> >>>
> >>> I did run this again (from the KVM host) to see if the connection was
> in
> >>> place:
> >>>
> >>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> >>>
> >>> Trying 192.168.233.1...
> >>>
> >>> Connected to 192.168.233.1.
> >>>
> >>> Escape character is '^]'.
> >>> So that looks good.
> >>>
> >>> I turned on more info in the debug log, but nothing obvious jumps out
> as
> >>> of yet.
> >>>
> >>> If you have any thoughts on this, please shoot them my way. :)
> >>>
> >>> Thanks!
> >>>
> >>>
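
As an illustration of the check being described above (a sketch only, not an exact
transcript): assuming the agent layout shown elsewhere in this thread, you can confirm
on the KVM host what the agent thinks it should connect to, and whether it actually
holds a connection to port 8250. The host= entry is quoted earlier in this thread; the
guid= key is an assumption here.

    # what management server and guid did cloudstack-setup-agent write out?
    grep -E '^(host|guid)=' /etc/cloudstack/agent/agent.properties

    # is there an established TCP connection from the agent to port 8250?
    netstat -tnp 2>/dev/null | grep 8250

If nothing holds a connection on 8250, the management server's waitForHostConnect()
times out exactly as described above.
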
> >>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
> >>> mike.tutkowski@solidfire.com> wrote:
> >>>
> >>>> First step is for me to get this working for KVM, though. :)
> >>>>
> >>>> Once I do that, I can perhaps make modifications to the storage
> >>>> framework and hypervisor plug-ins to refactor the logic and such.
> >>>>
> >>>>
> >>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
> >>>> mike.tutkowski@solidfire.com> wrote:
> >>>>
> >>>>> Same would work for KVM.
> >>>>>
> >>>>> If CreateCommand and DestroyCommand were called at the appropriate
> >>>>> times by the storage framework, I could move my connect and
> disconnect
> >>>>> logic out of the attach/detach logic.
> >>>>>
> >>>>>
> >>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
> >>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>
> >>>>>> Conversely, if the storage framework called the DestroyCommand for
> >>>>>> managed storage after the DetachCommand, then I could have had my
> remove
> >>>>>> SR/datastore logic placed in the DestroyCommand handling rather
> than in the
> >>>>>> DetachCommand handling.
> >>>>>>
> >>>>>>
> >>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
> >>>>>> mike.tutkowski@solidfire.com> wrote:
> >>>>>>
> >>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
> >>>>>>>
> >>>>>>> The initial approach that was discussed during 4.2 was for me to
> >>>>>>> modify the attach/detach logic only in the XenServer and VMware
> hypervisor
> >>>>>>> plug-ins.
> >>>>>>>
> >>>>>>> Now that I think about it more, though, I kind of would have liked
> to
> >>>>>>> have the storage framework send a CreateCommand to the hypervisor
> before
> >>>>>>> sending the AttachCommand if the storage in question was managed.
> >>>>>>>
> >>>>>>> Then I could have created my SR/datastore in the CreateCommand and
> >>>>>>> the AttachCommand would have had the SR/datastore that it was
> always
> >>>>>>> expecting (and I wouldn't have had to create the SR/datastore in
> the
> >>>>>>> AttachCommand).
> >>>>>>>
> >>>>>>>
> >>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
> >>>>>>> shadowsor@gmail.com> wrote:
> >>>>>>>
> >>>>>>>> Yeah, I think it probably is as well, but I figured you'd be in a
> >>>>>>>> better position to tell.
> >>>>>>>>
> >>>>>>>> I see that copyAsync is unsupported in your current 4.2 driver,
> does
> >>>>>>>> that mean that there's no template support? Or is it some other
> call
> >>>>>>>> that does templating now? I'm still getting up to speed on all of
> the
> >>>>>>>> 4.2 changes. I was just looking at CreateCommand in
> >>>>>>>> LibvirtComputingResource, since that's the only place
> >>>>>>>> createPhysicalDisk is called, and it occurred to me that
> >>>>>>>> CreateCommand
> >>>>>>>> might be skipped altogether when utilizing storage plugins.
> >>>>>>>>
> >>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
> >>>>>>>> <mi...@solidfire.com> wrote:
> >>>>>>>> > That's an interesting comment, Marcus.
> >>>>>>>> >
> >>>>>>>> > It was my intent that it should work with any CloudStack
> "managed"
> >>>>>>>> storage
> >>>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote
> the
> >>>>>>>> code so
> >>>>>>>> > CHAP didn't have to be used.
> >>>>>>>> >
> >>>>>>>> > As I'm doing my testing, I can try to think about whether it is
> >>>>>>>> generic
> >>>>>>>> > enough to keep those names or not.
> >>>>>>>> >
> >>>>>>>> > My expectation is that it is generic enough.
> >>>>>>>> >
> >>>>>>>> >
> >>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
> >>>>>>>> shadowsor@gmail.com>wrote:
> >>>>>>>> >
> >>>>>>>> >> I added a comment to your diff. In general I think it looks
> good,
> >>>>>>>> >> though I obviously can't vouch for whether or not it will work.
> >>>>>>>> One
> >>>>>>>> >> thing I do have reservations about is the adaptor/pool naming.
> If
> >>>>>>>> you
> >>>>>>>> >> think the code is generic enough that it will work for anyone
> who
> >>>>>>>> does
> >>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's
> >>>>>>>> anything
> >>>>>>>> >> about it that's specific to YOUR iscsi target or how it likes
> to
> >>>>>>>> be
> >>>>>>>> >> treated then I'd say that they should be named something less
> >>>>>>>> generic
> >>>>>>>> >> than iScsiAdmStorage.
> >>>>>>>> >>
> >>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
> >>>>>>>> >> <mi...@solidfire.com> wrote:
> >>>>>>>> >> > Great - thanks!
> >>>>>>>> >> >
> >>>>>>>> >> > Just to give you an overview of what my code does (for when
> you
> >>>>>>>> get a
> >>>>>>>> >> > chance to review it):
> >>>>>>>> >> >
> >>>>>>>> >> > SolidFireHostListener is registered in
> >>>>>>>> SolidfirePrimaryDataStoreProvider.
> >>>>>>>> >> > Its hostConnect method is invoked when a host connects with
> the
> >>>>>>>> CS MS. If
> >>>>>>>> >> > the host is running KVM, the listener sends a
> >>>>>>>> ModifyStoragePoolCommand to
> >>>>>>>> >> > the host. This logic was based off of DefaultHostListener.
> >>>>>>>> >> >
> >>>>>>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It
> >>>>>>>> invokes
> >>>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
> >>>>>>>> KVMStoragePoolManager
> >>>>>>>> >> > asks for an adaptor and finds my new one:
> >>>>>>>> iScsiAdmStorageAdaptor (which
> >>>>>>>> >> was
> >>>>>>>> >> > registered in the constructor for KVMStoragePoolManager under
> >>>>>>>> the key of
> >>>>>>>> >> > StoragePoolType.Iscsi.toString()).
> >>>>>>>> >> >
> >>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an
> instance
> >>>>>>>> of
> >>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the
> pointer
> >>>>>>>> to the
> >>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the UUID of
> >>>>>>>> the storage
> >>>>>>>> >> > pool.
> >>>>>>>> >> >
> >>>>>>>> >> > When a volume is attached, createPhysicalDisk is invoked for
> >>>>>>>> managed
> >>>>>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
> >>>>>>>> iscsiadm to
> >>>>>>>> >> > establish the iSCSI connection to the volume on the SAN and a
> >>>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach logic
> that
> >>>>>>>> follows.
> >>>>>>>> >> >
> >>>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked with
> the
> >>>>>>>> IQN of the
> >>>>>>>> >> > volume if the storage pool in question is managed storage.
> >>>>>>>> Otherwise, the
> >>>>>>>> >> > normal vol.getPath() is used.
> >>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
> >>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used in the
> >>>>>>>> detach logic.
> >>>>>>>> >> >
> >>>>>>>> >> > Once the volume has been detached,
> >>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
> >>>>>>>> >> > is invoked if the storage pool is managed. deletePhysicalDisk
> >>>>>>>> removes the
> >>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
> >>>>>>>> >> >
> >>>>>>>> >> >
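
For readers less familiar with Open-iSCSI, here is a rough sketch of the iscsiadm calls
an adaptor like this issues under the hood; the portal address and target IQN below are
invented purely for illustration:

    # discover the targets the SAN portal exposes
    iscsiadm -m discovery -t sendtargets -p 192.168.233.50:3260

    # createPhysicalDisk: log in, which surfaces the LUN as a local block device
    iscsiadm -m node -T iqn.2013-09.com.example:vol-1 -p 192.168.233.50:3260 --login

    # deletePhysicalDisk: log out again once the volume has been detached
    iscsiadm -m node -T iqn.2013-09.com.example:vol-1 -p 192.168.233.50:3260 --logout

The block device that shows up after the login (typically under /dev/disk/by-path/) is
what gets wrapped in the KVMPhysicalDisk that the attach logic consumes.
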
> >>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
> >>>>>>>> shadowsor@gmail.com
> >>>>>>>> >> >wrote:
> >>>>>>>> >> >
> >>>>>>>> >> >> Its the log4j properties file in /etc/cloudstack/agent
> change
> >>>>>>>> all INFO
> >>>>>>>> >> to
> >>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can
> tail
> >>>>>>>> the log
> >>>>>>>> >> when
> >>>>>>>> >> >> you try to start the service, or maybe it will spit
> something
> >>>>>>>> out into
> >>>>>>>> >> one
> >>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
> >>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
> >>>>>>>> mike.tutkowski@solidfire.com
> >>>>>>>> >> >
> >>>>>>>> >> >> wrote:
> >>>>>>>> >> >>
> >>>>>>>> >> >> > This is how I've been trying to query for the status of
> the
> >>>>>>>> service (I
> >>>>>>>> >> >> > assume it could be started this way, as well, by changing
> >>>>>>>> "status" to
> >>>>>>>> >> >> > "start" or "restart"?):
> >>>>>>>> >> >> >
> >>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> >>>>>>>> /usr/sbin/service
> >>>>>>>> >> >> > cloudstack-agent status
> >>>>>>>> >> >> >
> >>>>>>>> >> >> > I get this back:
> >>>>>>>> >> >> >
> >>>>>>>> >> >> > Failed to execute: * could not access PID file for
> >>>>>>>> cloudstack-agent
> >>>>>>>> >> >> >
> >>>>>>>> >> >> > I've made a bunch of code changes recently, though, so I
> >>>>>>>> think I'm
> >>>>>>>> >> going
> >>>>>>>> >> >> to
> >>>>>>>> >> >> > rebuild and redeploy everything.
> >>>>>>>> >> >> >
> >>>>>>>> >> >> > The debug info sounds helpful. Where can I set
> enable.debug?
> >>>>>>>> >> >> >
> >>>>>>>> >> >> > Thanks, Marcus!
> >>>>>>>> >> >> >
> >>>>>>>> >> >> >
> >>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
> >>>>>>>> shadowsor@gmail.com
> >>>>>>>> >> >> > >wrote:
> >>>>>>>> >> >> >
> >>>>>>>> >> >> > > OK, will check it out in the next few days. As
> mentioned,
> >>>>>>>> you can
> >>>>>>>> >> set
> >>>>>>>> >> >> up
> >>>>>>>> >> >> > > your Ubuntu vm as the management server as well if all
> >>>>>>>> else fails.
> >>>>>>>> >>  If
> >>>>>>>> >> >> > you
> >>>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM host,
> then
> >>>>>>>> you need
> >>>>>>>> >> to
> >>>>>>>> >> >> > > enable.debug on the agent. It won't run without
> >>>>>>>> complaining loudly
> >>>>>>>> >> if
> >>>>>>>> >> >> it
> >>>>>>>> >> >> > > can't get to the mgmt server, and I didn't see that in
> >>>>>>>> your agent
> >>>>>>>> >> log,
> >>>>>>>> >> >> so
> >>>>>>>> >> >> > > perhaps it's not running. I assume you know how to
> >>>>>>>> stop/start the
> >>>>>>>> >> agent
> >>>>>>>> >> >> on
> >>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
> >>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
> >>>>>>>> >> >> mike.tutkowski@solidfire.com>
> >>>>>>>> >> >> > > wrote:
> >>>>>>>> >> >> > >
> >>>>>>>> >> >> > > > Hey Marcus,
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > > > I haven't yet been able to test my new code, but I
> >>>>>>>> thought you
> >>>>>>>> >> would
> >>>>>>>> >> >> > be a
> >>>>>>>> >> >> > > > good person to ask to review it:
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > >
> >>>>>>>> >> >> >
> >>>>>>>> >> >>
> >>>>>>>> >>
> >>>>>>>>
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > > > All it is supposed to do is attach and detach a data
> >>>>>>>> disk (that
> >>>>>>>> >> has
> >>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data
> >>>>>>>> disk
> >>>>>>>> >> happens to
> >>>>>>>> >> >> > be
> >>>>>>>> >> >> > > > from SolidFire-backed storage - where we have a 1:1
> >>>>>>>> mapping
> >>>>>>>> >> between a
> >>>>>>>> >> >> > > > CloudStack volume and a data disk.
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > > > There is no support for hypervisor snapshots or stuff
> >>>>>>>> like that
> >>>>>>>> >> >> > (likely a
> >>>>>>>> >> >> > > > future release)...just attaching and detaching a data
> >>>>>>>> disk in 4.3.
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > > > Thanks!
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
> >>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
> >>>>>>>> cloudstack-agent
> >>>>>>>> >> >> first.
> >>>>>>>> >> >> > > > Would
> >>>>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get install
> >>>>>>>> >> >> > cloudstack-agent.
> >>>>>>>> >> >> > > > >
> >>>>>>>> >> >> > > > >
> >>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
> >>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
> >>>>>>>> >> >> > > > >
> >>>>>>>> >> >> > > > >> I get the same error running the command manually:
> >>>>>>>> >> >> > > > >>
> >>>>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> >>>>>>>> >> /usr/sbin/service
> >>>>>>>> >> >> > > > >> cloudstack-agent status
> >>>>>>>> >> >> > > > >>  * could not access PID file for cloudstack-agent
> >>>>>>>> >> >> > > > >>
> >>>>>>>> >> >> > > > >>
> >>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
> >>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
> >>>>>>>> >> >> > > > >>
> >>>>>>>> >> >> > > > >>> agent.log looks OK to me:
> >>>>>>>> >> >> > > > >>>
> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
> >>>>>>>>  [cloud.agent.AgentShell]
> >>>>>>>> >> >> (main:null)
> >>>>>>>> >> >> > > > Agent
> >>>>>>>> >> >> > > > >>> started
> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
> >>>>>>>>  [cloud.agent.AgentShell]
> >>>>>>>> >> >> (main:null)
> >>>>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
> >>>>>>>>  [cloud.agent.AgentShell]
> >>>>>>>> >> >> (main:null)
> >>>>>>>> >> >> > > > >>> agent.properties found at
> >>>>>>>> >> /etc/cloudstack/agent/agent.properties
> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
> >>>>>>>>  [cloud.agent.AgentShell]
> >>>>>>>> >> >> (main:null)
> >>>>>>>> >> >> > > > >>> Defaulting to using properties file for storage
> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
> >>>>>>>>  [cloud.agent.AgentShell]
> >>>>>>>> >> >> (main:null)
> >>>>>>>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO
>  [cloud.utils.LogUtils]
> >>>>>>>> >> (main:null)
> >>>>>>>> >> >> > > log4j
> >>>>>>>> >> >> > > > >>> configuration found at
> >>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
> >>>>>>>> (main:null)
> >>>>>>>> >> id
> >>>>>>>> >> >> > is 3
> >>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
> >>>>>>>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
> >>>>>>>> (main:null)
> >>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
> >>>>>>>> >> >> scripts/network/domr/kvm
> >>>>>>>> >> >> > > > >>>
> >>>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was
> >>>>>>>> important. This
> >>>>>>>> >> seems
> >>>>>>>> >> >> to
> >>>>>>>> >> >> > > be
> >>>>>>>> >> >> > > > a
> >>>>>>>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
> >>>>>>>> >> >> > > > >>>
> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
> >>>>>>>> cloudstack-agent
> >>>>>>>> >> status
> >>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access
> PID
> >>>>>>>> file for
> >>>>>>>> >> >> > > > >>> cloudstack-agent
> >>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
> >>>>>>>> cloudstack-agent
> >>>>>>>> >> start
> >>>>>>>> >> >> > > > >>>
> >>>>>>>> >> >> > > > >>>
> >>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen
> <
> >>>>>>>> >> >> > > shadowsor@gmail.com
> >>>>>>>> >> >> > > > >wrote:
> >>>>>>>> >> >> > > > >>>
> >>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was
> the
> >>>>>>>> agent log
> >>>>>>>> >> for
> >>>>>>>> >> >> > > some
> >>>>>>>> >> >> > > > >>>> reason. Is the agent started? That might be the
> >>>>>>>> place to
> >>>>>>>> >> look.
> >>>>>>>> >> >> > There
> >>>>>>>> >> >> > > > is
> >>>>>>>> >> >> > > > >>>> an
> >>>>>>>> >> >> > > > >>>> agent log for the agent and one for the setup
> when
> >>>>>>>> it adds
> >>>>>>>> >> the
> >>>>>>>> >> >> > host,
> >>>>>>>> >> >> > > > >>>> both
> >>>>>>>> >> >> > > > >>>> in /var/log
> >>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
> >>>>>>>> >> >> > > > >>>> wrote:
> >>>>>>>> >> >> > > > >>>>
> >>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP address
> or
> >>>>>>>> the KVM
> >>>>>>>> >> host?
> >>>>>>>> >> >> > > > >>>> >
> >>>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
> >>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
> >>>>>>>> >> >> > > > >>>> >
> >>>>>>>> >> >> > > > >>>> > I see this for my host Global Settings
> parameter:
> >>>>>>>> >> >> > > > >>>> > hostThe ip address of management
> >>>>>>>> server192.168.233.1
> >>>>>>>> >> >> > > > >>>> >
> >>>>>>>> >> >> > > > >>>> >
> >>>>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
> >>>>>>>> >> >> host=192.168.233.1
> >>>>>>>> >> >> > > > value.
> >>>>>>>> >> >> > > > >>>> >
> >>>>>>>> >> >> > > > >>>> >
> >>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus
> Sorensen
> >>>>>>>> <
> >>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
> >>>>>>>> >> >> > > > >>>> > >wrote:
> >>>>>>>> >> >> > > > >>>> >
> >>>>>>>> >> >> > > > >>>> > > The log says your mgmt server is
> >>>>>>>> 192.168.233.10? But you
> >>>>>>>> >> >> tried
> >>>>>>>> >> >> > > to
> >>>>>>>> >> >> > > > >>>> telnet
> >>>>>>>> >> >> > > > >>>> > to
> >>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change
> >>>>>>>> that in
> >>>>>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but
> you
> >>>>>>>> may want
> >>>>>>>> >> to
> >>>>>>>> >> >> > edit
> >>>>>>>> >> >> > > > the
> >>>>>>>> >> >> > > > >>>> > config
> >>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
> >>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
> >>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
> >>>>>>>> >> >> > > > >>>> > >
> >>>>>>>> >> >> > > > >>>> > > wrote:
> >>>>>>>> >> >> > > > >>>> > >
> >>>>>>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file
> >>>>>>>> looks
> >>>>>>>> >> like, if
> >>>>>>>> >> >> > > that
> >>>>>>>> >> >> > > > >>>> is of
> >>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the
> >>>>>>>> NAT network
> >>>>>>>> >> >> > VMware
> >>>>>>>> >> >> > > > >>>> Fusion
> >>>>>>>> >> >> > > > >>>> > set
> >>>>>>>> >> >> > > > >>>> > > > up):
> >>>>>>>> >> >> > > > >>>> > > >
> >>>>>>>> >> >> > > > >>>> > > > auto lo
> >>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
> >>>>>>>> >> >> > > > >>>> > > >
> >>>>>>>> >> >> > > > >>>> > > > auto eth0
> >>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
> >>>>>>>> >> >> > > > >>>> > > >
> >>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
> >>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
> >>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
> >>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
> >>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
> >>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
> >>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
> >>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
> >>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
> >>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
> >>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
> >>>>>>>> >> >> > > > >>>> > > >     post-up route add default gw
> >>>>>>>> 192.168.233.2 metric 1
> >>>>>>>> >> >> > > > >>>> > > >     pre-down route del default gw
> >>>>>>>> 192.168.233.2
> >>>>>>>> >> >> > > > >>>> > > >
> >>>>>>>> >> >> > > > >>>> > > >
> >>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike
> >>>>>>>> Tutkowski <
> >>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
> >>>>>>>> >> >> > > > >>>> > > >
> >>>>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is from
> the
> >>>>>>>> MS log
> >>>>>>>> >> >> (below).
> >>>>>>>> >> >> > > > >>>> Discovery
> >>>>>>>> >> >> > > > >>>> > > > timed
> >>>>>>>> >> >> > > > >>>> > > > > out.
> >>>>>>>> >> >> > > > >>>> > > > >
> >>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My
> network
> >>>>>>>> settings
> >>>>>>>> >> >> > > shouldn't
> >>>>>>>> >> >> > > > >>>> have
> >>>>>>>> >> >> > > > >>>> > > > changed
> >>>>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
> >>>>>>>> >> >> > > > >>>> > > > >
> >>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the
> MS
> >>>>>>>> host and
> >>>>>>>> >> vice
> >>>>>>>> >> >> > > > versa.
> >>>>>>>> >> >> > > > >>>> > > > >
> >>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM
> on
> >>>>>>>> the KVM
> >>>>>>>> >> host
> >>>>>>>> >> >> > and
> >>>>>>>> >> >> > > > >>>> ping from
> >>>>>>>> >> >> > > > >>>> > > it
> >>>>>>>> >> >> > > > >>>> > > > > to the MS host.
> >>>>>>>> >> >> > > > >>>> > > > >
> >>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host
> (also
> >>>>>>>> running
> >>>>>>>> >> the
> >>>>>>>> >> >> CS
> >>>>>>>> >> >> > > MS)
> >>>>>>>> >> >> > > > >>>> to the
> >>>>>>>> >> >> > > > >>>> > VM
> >>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
> >>>>>>>> >> >> > > > >>>> > > > >
> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> >>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> :ctx-6b28dc48)
> >>>>>>>> Timeout,
> >>>>>>>> >> to
> >>>>>>>> >> >> > wait
> >>>>>>>> >> >> > > > for
> >>>>>>>> >> >> > > > >>>> the
> >>>>>>>> >> >> > > > >>>> > > host
> >>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is
> failed
> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
> >>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> :ctx-6b28dc48)
> >>>>>>>> Unable to
> >>>>>>>> >> >> find
> >>>>>>>> >> >> > > the
> >>>>>>>> >> >> > > > >>>> server
> >>>>>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
> >>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> :ctx-6b28dc48)
> >>>>>>>> Could not
> >>>>>>>> >> >> find
> >>>>>>>> >> >> > > > >>>> exception:
> >>>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in
> >>>>>>>> error code
> >>>>>>>> >> >> list
> >>>>>>>> >> >> > > for
> >>>>>>>> >> >> > > > >>>> > > exceptions
> >>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
> >>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
> >>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3
> :ctx-6b28dc48)
> >>>>>>>> Exception:
> >>>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException:
> >>>>>>>> Unable to add
> >>>>>>>> >> >> the
> >>>>>>>> >> >> > > host
> >>>>>>>> >> >> > > > >>>> > > > > at
> >>>>>>>> >> >> > > > >>>> > > > >
> >>>>>>>> >> >> > > > >>>> > > >
> >>>>>>>> >> >> > > > >>>> > >
> >>>>>>>> >> >> > > > >>>> >
> >>>>>>>> >> >> > > > >>>>
> >>>>>>>> >> >> > > >
> >>>>>>>> >> >> > >
> >>>>>>>> >> >> >
> >>>>>>>> >> >>
> >>>>>>>> >>
> >>>>>>>>
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> >>>>>>>> >> >> > > > >>>> > > > >
> >>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my
> >>>>>>>> KVM host to
> >>>>>>>> >> >> the
> >>>>>>>> >> >> > MS
> >>>>>>>> >> >> > > > >>>> host's
> >>>>>>>> >> >> > > > >>>> > > 8250
> >>>>>>>> >> >> > > > >>>> > > > > port:
> >>>>>>>> >> >> > > > >>>> > > > >
> >>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet
> 192.168.233.1
> >>>>>>>> 8250
> >>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
> >>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
> >>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Nope, not running. That's just your grep process. It would look like:

root     24429 24428  1 14:25 ?        00:00:08 jsvc.exec -cp
/usr/share/java/commons-daemon.jar:/usr/share/java/jna.jar:/usr/share/cloudstack-agent/lib/activation-1.1.jar:/usr/share/cloudstack-agent/lib/antisamy-1.4.3.jar:/usr/share/cloudstack-agent/lib/aopalliance-1.0.jar:/usr/share/cloudstack-agent/lib/apache-log4j-extras-1.1.jar:/usr/share/cloudstack-agent/lib/aspectjrt-1.7.

Your agent log should tell you why it failed to start if you set it in
debug and try to start... or maybe cloudstack-agent.out if it doesn't
get far enough (say it's missing a class or something and can't
start).
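
For completeness, a minimal sketch of the check-and-debug loop described above; the
file names are the ones mentioned elsewhere in this thread, so adjust to your install:

    # is the agent actually up?
    sudo service cloudstack-agent status

    # turn agent logging up from INFO to DEBUG, restart, and watch both logs
    sudo sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
    sudo service cloudstack-agent restart
    tail -f /var/log/cloudstack/agent/agent.log /var/log/cloudstack/agent/cloudstack-agent.out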

On Mon, Sep 23, 2013 at 2:33 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> Looks like it's running, though:
>
> mtutkowski@ubuntu:~$ ps -ef | grep jsvc
> 1000      7097  7013  0 14:32 pts/1    00:00:00 grep --color=auto jsvc
>
>
>
> On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Hey Marcus,
>>
>> Maybe you could give me a better idea of what the "flow" is when adding a
>> KVM host.
>>
>> It looks like we SSH into the potential KVM host and execute a startup
>> script (giving it necessary info about the cloud and the management server
>> it should talk to).
>>
>> After this, is the Java VM started?
>>
>> After a reboot, I assume the JVM is started automatically?
>>
>> How do you debug your KVM-side Java code?
>>
>> Been looking through the logs and nothing obvious sticks out. I will have
>> another look.
>>
>> Thanks
>>
>>
>> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> Hey Marcus,
>>>
>>> I've been investigating my issue with not being able to add a KVM host to
>>> CS.
>>>
>>> For what it's worth, this comes back successful:
>>>
>>> SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent " +
>>> parameters, 3);
>>>
>>> This is what the command looks like:
>>>
>>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
>>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0 --prvNic=cloudbr0
>>> --guestNic=cloudbr0
>>>
>>> The problem is this method in LibvirtServerDiscoverer never finds a
>>> matching host in the DB:
>>>
>>> waitForHostConnect(long dcId, long podId, long clusterId, String guid)
>>>
>>> I assume once the KVM host is up and running that it's supposed to call
>>> into the CS MS so the DB can be updated as such?
>>>
>>> If so, the problem must be on the KVM side.
>>>
>>> I did run this again (from the KVM host) to see if the connection was in
>>> place:
>>>
>>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>>>
>>> Trying 192.168.233.1...
>>>
>>> Connected to 192.168.233.1.
>>>
>>> Escape character is '^]'.
>>> So that looks good.
>>>
>>> I turned on more info in the debug log, but nothing obvious jumps out as
>>> of yet.
>>>
>>> If you have any thoughts on this, please shoot them my way. :)
>>>
>>> Thanks!
>>>
>>>
>>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
>>> mike.tutkowski@solidfire.com> wrote:
>>>
>>>> First step is for me to get this working for KVM, though. :)
>>>>
>>>> Once I do that, I can perhaps make modifications to the storage
>>>> framework and hypervisor plug-ins to refactor the logic and such.
>>>>
>>>>
>>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
>>>> mike.tutkowski@solidfire.com> wrote:
>>>>
>>>>> Same would work for KVM.
>>>>>
>>>>> If CreateCommand and DestroyCommand were called at the appropriate
>>>>> times by the storage framework, I could move my connect and disconnect
>>>>> logic out of the attach/detach logic.
>>>>>
>>>>>
>>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>
>>>>>> Conversely, if the storage framework called the DestroyCommand for
>>>>>> managed storage after the DetachCommand, then I could have had my remove
>>>>>> SR/datastore logic placed in the DestroyCommand handling rather than in the
>>>>>> DetachCommand handling.
>>>>>>
>>>>>>
>>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>
>>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
>>>>>>>
>>>>>>> The initial approach that was discussed during 4.2 was for me to
>>>>>>> modify the attach/detach logic only in the XenServer and VMware hypervisor
>>>>>>> plug-ins.
>>>>>>>
>>>>>>> Now that I think about it more, though, I kind of would have liked to
>>>>>>> have the storage framework send a CreateCommand to the hypervisor before
>>>>>>> sending the AttachCommand if the storage in question was managed.
>>>>>>>
>>>>>>> Then I could have created my SR/datastore in the CreateCommand and
>>>>>>> the AttachCommand would have had the SR/datastore that it was always
>>>>>>> expecting (and I wouldn't have had to create the SR/datastore in the
>>>>>>> AttachCommand).
>>>>>>>
>>>>>>>
>>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>
>>>>>>>> Yeah, I think it probably is as well, but I figured you'd be in a
>>>>>>>> better position to tell.
>>>>>>>>
>>>>>>>> I see that copyAsync is unsupported in your current 4.2 driver, does
>>>>>>>> that mean that there's no template support? Or is it some other call
>>>>>>>> that does templating now? I'm still getting up to speed on all of the
>>>>>>>> 4.2 changes. I was just looking at CreateCommand in
>>>>>>>> LibvirtComputingResource, since that's the only place
>>>>>>>> createPhysicalDisk is called, and it occurred to me that
>>>>>>>> CreateCommand
>>>>>>>> might be skipped altogether when utilizing storage plugins.
>>>>>>>>
>>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>>>>>>>> <mi...@solidfire.com> wrote:
>>>>>>>> > That's an interesting comment, Marcus.
>>>>>>>> >
>>>>>>>> > It was my intent that it should work with any CloudStack "managed"
>>>>>>>> storage
>>>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote the
>>>>>>>> code so
>>>>>>>> > CHAP didn't have to be used.
>>>>>>>> >
>>>>>>>> > As I'm doing my testing, I can try to think about whether it is
>>>>>>>> generic
>>>>>>>> > enough to keep those names or not.
>>>>>>>> >
>>>>>>>> > My expectation is that it is generic enough.
>>>>>>>> >
>>>>>>>> >
>>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
>>>>>>>> shadowsor@gmail.com>wrote:
>>>>>>>> >
>>>>>>>> >> I added a comment to your diff. In general I think it looks good,
>>>>>>>> >> though I obviously can't vouch for whether or not it will work.
>>>>>>>> One
>>>>>>>> >> thing I do have reservations about is the adaptor/pool naming. If
>>>>>>>> you
>>>>>>>> >> think the code is generic enough that it will work for anyone who
>>>>>>>> does
>>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's
>>>>>>>> anything
>>>>>>>> >> about it that's specific to YOUR iscsi target or how it likes to
>>>>>>>> be
>>>>>>>> >> treated then I'd say that they should be named something less
>>>>>>>> generic
>>>>>>>> >> than iScsiAdmStorage.
>>>>>>>> >>
>>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>>> >> > Great - thanks!
>>>>>>>> >> >
>>>>>>>> >> > Just to give you an overview of what my code does (for when you
>>>>>>>> get a
>>>>>>>> >> > chance to review it):
>>>>>>>> >> >
>>>>>>>> >> > SolidFireHostListener is registered in
>>>>>>>> SolidfirePrimaryDataStoreProvider.
>>>>>>>> >> > Its hostConnect method is invoked when a host connects with the
>>>>>>>> CS MS. If
>>>>>>>> >> > the host is running KVM, the listener sends a
>>>>>>>> ModifyStoragePoolCommand to
>>>>>>>> >> > the host. This logic was based off of DefaultHostListener.
>>>>>>>> >> >
>>>>>>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It
>>>>>>>> invokes
>>>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
>>>>>>>> KVMStoragePoolManager
>>>>>>>> >> > asks for an adaptor and finds my new one:
>>>>>>>> iScsiAdmStorageAdaptor (which
>>>>>>>> >> was
>>>>>>>> >> > registered in the constructor for KVMStoragePoolManager under
>>>>>>>> the key of
>>>>>>>> >> > StoragePoolType.Iscsi.toString()).
>>>>>>>> >> >
>>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance
>>>>>>>> of
>>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the pointer
>>>>>>>> to the
>>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the UUID of
>>>>>>>> the storage
>>>>>>>> >> > pool.
>>>>>>>> >> >
>>>>>>>> >> > When a volume is attached, createPhysicalDisk is invoked for
>>>>>>>> managed
>>>>>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
>>>>>>>> iscsiadm to
>>>>>>>> >> > establish the iSCSI connection to the volume on the SAN and a
>>>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach logic that
>>>>>>>> follows.
>>>>>>>> >> >
>>>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked with the
>>>>>>>> IQN of the
>>>>>>>> >> > volume if the storage pool in question is managed storage.
>>>>>>>> Otherwise, the
>>>>>>>> >> > normal vol.getPath() is used.
>>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used in the
>>>>>>>> detach logic.
>>>>>>>> >> >
>>>>>>>> >> > Once the volume has been detached,
>>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
>>>>>>>> >> > is invoked if the storage pool is managed. deletePhysicalDisk
>>>>>>>> removes the
>>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
>>>>>>>> >> >
>>>>>>>> >> >
>>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>>>>>>>> shadowsor@gmail.com
>>>>>>>> >> >wrote:
>>>>>>>> >> >
>>>>>>>> >> >> Its the log4j properties file in /etc/cloudstack/agent change
>>>>>>>> all INFO
>>>>>>>> >> to
>>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can tail
>>>>>>>> the log
>>>>>>>> >> when
>>>>>>>> >> >> you try to start the service, or maybe it will spit something
>>>>>>>> out into
>>>>>>>> >> one
>>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
>>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>>>>>>>> mike.tutkowski@solidfire.com
>>>>>>>> >> >
>>>>>>>> >> >> wrote:
>>>>>>>> >> >>
>>>>>>>> >> >> > This is how I've been trying to query for the status of the
>>>>>>>> service (I
>>>>>>>> >> >> > assume it could be started this way, as well, by changing
>>>>>>>> "status" to
>>>>>>>> >> >> > "start" or "restart"?):
>>>>>>>> >> >> >
>>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>>>>> /usr/sbin/service
>>>>>>>> >> >> > cloudstack-agent status
>>>>>>>> >> >> >
>>>>>>>> >> >> > I get this back:
>>>>>>>> >> >> >
>>>>>>>> >> >> > Failed to execute: * could not access PID file for
>>>>>>>> cloudstack-agent
>>>>>>>> >> >> >
>>>>>>>> >> >> > I've made a bunch of code changes recently, though, so I
>>>>>>>> think I'm
>>>>>>>> >> going
>>>>>>>> >> >> to
>>>>>>>> >> >> > rebuild and redeploy everything.
>>>>>>>> >> >> >
>>>>>>>> >> >> > The debug info sounds helpful. Where can I set enable.debug?
>>>>>>>> >> >> >
>>>>>>>> >> >> > Thanks, Marcus!
>>>>>>>> >> >> >
>>>>>>>> >> >> >
>>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>>>>>>>> shadowsor@gmail.com
>>>>>>>> >> >> > >wrote:
>>>>>>>> >> >> >
>>>>>>>> >> >> > > OK, will check it out in the next few days. As mentioned,
>>>>>>>> you can
>>>>>>>> >> set
>>>>>>>> >> >> up
>>>>>>>> >> >> > > your Ubuntu vm as the management server as well if all
>>>>>>>> else fails.
>>>>>>>> >>  If
>>>>>>>> >> >> > you
>>>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM host, then
>>>>>>>> you need
>>>>>>>> >> to
>>>>>>>> >> >> > > enable.debug on the agent. It won't run without
>>>>>>>> complaining loudly
>>>>>>>> >> if
>>>>>>>> >> >> it
>>>>>>>> >> >> > > can't get to the mgmt server, and I didn't see that in
>>>>>>>> your agent
>>>>>>>> >> log,
>>>>>>>> >> >> so
>>>>>>>> >> >> > > perhaps it's not running. I assume you know how to
>>>>>>>> stop/start the
>>>>>>>> >> agent
>>>>>>>> >> >> on
>>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
>>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>>>>>>>> >> >> mike.tutkowski@solidfire.com>
>>>>>>>> >> >> > > wrote:
>>>>>>>> >> >> > >
>>>>>>>> >> >> > > > Hey Marcus,
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > > I haven't yet been able to test my new code, but I
>>>>>>>> thought you
>>>>>>>> >> would
>>>>>>>> >> >> > be a
>>>>>>>> >> >> > > > good person to ask to review it:
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > >
>>>>>>>> >> >> >
>>>>>>>> >> >>
>>>>>>>> >>
>>>>>>>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > > All it is supposed to do is attach and detach a data
>>>>>>>> disk (that
>>>>>>>> >> has
>>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data
>>>>>>>> disk
>>>>>>>> >> happens to
>>>>>>>> >> >> > be
>>>>>>>> >> >> > > > from SolidFire-backed storage - where we have a 1:1
>>>>>>>> mapping
>>>>>>>> >> between a
>>>>>>>> >> >> > > > CloudStack volume and a data disk.
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > > There is no support for hypervisor snapshots or stuff
>>>>>>>> like that
>>>>>>>> >> >> > (likely a
>>>>>>>> >> >> > > > future release)...just attaching and detaching a data
>>>>>>>> disk in 4.3.
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > > Thanks!
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
>>>>>>>> cloudstack-agent
>>>>>>>> >> >> first.
>>>>>>>> >> >> > > > Would
>>>>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get install
>>>>>>>> >> >> > cloudstack-agent.
>>>>>>>> >> >> > > > >
>>>>>>>> >> >> > > > >
>>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>>>>>>>> >> >> > > > >
>>>>>>>> >> >> > > > >> I get the same error running the command manually:
>>>>>>>> >> >> > > > >>
>>>>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>>>>> >> /usr/sbin/service
>>>>>>>> >> >> > > > >> cloudstack-agent status
>>>>>>>> >> >> > > > >>  * could not access PID file for cloudstack-agent
>>>>>>>> >> >> > > > >>
>>>>>>>> >> >> > > > >>
>>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>>>>>>>> >> >> > > > >>
>>>>>>>> >> >> > > > >>> agent.log looks OK to me:
>>>>>>>> >> >> > > > >>>
>>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
>>>>>>>>  [cloud.agent.AgentShell]
>>>>>>>> >> >> (main:null)
>>>>>>>> >> >> > > > Agent
>>>>>>>> >> >> > > > >>> started
>>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
>>>>>>>>  [cloud.agent.AgentShell]
>>>>>>>> >> >> (main:null)
>>>>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
>>>>>>>>  [cloud.agent.AgentShell]
>>>>>>>> >> >> (main:null)
>>>>>>>> >> >> > > > >>> agent.properties found at
>>>>>>>> >> /etc/cloudstack/agent/agent.properties
>>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
>>>>>>>>  [cloud.agent.AgentShell]
>>>>>>>> >> >> (main:null)
>>>>>>>> >> >> > > > >>> Defaulting to using properties file for storage
>>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
>>>>>>>>  [cloud.agent.AgentShell]
>>>>>>>> >> >> (main:null)
>>>>>>>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
>>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
>>>>>>>> >> (main:null)
>>>>>>>> >> >> > > log4j
>>>>>>>> >> >> > > > >>> configuration found at
>>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
>>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>>>>>>>> (main:null)
>>>>>>>> >> id
>>>>>>>> >> >> > is 3
>>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>>>>>>>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
>>>>>>>> (main:null)
>>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>>>>>>>> >> >> scripts/network/domr/kvm
>>>>>>>> >> >> > > > >>>
>>>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was
>>>>>>>> important. This
>>>>>>>> >> seems
>>>>>>>> >> >> to
>>>>>>>> >> >> > > be
>>>>>>>> >> >> > > > a
>>>>>>>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
>>>>>>>> >> >> > > > >>>
>>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>>>>>> cloudstack-agent
>>>>>>>> >> status
>>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID
>>>>>>>> file for
>>>>>>>> >> >> > > > >>> cloudstack-agent
>>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>>>>>> cloudstack-agent
>>>>>>>> >> start
>>>>>>>> >> >> > > > >>>
>>>>>>>> >> >> > > > >>>
>>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>>>>>>>> >> >> > > shadowsor@gmail.com
>>>>>>>> >> >> > > > >wrote:
>>>>>>>> >> >> > > > >>>
>>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was the
>>>>>>>> agent log
>>>>>>>> >> for
>>>>>>>> >> >> > > some
>>>>>>>> >> >> > > > >>>> reason. Is the agent started? That might be the
>>>>>>>> place to
>>>>>>>> >> look.
>>>>>>>> >> >> > There
>>>>>>>> >> >> > > > is
>>>>>>>> >> >> > > > >>>> an
>>>>>>>> >> >> > > > >>>> agent log for the agent and one for the setup when
>>>>>>>> it adds
>>>>>>>> >> the
>>>>>>>> >> >> > host,
>>>>>>>> >> >> > > > >>>> both
>>>>>>>> >> >> > > > >>>> in /var/log
>>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>>>>>>>> >> >> > > > >>>> wrote:
>>>>>>>> >> >> > > > >>>>
>>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP address or
>>>>>>>> the KVM
>>>>>>>> >> host?
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>> > I see this for my host Global Settings parameter:
>>>>>>>> >> >> > > > >>>> > hostThe ip address of management
>>>>>>>> server192.168.233.1
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>>>>>>>> >> >> host=192.168.233.1
>>>>>>>> >> >> > > > value.
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen
>>>>>>>> <
>>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
>>>>>>>> >> >> > > > >>>> > >wrote:
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>> > > The log says your mgmt server is
>>>>>>>> 192.168.233.10? But you
>>>>>>>> >> >> tried
>>>>>>>> >> >> > > to
>>>>>>>> >> >> > > > >>>> telnet
>>>>>>>> >> >> > > > >>>> > to
>>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change
>>>>>>>> that in
>>>>>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you
>>>>>>>> may want
>>>>>>>> >> to
>>>>>>>> >> >> > edit
>>>>>>>> >> >> > > > the
>>>>>>>> >> >> > > > >>>> > config
>>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>>>>>>>> >> >> > > > >>>> > >
>>>>>>>> >> >> > > > >>>> > > wrote:
>>>>>>>> >> >> > > > >>>> > >
>>>>>>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file
>>>>>>>> looks
>>>>>>>> >> like, if
>>>>>>>> >> >> > > that
>>>>>>>> >> >> > > > >>>> is of
>>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the
>>>>>>>> NAT network
>>>>>>>> >> >> > VMware
>>>>>>>> >> >> > > > >>>> Fusion
>>>>>>>> >> >> > > > >>>> > set
>>>>>>>> >> >> > > > >>>> > > > up):
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > > > auto lo
>>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > > > auto eth0
>>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
>>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
>>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
>>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
>>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
>>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
>>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
>>>>>>>> >> >> > > > >>>> > > >     post-up route add default gw
>>>>>>>> 192.168.233.2 metric 1
>>>>>>>> >> >> > > > >>>> > > >     pre-down route del default gw
>>>>>>>> 192.168.233.2
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike
>>>>>>>> Tutkowski <
>>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is from the
>>>>>>>> MS log
>>>>>>>> >> >> (below).
>>>>>>>> >> >> > > > >>>> Discovery
>>>>>>>> >> >> > > > >>>> > > > timed
>>>>>>>> >> >> > > > >>>> > > > > out.
>>>>>>>> >> >> > > > >>>> > > > >
>>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My network
>>>>>>>> settings
>>>>>>>> >> >> > > shouldn't
>>>>>>>> >> >> > > > >>>> have
>>>>>>>> >> >> > > > >>>> > > > changed
>>>>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
>>>>>>>> >> >> > > > >>>> > > > >
>>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the MS
>>>>>>>> host and
>>>>>>>> >> vice
>>>>>>>> >> >> > > > versa.
>>>>>>>> >> >> > > > >>>> > > > >
>>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM on
>>>>>>>> the KVM
>>>>>>>> >> host
>>>>>>>> >> >> > and
>>>>>>>> >> >> > > > >>>> ping from
>>>>>>>> >> >> > > > >>>> > > it
>>>>>>>> >> >> > > > >>>> > > > > to the MS host.
>>>>>>>> >> >> > > > >>>> > > > >
>>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also
>>>>>>>> running
>>>>>>>> >> the
>>>>>>>> >> >> CS
>>>>>>>> >> >> > > MS)
>>>>>>>> >> >> > > > >>>> to the
>>>>>>>> >> >> > > > >>>> > VM
>>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>>>>>>>> >> >> > > > >>>> > > > >
>>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>>>> Timeout,
>>>>>>>> >> to
>>>>>>>> >> >> > wait
>>>>>>>> >> >> > > > for
>>>>>>>> >> >> > > > >>>> the
>>>>>>>> >> >> > > > >>>> > > host
>>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
>>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>>>> Unable to
>>>>>>>> >> >> find
>>>>>>>> >> >> > > the
>>>>>>>> >> >> > > > >>>> server
>>>>>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
>>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>>>> Could not
>>>>>>>> >> >> find
>>>>>>>> >> >> > > > >>>> exception:
>>>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in
>>>>>>>> error code
>>>>>>>> >> >> list
>>>>>>>> >> >> > > for
>>>>>>>> >> >> > > > >>>> > > exceptions
>>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>>>> Exception:
>>>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException:
>>>>>>>> Unable to add
>>>>>>>> >> >> the
>>>>>>>> >> >> > > host
>>>>>>>> >> >> > > > >>>> > > > > at
>>>>>>>> >> >> > > > >>>> > > > >
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > >
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>>
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > >
>>>>>>>> >> >> >
>>>>>>>> >> >>
>>>>>>>> >>
>>>>>>>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>>>>>>>> >> >> > > > >>>> > > > >
>>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my
>>>>>>>> KVM host to
>>>>>>>> >> >> the
>>>>>>>> >> >> > MS
>>>>>>>> >> >> > > > >>>> host's
>>>>>>>> >> >> > > > >>>> > > 8250
>>>>>>>> >> >> > > > >>>> > > > > port:
>>>>>>>> >> >> > > > >>>> > > > >
>>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1
>>>>>>>> 8250
>>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
>>>>>>>> >> >> > > > >>>> > > > >
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > > > --
>>>>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
>>>>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>>>>>>>> >> >> > > > >>>> > > > o: 303.746.7302
>>>>>>>> >> >> > > > >>>> > > > Advancing the way the world uses the
>>>>>>>> >> >> > > > >>>> > > > cloud<
>>>>>>>> >> http://solidfire.com/solution/overview/?video=play>
>>>>>>>> >> >> > > > >>>> > > > *™*
>>>>>>>> >> >> > > > >>>> > > >
>>>>>>>> >> >> > > > >>>> > >
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>> > --
>>>>>>>> >> >> > > > >>>> > *Mike Tutkowski*
>>>>>>>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>>>>>>>> >> >> > > > >>>> > o: 303.746.7302
>>>>>>>> >> >> > > > >>>> > Advancing the way the world uses the
>>>>>>>> >> >> > > > >>>> > cloud<
>>>>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>>>>> >> >> > > > >>>> > *™*
>>>>>>>> >> >> > > > >>>> >
>>>>>>>> >> >> > > > >>>>
>>>>>>>> >> >> > > > >>>
>>>>>>>> >> >> > > > >>>
>>>>>>>> >> >> > > > >>>
>>>>>>>> >> >> > > > >>> --
>>>>>>>> >> >> > > > >>> *Mike Tutkowski*
>>>>>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
>>>>>>>> >> >> > > > >>> o: 303.746.7302
>>>>>>>> >> >> > > > >>> Advancing the way the world uses the cloud<
>>>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>>>>> >> >> > > > >>> *™*
>>>>>>>> >> >> > > > >>>
>>>>>>>> >> >> > > > >>
>>>>>>>> >> >> > > > >>
>>>>>>>> >> >> > > > >>
>>>>>>>> >> >> > > > >> --
>>>>>>>> >> >> > > > >> *Mike Tutkowski*
>>>>>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
>>>>>>>> >> >> > > > >> o: 303.746.7302
>>>>>>>> >> >> > > > >> Advancing the way the world uses the cloud<
>>>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>>>>> >> >> > > > >> *™*
>>>>>>>> >> >> > > > >>
>>>>>>>> >> >> > > > >
>>>>>>>> >> >> > > > >
>>>>>>>> >> >> > > > >
>>>>>>>> >> >> > > > > --
>>>>>>>> >> >> > > > > *Mike Tutkowski*
>>>>>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
>>>>>>>> >> >> > > > > o: 303.746.7302
>>>>>>>> >> >> > > > > Advancing the way the world uses the cloud<
>>>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>>>>> >> >> > > > > *™*
>>>>>>>> >> >> > > > >
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > > > --
>>>>>>>> >> >> > > > *Mike Tutkowski*
>>>>>>>> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> >> >> > > > e: mike.tutkowski@solidfire.com
>>>>>>>> >> >> > > > o: 303.746.7302
>>>>>>>> >> >> > > > Advancing the way the world uses the
>>>>>>>> >> >> > > > cloud<http://solidfire.com/solution/overview/?video=play
>>>>>>>> >
>>>>>>>> >> >> > > > *™*
>>>>>>>> >> >> > > >
>>>>>>>> >> >> > >
>>>>>>>> >> >> >
>>>>>>>> >> >> >
>>>>>>>> >> >> >
>>>>>>>> >> >> > --
>>>>>>>> >> >> > *Mike Tutkowski*
>>>>>>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> >> >> > e: mike.tutkowski@solidfire.com
>>>>>>>> >> >> > o: 303.746.7302
>>>>>>>> >> >> > Advancing the way the world uses the
>>>>>>>> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>> >> >> > *™*
>>>>>>>> >> >> >
>>>>>>>> >> >>
>>>>>>>> >> >
>>>>>>>> >> >
>>>>>>>> >> >
>>>>>>>> >> > --
>>>>>>>> >> > *Mike Tutkowski*
>>>>>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> >> > e: mike.tutkowski@solidfire.com
>>>>>>>> >> > o: 303.746.7302
>>>>>>>> >> > Advancing the way the world uses the
>>>>>>>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>> >> > *™*
>>>>>>>> >>
>>>>>>>> >
>>>>>>>> >
>>>>>>>> >
>>>>>>>> > --
>>>>>>>> > *Mike Tutkowski*
>>>>>>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>> > e: mike.tutkowski@solidfire.com
>>>>>>>> > o: 303.746.7302
>>>>>>>> > Advancing the way the world uses the
>>>>>>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>> > *™*
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *Mike Tutkowski*
>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>> o: 303.746.7302
>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>> *™*
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Mike Tutkowski*
>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> e: mike.tutkowski@solidfire.com
>>>>>> o: 303.746.7302
>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> *™*
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Mike Tutkowski*
>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> e: mike.tutkowski@solidfire.com
>>>>> o: 303.746.7302
>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> *™*
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Mike Tutkowski*
>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>> e: mike.tutkowski@solidfire.com
>>>> o: 303.746.7302
>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>> *™*
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Looks like it's running, though:

mtutkowski@ubuntu:~$ ps -ef | grep jsvc
1000      7097  7013  0 14:32 pts/1    00:00:00 grep --color=auto jsvc



On Mon, Sep 23, 2013 at 2:31 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Hey Marcus,
>
> Maybe you could give me a better idea of what the "flow" is when adding a
> KVM host.
>
> It looks like we SSH into the potential KVM host and execute a startup
> script (giving it necessary info about the cloud and the management server
> it should talk to).
>
> After this, is the Java VM started?
>
> After a reboot, I assume the JVM is started automatically?
>
> How do you debug your KVM-side Java code?
>
> Been looking through the logs and nothing obvious sticks out. I will have
> another look.
>
> Thanks
>
>
> On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Hey Marcus,
>>
>> I've been investigating my issue with not being able to add a KVM host to
>> CS.
>>
>> For what it's worth, this comes back successful:
>>
>> SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent " +
>> parameters, 3);
>>
>> This is what the command looks like:
>>
>> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
>> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0 --prvNic=cloudbr0
>> --guestNic=cloudbr0
>>
>> The problem is this method in LibvirtServerDiscoverer never finds a
>> matching host in the DB:
>>
>> waitForHostConnect(long dcId, long podId, long clusterId, String guid)
>>
>> I assume once the KVM host is up and running that it's supposed to call
>> into the CS MS so the DB can be updated as such?
>>
>> If so, the problem must be on the KVM side.
>>
>> I did run this again (from the KVM host) to see if the connection was in
>> place:
>>
>> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>>
>> Trying 192.168.233.1...
>>
>> Connected to 192.168.233.1.
>>
>> Escape character is '^]'.
>> So that looks good.
>>
>> I turned on more info in the debug log, but nothing obvious jumps out as
>> of yet.
>>
>> If you have any thoughts on this, please shoot them my way. :)
>>
>> Thanks!
>>
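
For readers following the thread: waitForHostConnect on the management server side is essentially a polling loop that watches the host table for a host whose GUID matches the one passed to cloudstack-setup-agent, and gives up after a timeout (the "Timeout, to wait for the host connecting to mgt svr" line quoted elsewhere in this thread). Below is a minimal, self-contained sketch of that pattern; the lookup predicate, poll interval, and timeout are placeholders for illustration, not the actual CloudStack implementation.

    import java.util.function.Predicate;

    // Illustrative sketch of a discovery wait loop; 'hostIsRegistered' stands in
    // for however the real discoverer queries the host table by GUID.
    class DiscoveryWaitSketch {
        static boolean waitForHostConnect(Predicate<String> hostIsRegistered,
                                          String guid, long timeoutMs) throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (System.currentTimeMillis() < deadline) {
                if (hostIsRegistered.test(guid)) {
                    return true;      // the agent connected back and the host row was created/updated
                }
                Thread.sleep(5000);   // poll every few seconds (interval assumed)
            }
            return false;             // discovery times out and AddHostCmd fails
        }
    }

If the loop never sees the host, the agent either never started or never reached the management server, which is why the agent-side logs are the next place to look.
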
>>
>> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> First step is for me to get this working for KVM, though. :)
>>>
>>> Once I do that, I can perhaps make modifications to the storage
>>> framework and hypervisor plug-ins to refactor the logic and such.
>>>
>>>
>>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
>>> mike.tutkowski@solidfire.com> wrote:
>>>
>>>> Same would work for KVM.
>>>>
>>>> If CreateCommand and DestroyCommand were called at the appropriate
>>>> times by the storage framework, I could move my connect and disconnect
>>>> logic out of the attach/detach logic.
>>>>
>>>>
>>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
>>>> mike.tutkowski@solidfire.com> wrote:
>>>>
>>>>> Conversely, if the storage framework called the DestroyCommand for
>>>>> managed storage after the DetachCommand, then I could have had my remove
>>>>> SR/datastore logic placed in the DestroyCommand handling rather than in the
>>>>> DetachCommand handling.
>>>>>
>>>>>
>>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>
>>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
>>>>>>
>>>>>> The initial approach that was discussed during 4.2 was for me to
>>>>>> modify the attach/detach logic only in the XenServer and VMware hypervisor
>>>>>> plug-ins.
>>>>>>
>>>>>> Now that I think about it more, though, I kind of would have liked to
>>>>>> have the storage framework send a CreateCommand to the hypervisor before
>>>>>> sending the AttachCommand if the storage in question was managed.
>>>>>>
>>>>>> Then I could have created my SR/datastore in the CreateCommand and
>>>>>> the AttachCommand would have had the SR/datastore that it was always
>>>>>> expecting (and I wouldn't have had to create the SR/datastore in the
>>>>>> AttachCommand).
>>>>>>
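
To make the proposed ordering concrete, here is a rough sketch; the types and method names below are placeholders for illustration only, not the existing storage framework API.

    // Hypothetical orchestration sketch: for managed storage, bracket attach/detach
    // with create/destroy so the hypervisor-side connect/disconnect work (SR,
    // datastore, iscsiadm login/logout) can live in the create/destroy handlers.
    interface AgentChannel { void send(Object command); }

    class ManagedAttachOrderSketch {
        static void attach(AgentChannel host, Object createCmd, Object attachCmd, boolean managed) {
            if (managed) {
                host.send(createCmd);   // hypervisor prepares the SR/datastore/LUN first
            }
            host.send(attachCmd);       // attach then finds the storage already present
        }

        static void detach(AgentChannel host, Object detachCmd, Object destroyCmd, boolean managed) {
            host.send(detachCmd);
            if (managed) {
                host.send(destroyCmd);  // hypervisor tears the SR/datastore/LUN down afterwards
            }
        }
    }
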
>>>>>>
>>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <
>>>>>> shadowsor@gmail.com> wrote:
>>>>>>
>>>>>>> Yeah, I think it probably is as well, but I figured you'd be in a
>>>>>>> better position to tell.
>>>>>>>
>>>>>>> I see that copyAsync is unsupported in your current 4.2 driver, does
>>>>>>> that mean that there's no template support? Or is it some other call
>>>>>>> that does templating now? I'm still getting up to speed on all of the
>>>>>>> 4.2 changes. I was just looking at CreateCommand in
>>>>>>> LibvirtComputingResource, since that's the only place
>>>>>>> createPhysicalDisk is called, and it occurred to me that
>>>>>>> CreateCommand
>>>>>>> might be skipped altogether when utilizing storage plugins.
>>>>>>>
>>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>>>>>>> <mi...@solidfire.com> wrote:
>>>>>>> > That's an interesting comment, Marcus.
>>>>>>> >
>>>>>>> > It was my intent that it should work with any CloudStack "managed"
>>>>>>> storage
>>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote the
>>>>>>> code so
>>>>>>> > CHAP didn't have to be used.
>>>>>>> >
>>>>>>> > As I'm doing my testing, I can try to think about whether it is
>>>>>>> generic
>>>>>>> > enough to keep those names or not.
>>>>>>> >
>>>>>>> > My expectation is that it is generic enough.
>>>>>>> >
>>>>>>> >
>>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
>>>>>>> shadowsor@gmail.com>wrote:
>>>>>>> >
>>>>>>> >> I added a comment to your diff. In general I think it looks good,
>>>>>>> >> though I obviously can't vouch for whether or not it will work.
>>>>>>> One
>>>>>>> >> thing I do have reservations about is the adaptor/pool naming. If
>>>>>>> you
>>>>>>> >> think the code is generic enough that it will work for anyone who
>>>>>>> does
>>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's
>>>>>>> anything
>>>>>>> >> about it that's specific to YOUR iscsi target or how it likes to
>>>>>>> be
>>>>>>> >> treated then I'd say that they should be named something less
>>>>>>> generic
>>>>>>> >> than iScsiAdmStorage.
>>>>>>> >>
>>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>> >> > Great - thanks!
>>>>>>> >> >
>>>>>>> >> > Just to give you an overview of what my code does (for when you
>>>>>>> get a
>>>>>>> >> > chance to review it):
>>>>>>> >> >
>>>>>>> >> > SolidFireHostListener is registered in
>>>>>>> SolidfirePrimaryDataStoreProvider.
>>>>>>> >> > Its hostConnect method is invoked when a host connects with the
>>>>>>> CS MS. If
>>>>>>> >> > the host is running KVM, the listener sends a
>>>>>>> ModifyStoragePoolCommand to
>>>>>>> >> > the host. This logic was based off of DefaultHostListener.
>>>>>>> >> >
>>>>>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It
>>>>>>> invokes
>>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
>>>>>>> KVMStoragePoolManager
>>>>>>> >> > asks for an adaptor and finds my new one:
>>>>>>> iScsiAdmStorageAdaptor (which
>>>>>>> >> was
>>>>>>> >> > registered in the constructor for KVMStoragePoolManager under
>>>>>>> the key of
>>>>>>> >> > StoragePoolType.Iscsi.toString()).
>>>>>>> >> >
>>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance
>>>>>>> of
>>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the pointer
>>>>>>> to the
>>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the UUID of
>>>>>>> the storage
>>>>>>> >> > pool.
>>>>>>> >> >
>>>>>>> >> > When a volume is attached, createPhysicalDisk is invoked for
>>>>>>> managed
>>>>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
>>>>>>> iscsiadm to
>>>>>>> >> > establish the iSCSI connection to the volume on the SAN and a
>>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach logic that
>>>>>>> follows.
>>>>>>> >> >
>>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked with the
>>>>>>> IQN of the
>>>>>>> >> > volume if the storage pool in question is managed storage.
>>>>>>> Otherwise, the
>>>>>>> >> > normal vol.getPath() is used.
>>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used in the
>>>>>>> detach logic.
>>>>>>> >> >
>>>>>>> >> > Once the volume has been detached,
>>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
>>>>>>> >> > is invoked if the storage pool is managed. deletePhysicalDisk
>>>>>>> removes the
>>>>>>> >> > iSCSI connection to the volume using iscsiadm.
>>>>>>> >> >
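
To make the iscsiadm step concrete, here is a rough, self-contained sketch of the connect/disconnect calls involved; the class name is a stand-in for the adaptor described above, the portal and IQN values are placeholders, and error handling is kept to a minimum.

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    // Stand-in for the adaptor described above; only the iscsiadm invocations are shown.
    class IscsiAdmSketch {

        // e.g. portal "192.168.1.50:3260", iqn "iqn.2010-01.com.solidfire:volume-1" (placeholders)
        static void connect(String portal, String iqn) throws IOException, InterruptedException {
            run(Arrays.asList("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "-o", "new"));
            run(Arrays.asList("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"));
            // after login the LUN appears as a block device, e.g. under
            // /dev/disk/by-path/ip-<portal>-iscsi-<iqn>-lun-0, which can back the KVMPhysicalDisk
        }

        static void disconnect(String portal, String iqn) throws IOException, InterruptedException {
            run(Arrays.asList("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--logout"));
            run(Arrays.asList("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "-o", "delete"));
        }

        private static void run(List<String> cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + cmd);
            }
        }
    }
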
>>>>>>> >> >
>>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>>>>>>> shadowsor@gmail.com
>>>>>>> >> >wrote:
>>>>>>> >> >
>>>>>>> >> >> It's the log4j properties file in /etc/cloudstack/agent; change
>>>>>>> all INFO
>>>>>>> >> to
>>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can tail
>>>>>>> the log
>>>>>>> >> when
>>>>>>> >> >> you try to start the service, or maybe it will spit something
>>>>>>> out into
>>>>>>> >> one
>>>>>>> >> >> of the other files in /var/log/cloudstack/agent
>>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>>>>>>> mike.tutkowski@solidfire.com
>>>>>>> >> >
>>>>>>> >> >> wrote:
>>>>>>> >> >>
>>>>>>> >> >> > This is how I've been trying to query for the status of the
>>>>>>> service (I
>>>>>>> >> >> > assume it could be started this way, as well, by changing
>>>>>>> "status" to
>>>>>>> >> >> > "start" or "restart"?):
>>>>>>> >> >> >
>>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>>>> /usr/sbin/service
>>>>>>> >> >> > cloudstack-agent status
>>>>>>> >> >> >
>>>>>>> >> >> > I get this back:
>>>>>>> >> >> >
>>>>>>> >> >> > Failed to execute: * could not access PID file for
>>>>>>> cloudstack-agent
>>>>>>> >> >> >
>>>>>>> >> >> > I've made a bunch of code changes recently, though, so I
>>>>>>> think I'm
>>>>>>> >> going
>>>>>>> >> >> to
>>>>>>> >> >> > rebuild and redeploy everything.
>>>>>>> >> >> >
>>>>>>> >> >> > The debug info sounds helpful. Where can I set enable.debug?
>>>>>>> >> >> >
>>>>>>> >> >> > Thanks, Marcus!
>>>>>>> >> >> >
>>>>>>> >> >> >
>>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>>>>>>> shadowsor@gmail.com
>>>>>>> >> >> > >wrote:
>>>>>>> >> >> >
>>>>>>> >> >> > > OK, will check it out in the next few days. As mentioned,
>>>>>>> you can
>>>>>>> >> set
>>>>>>> >> >> up
>>>>>>> >> >> > > your Ubuntu vm as the management server as well if all
>>>>>>> else fails.
>>>>>>> >>  If
>>>>>>> >> >> > you
>>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM host, then
>>>>>>> you need
>>>>>>> >> to
>>>>>>> >> >> > > enable.debug on the agent. It won't run without
>>>>>>> complaining loudly
>>>>>>> >> if
>>>>>>> >> >> it
>>>>>>> >> >> > > can't get to the mgmt server, and I didn't see that in
>>>>>>> your agent
>>>>>>> >> log,
>>>>>>> >> >> so
>>>>>>> >> >> > > perhaps it's not running. I assume you know how to
>>>>>>> stop/start the
>>>>>>> >> agent
>>>>>>> >> >> on
>>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
>>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>>>>>>> >> >> mike.tutkowski@solidfire.com>
>>>>>>> >> >> > > wrote:
>>>>>>> >> >> > >
>>>>>>> >> >> > > > Hey Marcus,
>>>>>>> >> >> > > >
>>>>>>> >> >> > > > I haven't yet been able to test my new code, but I
>>>>>>> thought you
>>>>>>> >> would
>>>>>>> >> >> > be a
>>>>>>> >> >> > > > good person to ask to review it:
>>>>>>> >> >> > > >
>>>>>>> >> >> > > >
>>>>>>> >> >> > > >
>>>>>>> >> >> > >
>>>>>>> >> >> >
>>>>>>> >> >>
>>>>>>> >>
>>>>>>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>>>>>>> >> >> > > >
>>>>>>> >> >> > > > All it is supposed to do is attach and detach a data
>>>>>>> disk (that
>>>>>>> >> has
>>>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data
>>>>>>> disk
>>>>>>> >> happens to
>>>>>>> >> >> > be
>>>>>>> >> >> > > > from SolidFire-backed storage - where we have a 1:1
>>>>>>> mapping
>>>>>>> >> between a
>>>>>>> >> >> > > > CloudStack volume and a data disk.
>>>>>>> >> >> > > >
>>>>>>> >> >> > > > There is no support for hypervisor snapshots or stuff
>>>>>>> like that
>>>>>>> >> >> > (likely a
>>>>>>> >> >> > > > future release)...just attaching and detaching a data
>>>>>>> disk in 4.3.
>>>>>>> >> >> > > >
>>>>>>> >> >> > > > Thanks!
>>>>>>> >> >> > > >
>>>>>>> >> >> > > >
>>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>>>>>>> >> >> > > >
>>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
>>>>>>> cloudstack-agent
>>>>>>> >> >> first.
>>>>>>> >> >> > > > Would
>>>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get install
>>>>>>> >> >> > cloudstack-agent.
>>>>>>> >> >> > > > >
>>>>>>> >> >> > > > >
>>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>>>>>>> >> >> > > > >
>>>>>>> >> >> > > > >> I get the same error running the command manually:
>>>>>>> >> >> > > > >>
>>>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>>>> >> /usr/sbin/service
>>>>>>> >> >> > > > >> cloudstack-agent status
>>>>>>> >> >> > > > >>  * could not access PID file for cloudstack-agent
>>>>>>> >> >> > > > >>
>>>>>>> >> >> > > > >>
>>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>>>>>>> >> >> > > > >>
>>>>>>> >> >> > > > >>> agent.log looks OK to me:
>>>>>>> >> >> > > > >>>
>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO
>>>>>>>  [cloud.agent.AgentShell]
>>>>>>> >> >> (main:null)
>>>>>>> >> >> > > > Agent
>>>>>>> >> >> > > > >>> started
>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO
>>>>>>>  [cloud.agent.AgentShell]
>>>>>>> >> >> (main:null)
>>>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO
>>>>>>>  [cloud.agent.AgentShell]
>>>>>>> >> >> (main:null)
>>>>>>> >> >> > > > >>> agent.properties found at
>>>>>>> >> /etc/cloudstack/agent/agent.properties
>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO
>>>>>>>  [cloud.agent.AgentShell]
>>>>>>> >> >> (main:null)
>>>>>>> >> >> > > > >>> Defaulting to using properties file for storage
>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO
>>>>>>>  [cloud.agent.AgentShell]
>>>>>>> >> >> (main:null)
>>>>>>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
>>>>>>> >> (main:null)
>>>>>>> >> >> > > log4j
>>>>>>> >> >> > > > >>> configuration found at
>>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>>>>>>> (main:null)
>>>>>>> >> id
>>>>>>> >> >> > is 3
>>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>>>>>>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
>>>>>>> (main:null)
>>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>>>>>>> >> >> scripts/network/domr/kvm
>>>>>>> >> >> > > > >>>
>>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was
>>>>>>> important. This
>>>>>>> >> seems
>>>>>>> >> >> to
>>>>>>> >> >> > > be
>>>>>>> >> >> > > > a
>>>>>>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
>>>>>>> >> >> > > > >>>
>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>>>>> cloudstack-agent
>>>>>>> >> status
>>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID
>>>>>>> file for
>>>>>>> >> >> > > > >>> cloudstack-agent
>>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>>>>> cloudstack-agent
>>>>>>> >> start
>>>>>>> >> >> > > > >>>
>>>>>>> >> >> > > > >>>
>>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>>>>>>> >> >> > > shadowsor@gmail.com
>>>>>>> >> >> > > > >wrote:
>>>>>>> >> >> > > > >>>
>>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was the
>>>>>>> agent log
>>>>>>> >> for
>>>>>>> >> >> > > some
>>>>>>> >> >> > > > >>>> reason. Is the agent started? That might be the
>>>>>>> place to
>>>>>>> >> look.
>>>>>>> >> >> > There
>>>>>>> >> >> > > > is
>>>>>>> >> >> > > > >>>> an
>>>>>>> >> >> > > > >>>> agent log for the agent and one for the setup when
>>>>>>> it adds
>>>>>>> >> the
>>>>>>> >> >> > host,
>>>>>>> >> >> > > > >>>> both
>>>>>>> >> >> > > > >>>> in /var/log
>>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>>>>>>> >> >> > > > >>>> wrote:
>>>>>>> >> >> > > > >>>>
>>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP address or
>>>>>>> the KVM
>>>>>>> >> host?
>>>>>>> >> >> > > > >>>> >
>>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>>>>>>> >> >> > > > >>>> >
>>>>>>> >> >> > > > >>>> > I see this for my host Global Settings parameter:
>>>>>>> >> >> > > > >>>> > host = 192.168.233.1 (The ip address of management server)
>>>>>>> >> >> > > > >>>> >
>>>>>>> >> >> > > > >>>> >
>>>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>>>>>>> >> >> host=192.168.233.1
>>>>>>> >> >> > > > value.
>>>>>>> >> >> > > > >>>> >
>>>>>>> >> >> > > > >>>> >
>>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen
>>>>>>> <
>>>>>>> >> >> > > > >>>> shadowsor@gmail.com
>>>>>>> >> >> > > > >>>> > >wrote:
>>>>>>> >> >> > > > >>>> >
>>>>>>> >> >> > > > >>>> > > The log says your mgmt server is
>>>>>>> 192.168.233.10? But you
>>>>>>> >> >> tried
>>>>>>> >> >> > > to
>>>>>>> >> >> > > > >>>> telnet
>>>>>>> >> >> > > > >>>> > to
>>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change
>>>>>>> that in
>>>>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you
>>>>>>> may want
>>>>>>> >> to
>>>>>>> >> >> > edit
>>>>>>> >> >> > > > the
>>>>>>> >> >> > > > >>>> > config
>>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>>>>>>> >> >> > > > >>>> > >
>>>>>>> >> >> > > > >>>> > > wrote:
>>>>>>> >> >> > > > >>>> > >
>>>>>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file
>>>>>>> looks
>>>>>>> >> like, if
>>>>>>> >> >> > > that
>>>>>>> >> >> > > > >>>> is of
>>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the
>>>>>>> NAT network
>>>>>>> >> >> > VMware
>>>>>>> >> >> > > > >>>> Fusion
>>>>>>> >> >> > > > >>>> > set
>>>>>>> >> >> > > > >>>> > > > up):
>>>>>>> >> >> > > > >>>> > > >
>>>>>>> >> >> > > > >>>> > > > auto lo
>>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
>>>>>>> >> >> > > > >>>> > > >
>>>>>>> >> >> > > > >>>> > > > auto eth0
>>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
>>>>>>> >> >> > > > >>>> > > >
>>>>>>> >> >> > > > >>>> > > > auto cloudbr0
>>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
>>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
>>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
>>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
>>>>>>> >> >> > > > >>>> > > >     bridge_stp off
>>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
>>>>>>> >> >> > > > >>>> > > >     post-up route add default gw
>>>>>>> 192.168.233.2 metric 1
>>>>>>> >> >> > > > >>>> > > >     pre-down route del default gw
>>>>>>> 192.168.233.2
>>>>>>> >> >> > > > >>>> > > >
>>>>>>> >> >> > > > >>>> > > >
>>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike
>>>>>>> Tutkowski <
>>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>>>>>>> >> >> > > > >>>> > > >
>>>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is from the
>>>>>>> MS log
>>>>>>> >> >> (below).
>>>>>>> >> >> > > > >>>> Discovery
>>>>>>> >> >> > > > >>>> > > > timed
>>>>>>> >> >> > > > >>>> > > > > out.
>>>>>>> >> >> > > > >>>> > > > >
>>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My network
>>>>>>> settings
>>>>>>> >> >> > > shouldn't
>>>>>>> >> >> > > > >>>> have
>>>>>>> >> >> > > > >>>> > > > changed
>>>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
>>>>>>> >> >> > > > >>>> > > > >
>>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the MS
>>>>>>> host and
>>>>>>> >> vice
>>>>>>> >> >> > > > versa.
>>>>>>> >> >> > > > >>>> > > > >
>>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM on
>>>>>>> the KVM
>>>>>>> >> host
>>>>>>> >> >> > and
>>>>>>> >> >> > > > >>>> ping from
>>>>>>> >> >> > > > >>>> > > it
>>>>>>> >> >> > > > >>>> > > > > to the MS host.
>>>>>>> >> >> > > > >>>> > > > >
>>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also
>>>>>>> running
>>>>>>> >> the
>>>>>>> >> >> CS
>>>>>>> >> >> > > MS)
>>>>>>> >> >> > > > >>>> to the
>>>>>>> >> >> > > > >>>> > VM
>>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>>>>>>> >> >> > > > >>>> > > > >
>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>>> Timeout,
>>>>>>> >> to
>>>>>>> >> >> > wait
>>>>>>> >> >> > > > for
>>>>>>> >> >> > > > >>>> the
>>>>>>> >> >> > > > >>>> > > host
>>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>>> Unable to
>>>>>>> >> >> find
>>>>>>> >> >> > > the
>>>>>>> >> >> > > > >>>> server
>>>>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>>> Could not
>>>>>>> >> >> find
>>>>>>> >> >> > > > >>>> exception:
>>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in
>>>>>>> error code
>>>>>>> >> >> list
>>>>>>> >> >> > > for
>>>>>>> >> >> > > > >>>> > > exceptions
>>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>>> Exception:
>>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException:
>>>>>>> Unable to add
>>>>>>> >> >> the
>>>>>>> >> >> > > host
>>>>>>> >> >> > > > >>>> > > > > at
>>>>>>> >> >> > > > >>>> > > > >
>>>>>>> >> >> > > > >>>> > > >
>>>>>>> >> >> > > > >>>> > >
>>>>>>> >> >> > > > >>>> >
>>>>>>> >> >> > > > >>>>
>>>>>>> >> >> > > >
>>>>>>> >> >> > >
>>>>>>> >> >> >
>>>>>>> >> >>
>>>>>>> >>
>>>>>>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>>>>>>> >> >> > > > >>>> > > > >
>>>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my
>>>>>>> KVM host to
>>>>>>> >> >> the
>>>>>>> >> >> > MS
>>>>>>> >> >> > > > >>>> host's
>>>>>>> >> >> > > > >>>> > > 8250
>>>>>>> >> >> > > > >>>> > > > > port:
>>>>>>> >> >> > > > >>>> > > > >
>>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1
>>>>>>> 8250
>>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
>>>>>>> >> >> > > > >>>> > > > >
>>>>>>> >> >> > > > >>>> > > >
>>>>>>> >> >> > > > >>>> > > >
>>>>>>> >> >> > > > >>>> > > >



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hey Marcus,

Maybe you could give me a better idea of what the "flow" is when adding a
KVM host.

It looks like we SSH into the potential KVM host and execute a startup
script (giving it necessary info about the cloud and the management server
it should talk to).

After this, is the Java VM started?

After a reboot, I assume the JVM is started automatically?

How do you debug your KVM-side Java code?

Been looking through the logs and nothing obvious sticks out. I will have
another look.

Thanks
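
For what it's worth, the agent on a KVM host is a long-running JVM managed as the cloudstack-agent service (the jsvc process grepped for earlier in this thread); on startup it reads /etc/cloudstack/agent/agent.properties and connects back to the management server, by default on port 8250. The sketch below only illustrates that first reachability step, the same thing the telnet test checks; it is not the actual agent code.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.net.Socket;
    import java.util.Properties;

    // Illustrative only: read the agent config and confirm the management
    // server port is reachable, which is all the telnet test verifies.
    class AgentConnectCheck {
        public static void main(String[] args) throws IOException {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream("/etc/cloudstack/agent/agent.properties")) {
                props.load(in);
            }
            String msHost = props.getProperty("host");                        // e.g. 192.168.233.1
            int msPort = Integer.parseInt(props.getProperty("port", "8250")); // default assumed
            try (Socket ignored = new Socket(msHost, msPort)) {
                System.out.println("Management server reachable at " + msHost + ":" + msPort);
            }
        }
    }

If that check passes but the host still never registers, the agent log under /var/log/cloudstack/agent (mentioned elsewhere in the thread) is the place to look.
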


On Mon, Sep 23, 2013 at 2:15 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Hey Marcus,
>
> I've been investigating my issue with not being able to add a KVM host to
> CS.
>
> For what it's worth, this comes back successful:
>
> SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent " +
> parameters, 3);
>
> This is what the command looks like:
>
> cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
> 6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0 --prvNic=cloudbr0
> --guestNic=cloudbr0
>
> The problem is this method in LibvirtServerDiscoverer never finds a
> matching host in the DB:
>
> waitForHostConnect(long dcId, long podId, long clusterId, String guid)
>
> I assume once the KVM host is up and running that it's supposed to call
> into the CS MS so the DB can be updated as such?
>
> If so, the problem must be on the KVM side.
>
> I did run this again (from the KVM host) to see if the connection was in
> place:
>
> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>
> Trying 192.168.233.1...
>
> Connected to 192.168.233.1.
>
> Escape character is '^]'.
> So that looks good.
>
> I turned on more info in the debug log, but nothing obvious jumps out as
> of yet.
>
> If you have any thoughts on this, please shoot them my way. :)
>
> Thanks!
>
>
> On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> First step is for me to get this working for KVM, though. :)
>>
>> Once I do that, I can perhaps make modifications to the storage framework
>> and hypervisor plug-ins to refactor the logic and such.
>>
>>
>> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> Same would work for KVM.
>>>
>>> If CreateCommand and DestroyCommand were called at the appropriate times
>>> by the storage framework, I could move my connect and disconnect logic out
>>> of the attach/detach logic.
>>>
>>>
>>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
>>> mike.tutkowski@solidfire.com> wrote:
>>>
>>>> Conversely, if the storage framework called the DestroyCommand for
>>>> managed storage after the DetachCommand, then I could have had my remove
>>>> SR/datastore logic placed in the DestroyCommand handling rather than in the
>>>> DetachCommand handling.
>>>>
>>>>
>>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>>>> mike.tutkowski@solidfire.com> wrote:
>>>>
>>>>> Edison's plug-in calls the CreateCommand. Mine does not.
>>>>>
>>>>> The initial approach that was discussed during 4.2 was for me to
>>>>> modify the attach/detach logic only in the XenServer and VMware hypervisor
>>>>> plug-ins.
>>>>>
>>>>> Now that I think about it more, though, I kind of would have liked to
>>>>> have the storage framework send a CreateCommand to the hypervisor before
>>>>> sending the AttachCommand if the storage in question was managed.
>>>>>
>>>>> Then I could have created my SR/datastore in the CreateCommand and the
>>>>> AttachCommand would have had the SR/datastore that it was always expecting
>>>>> (and I wouldn't have had to create the SR/datastore in the AttachCommand).
>>>>>
>>>>>
>>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <shadowsor@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> Yeah, I think it probably is as well, but I figured you'd be in a
>>>>>> better position to tell.
>>>>>>
>>>>>> I see that copyAsync is unsupported in your current 4.2 driver, does
>>>>>> that mean that there's no template support? Or is it some other call
>>>>>> that does templating now? I'm still getting up to speed on all of the
>>>>>> 4.2 changes. I was just looking at CreateCommand in
>>>>>> LibvirtComputingResource, since that's the only place
>>>>>> createPhysicalDisk is called, and it occurred to me that CreateCommand
>>>>>> might be skipped altogether when utilizing storage plugins.
>>>>>>
>>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>>>>>> <mi...@solidfire.com> wrote:
>>>>>> > That's an interesting comment, Marcus.
>>>>>> >
>>>>>> > It was my intent that it should work with any CloudStack "managed"
>>>>>> storage
>>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote the
>>>>>> code so
>>>>>> > CHAP didn't have to be used.
>>>>>> >
>>>>>> > As I'm doing my testing, I can try to think about whether it is
>>>>>> generic
>>>>>> > enough to keep those names or not.
>>>>>> >
>>>>>> > My expectation is that it is generic enough.
>>>>>> >
>>>>>> >
>>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
>>>>>> shadowsor@gmail.com>wrote:
>>>>>> >
>>>>>> >> I added a comment to your diff. In general I think it looks good,
>>>>>> >> though I obviously can't vouch for whether or not it will work. One
>>>>>> >> thing I do have reservations about is the adaptor/pool naming. If
>>>>>> you
>>>>>> >> think the code is generic enough that it will work for anyone who
>>>>>> does
>>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's
>>>>>> anything
>>>>>> >> about it that's specific to YOUR iscsi target or how it likes to be
>>>>>> >> treated then I'd say that they should be named something less
>>>>>> generic
>>>>>> >> than iScsiAdmStorage.
>>>>>> >>
>>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>> >> > Great - thanks!
>>>>>> >> >
>>>>>> >> > Just to give you an overview of what my code does (for when you
>>>>>> get a
>>>>>> >> > chance to review it):
>>>>>> >> >
>>>>>> >> > SolidFireHostListener is registered in
>>>>>> SolidfirePrimaryDataStoreProvider.
>>>>>> >> > Its hostConnect method is invoked when a host connects with the
>>>>>> CS MS. If
>>>>>> >> > the host is running KVM, the listener sends a
>>>>>> ModifyStoragePoolCommand to
>>>>>> >> > the host. This logic was based off of DefaultHostListener.
>>>>>> >> >
>>>>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
>>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
>>>>>> KVMStoragePoolManager
>>>>>> >> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor
>>>>>> (which
>>>>>> >> was
>>>>>> >> > registered in the constructor for KVMStoragePoolManager under
>>>>>> the key of
>>>>>> >> > StoragePoolType.Iscsi.toString()).
>>>>>> >> >
>>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance
>>>>>> of
>>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the pointer
>>>>>> to the
>>>>>> >> > iScsiAdmStoragePool object. The key of the map is the UUID of
>>>>>> the storage
>>>>>> >> > pool.
>>>>>> >> >
>>>>>> >> > When a volume is attached, createPhysicalDisk is invoked for
>>>>>> managed
>>>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
>>>>>> iscsiadm to
>>>>>> >> > establish the iSCSI connection to the volume on the SAN and a
>>>>>> >> > KVMPhysicalDisk is returned to be used in the attach logic that
>>>>>> follows.
>>>>>> >> >
>>>>>> >> > When a volume is detached, getPhysicalDisk is invoked with the
>>>>>> IQN of the
>>>>>> >> > volume if the storage pool in question is managed storage.
>>>>>> Otherwise, the
>>>>>> >> > normal vol.getPath() is used.
>>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>>>>>> >> > returns a new instance of KVMPhysicalDisk to be used in the
>>>>>> detach logic.
>>>>>> >> >
>>>>>> >> > Once the volume has been detached,
>>>>>> iScsiAdmStoragePool.deletePhysicalDisk
>>>>>> >> > is invoked if the storage pool is managed. deletePhysicalDisk
>>>>>> removes the
>>>>>> >> > iSCSI connection to the volume using iscsiadm.
>>>>>> >> >
>>>>>> >> >
>>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>>>>>> shadowsor@gmail.com
>>>>>> >> >wrote:
>>>>>> >> >
>>>>>> >> >> It's the log4j properties file in /etc/cloudstack/agent; change
>>>>>> all INFO
>>>>>> >> to
>>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can tail
>>>>>> the log
>>>>>> >> when
>>>>>> >> >> you try to start the service, or maybe it will spit something
>>>>>> out into
>>>>>> >> one
>>>>>> >> >> of the other files in /var/log/cloudstack/agent
>>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>>>>>> mike.tutkowski@solidfire.com
>>>>>> >> >
>>>>>> >> >> wrote:
>>>>>> >> >>
>>>>>> >> >> > This is how I've been trying to query for the status of the
>>>>>> service (I
>>>>>> >> >> > assume it could be started this way, as well, by changing
>>>>>> "status" to
>>>>>> >> >> > "start" or "restart"?):
>>>>>> >> >> >
>>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>>> /usr/sbin/service
>>>>>> >> >> > cloudstack-agent status
>>>>>> >> >> >
>>>>>> >> >> > I get this back:
>>>>>> >> >> >
>>>>>> >> >> > Failed to execute: * could not access PID file for
>>>>>> cloudstack-agent
>>>>>> >> >> >
>>>>>> >> >> > I've made a bunch of code changes recently, though, so I
>>>>>> think I'm
>>>>>> >> going
>>>>>> >> >> to
>>>>>> >> >> > rebuild and redeploy everything.
>>>>>> >> >> >
>>>>>> >> >> > The debug info sounds helpful. Where can I set enable.debug?
>>>>>> >> >> >
>>>>>> >> >> > Thanks, Marcus!
>>>>>> >> >> >
>>>>>> >> >> >
>>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>>>>>> shadowsor@gmail.com
>>>>>> >> >> > >wrote:
>>>>>> >> >> >
>>>>>> >> >> > > OK, will check it out in the next few days. As mentioned,
>>>>>> you can
>>>>>> >> set
>>>>>> >> >> up
>>>>>> >> >> > > your Ubuntu vm as the management server as well if all else
>>>>>> fails.
>>>>>> >>  If
>>>>>> >> >> > you
>>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM host, then
>>>>>> you need
>>>>>> >> to
>>>>>> >> >> > > enable.debug on the agent. It won't run without complaining
>>>>>> loudly
>>>>>> >> if
>>>>>> >> >> it
>>>>>> >> >> > > can't get to the mgmt server, and I didn't see that in your
>>>>>> agent
>>>>>> >> log,
>>>>>> >> >> so
>>>>>> >> >> > > perhaps it's not running. I assume you know how to
>>>>>> stop/start the
>>>>>> >> agent
>>>>>> >> >> on
>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
>>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>>>>>> >> >> mike.tutkowski@solidfire.com>
>>>>>> >> >> > > wrote:
>>>>>> >> >> > >
>>>>>> >> >> > > > Hey Marcus,
>>>>>> >> >> > > >
>>>>>> >> >> > > > I haven't yet been able to test my new code, but I
>>>>>> thought you
>>>>>> >> would
>>>>>> >> >> > be a
>>>>>> >> >> > > > good person to ask to review it:
>>>>>> >> >> > > >
>>>>>> >> >> > > >
>>>>>> >> >> > > >
>>>>>> >> >> > >
>>>>>> >> >> >
>>>>>> >> >>
>>>>>> >>
>>>>>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>>>>>> >> >> > > >
>>>>>> >> >> > > > All it is supposed to do is attach and detach a data disk
>>>>>> (that
>>>>>> >> has
>>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
>>>>>> >> happens to
>>>>>> >> >> > be
>>>>>> >> >> > > > from SolidFire-backed storage - where we have a 1:1
>>>>>> mapping
>>>>>> >> between a
>>>>>> >> >> > > > CloudStack volume and a data disk.
>>>>>> >> >> > > >
>>>>>> >> >> > > > There is no support for hypervisor snapshots or stuff
>>>>>> like that
>>>>>> >> >> > (likely a
>>>>>> >> >> > > > future release)...just attaching and detaching a data
>>>>>> disk in 4.3.
>>>>>> >> >> > > >
>>>>>> >> >> > > > Thanks!
>>>>>> >> >> > > >
>>>>>> >> >> > > >
>>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>>>>>> >> >> > > >
>>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
>>>>>> cloudstack-agent
>>>>>> >> >> first.
>>>>>> >> >> > > > Would
>>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get install
>>>>>> >> >> > cloudstack-agent.
>>>>>> >> >> > > > >
>>>>>> >> >> > > > >
>>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>>>>>> >> >> > > > >
>>>>>> >> >> > > > >> I get the same error running the command manually:
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>>> >> /usr/sbin/service
>>>>>> >> >> > > > >> cloudstack-agent status
>>>>>> >> >> > > > >>  * could not access PID file for cloudstack-agent
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >>> agent.log looks OK to me:
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
>>>>>> >> >> (main:null)
>>>>>> >> >> > > > Agent
>>>>>> >> >> > > > >>> started
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
>>>>>> >> >> (main:null)
>>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
>>>>>> >> >> (main:null)
>>>>>> >> >> > > > >>> agent.properties found at
>>>>>> >> /etc/cloudstack/agent/agent.properties
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
>>>>>> >> >> (main:null)
>>>>>> >> >> > > > >>> Defaulting to using properties file for storage
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
>>>>>> >> >> (main:null)
>>>>>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
>>>>>> >> (main:null)
>>>>>> >> >> > > log4j
>>>>>> >> >> > > > >>> configuration found at
>>>>>> /etc/cloudstack/agent/log4j-cloud.xml
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>>>>>> (main:null)
>>>>>> >> id
>>>>>> >> >> > is 3
>>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>>>>>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
>>>>>> (main:null)
>>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>>>>>> >> >> scripts/network/domr/kvm
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was important.
>>>>>> This
>>>>>> >> seems
>>>>>> >> >> to
>>>>>> >> >> > > be
>>>>>> >> >> > > > a
>>>>>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>>>> cloudstack-agent
>>>>>> >> status
>>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID
>>>>>> file for
>>>>>> >> >> > > > >>> cloudstack-agent
>>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>>>> cloudstack-agent
>>>>>> >> start
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>>>>>> >> >> > > shadowsor@gmail.com
>>>>>> >> >> > > > >wrote:
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was the
>>>>>> agent log
>>>>>> >> for
>>>>>> >> >> > > some
>>>>>> >> >> > > > >>>> reason. Is the agent started? That might be the
>>>>>> place to
>>>>>> >> look.
>>>>>> >> >> > There
>>>>>> >> >> > > > is
>>>>>> >> >> > > > >>>> an
>>>>>> >> >> > > > >>>> agent log for the agent and one for the setup when
>>>>>> it adds
>>>>>> >> the
>>>>>> >> >> > host,
>>>>>> >> >> > > > >>>> both
>>>>>> >> >> > > > >>>> in /var/log
>>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>>>>>> >> >> > > > >>>> wrote:
>>>>>> >> >> > > > >>>>
>>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP address or
>>>>>> the KVM
>>>>>> >> host?
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > I see this for my host Global Settings parameter:
>>>>>> >> >> > > > >>>> > host = 192.168.233.1 (The ip address of management server)
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>>>>>> >> >> host=192.168.233.1
>>>>>> >> >> > > > value.
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
>>>>>> >> >> > > > >>>> shadowsor@gmail.com
>>>>>> >> >> > > > >>>> > >wrote:
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10?
>>>>>> But you
>>>>>> >> >> tried
>>>>>> >> >> > > to
>>>>>> >> >> > > > >>>> telnet
>>>>>> >> >> > > > >>>> > to
>>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change that
>>>>>> in
>>>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you
>>>>>> may want
>>>>>> >> to
>>>>>> >> >> > edit
>>>>>> >> >> > > > the
>>>>>> >> >> > > > >>>> > config
>>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>>>>>> >> >> > > > >>>> > >
>>>>>> >> >> > > > >>>> > > wrote:
>>>>>> >> >> > > > >>>> > >
>>>>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file
>>>>>> looks
>>>>>> >> like, if
>>>>>> >> >> > > that
>>>>>> >> >> > > > >>>> is of
>>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT
>>>>>> network
>>>>>> >> >> > VMware
>>>>>> >> >> > > > >>>> Fusion
>>>>>> >> >> > > > >>>> > set
>>>>>> >> >> > > > >>>> > > > up):
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > auto lo
>>>>>> >> >> > > > >>>> > > > iface lo inet loopback
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > auto eth0
>>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > auto cloudbr0
>>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
>>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
>>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
>>>>>> >> >> > > > >>>> > > >     bridge_fd 5
>>>>>> >> >> > > > >>>> > > >     bridge_stp off
>>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
>>>>>> >> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2
>>>>>> metric 1
>>>>>> >> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike
>>>>>> Tutkowski <
>>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is from the
>>>>>> MS log
>>>>>> >> >> (below).
>>>>>> >> >> > > > >>>> Discovery
>>>>>> >> >> > > > >>>> > > > timed
>>>>>> >> >> > > > >>>> > > > > out.
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My network
>>>>>> settings
>>>>>> >> >> > > shouldn't
>>>>>> >> >> > > > >>>> have
>>>>>> >> >> > > > >>>> > > > changed
>>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the MS
>>>>>> host and
>>>>>> >> vice
>>>>>> >> >> > > > versa.
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM on
>>>>>> the KVM
>>>>>> >> host
>>>>>> >> >> > and
>>>>>> >> >> > > > >>>> ping from
>>>>>> >> >> > > > >>>> > > it
>>>>>> >> >> > > > >>>> > > > > to the MS host.
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also
>>>>>> running
>>>>>> >> the
>>>>>> >> >> CS
>>>>>> >> >> > > MS)
>>>>>> >> >> > > > >>>> to the
>>>>>> >> >> > > > >>>> > VM
>>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>> Timeout,
>>>>>> >> to
>>>>>> >> >> > wait
>>>>>> >> >> > > > for
>>>>>> >> >> > > > >>>> the
>>>>>> >> >> > > > >>>> > > host
>>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>>>>>> >> >>  [c.c.r.ResourceManagerImpl]
>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>> Unable to
>>>>>> >> >> find
>>>>>> >> >> > > the
>>>>>> >> >> > > > >>>> server
>>>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>> Could not
>>>>>> >> >> find
>>>>>> >> >> > > > >>>> exception:
>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in
>>>>>> error code
>>>>>> >> >> list
>>>>>> >> >> > > for
>>>>>> >> >> > > > >>>> > > exceptions
>>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>>> Exception:
>>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException:
>>>>>> Unable to add
>>>>>> >> >> the
>>>>>> >> >> > > host
>>>>>> >> >> > > > >>>> > > > > at
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>>
>>>>>> >> >> > > >
>>>>>> >> >> > >
>>>>>> >> >> >
>>>>>> >> >>
>>>>>> >>
>>>>>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my
>>>>>> KVM host to
>>>>>> >> >> the
>>>>>> >> >> > MS
>>>>>> >> >> > > > >>>> host's
>>>>>> >> >> > > > >>>> > > 8250
>>>>>> >> >> > > > >>>> > > > > port:
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1
>>>>>> 8250
>>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
>>>>>> >> >> > > > >>>> > > > >
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > > > --
>>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
>>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > >>>> > > > o: 303.746.7302
>>>>>> >> >> > > > >>>> > > > Advancing the way the world uses the
>>>>>> >> >> > > > >>>> > > > cloud<
>>>>>> >> http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > >>>> > > > *™*
>>>>>> >> >> > > > >>>> > > >
>>>>>> >> >> > > > >>>> > >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>> > --
>>>>>> >> >> > > > >>>> > *Mike Tutkowski*
>>>>>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > >>>> > o: 303.746.7302
>>>>>> >> >> > > > >>>> > Advancing the way the world uses the
>>>>>> >> >> > > > >>>> > cloud<
>>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > >>>> > *™*
>>>>>> >> >> > > > >>>> >
>>>>>> >> >> > > > >>>>
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>> --
>>>>>> >> >> > > > >>> *Mike Tutkowski*
>>>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > >>> o: 303.746.7302
>>>>>> >> >> > > > >>> Advancing the way the world uses the cloud<
>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > >>> *™*
>>>>>> >> >> > > > >>>
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >> --
>>>>>> >> >> > > > >> *Mike Tutkowski*
>>>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > >> o: 303.746.7302
>>>>>> >> >> > > > >> Advancing the way the world uses the cloud<
>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > >> *™*
>>>>>> >> >> > > > >>
>>>>>> >> >> > > > >
>>>>>> >> >> > > > >
>>>>>> >> >> > > > >
>>>>>> >> >> > > > > --
>>>>>> >> >> > > > > *Mike Tutkowski*
>>>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > > o: 303.746.7302
>>>>>> >> >> > > > > Advancing the way the world uses the cloud<
>>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > > *™*
>>>>>> >> >> > > > >
>>>>>> >> >> > > >
>>>>>> >> >> > > >
>>>>>> >> >> > > >
>>>>>> >> >> > > > --
>>>>>> >> >> > > > *Mike Tutkowski*
>>>>>> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > > > e: mike.tutkowski@solidfire.com
>>>>>> >> >> > > > o: 303.746.7302
>>>>>> >> >> > > > Advancing the way the world uses the
>>>>>> >> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > > > *™*
>>>>>> >> >> > > >
>>>>>> >> >> > >
>>>>>> >> >> >
>>>>>> >> >> >
>>>>>> >> >> >
>>>>>> >> >> > --
>>>>>> >> >> > *Mike Tutkowski*
>>>>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> >> > e: mike.tutkowski@solidfire.com
>>>>>> >> >> > o: 303.746.7302
>>>>>> >> >> > Advancing the way the world uses the
>>>>>> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> >> >> > *™*
>>>>>> >> >> >
>>>>>> >> >>
>>>>>> >> >
>>>>>> >> >
>>>>>> >> >
>>>>>> >> > --
>>>>>> >> > *Mike Tutkowski*
>>>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> >> > e: mike.tutkowski@solidfire.com
>>>>>> >> > o: 303.746.7302
>>>>>> >> > Advancing the way the world uses the
>>>>>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> >> > *™*
>>>>>> >>
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > --
>>>>>> > *Mike Tutkowski*
>>>>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> > e: mike.tutkowski@solidfire.com
>>>>>> > o: 303.746.7302
>>>>>> > Advancing the way the world uses the
>>>>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> > *™*
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Mike Tutkowski*
>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> e: mike.tutkowski@solidfire.com
>>>>> o: 303.746.7302
>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> *™*
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Mike Tutkowski*
>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>> e: mike.tutkowski@solidfire.com
>>>> o: 303.746.7302
>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>> *™*
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hey Marcus,

I've been investigating my issue with not being able to add a KVM host to
CS.

For what it's worth, this comes back successful:

SSHCmdHelper.sshExecuteCmd(sshConnection, "cloudstack-setup-agent " +
parameters, 3);

This is what the command looks like:

cloudstack-setup-agent  -m 192.168.233.1 -z 1 -p 1 -c 1 -g
6b4aa1c2-2ac9-3c60-aabe-704aed40c684 -a --pubNic=cloudbr0 --prvNic=cloudbr0
--guestNic=cloudbr0

The problem is this method in LibvirtServerDiscoverer never finds a
matching host in the DB:

waitForHostConnect(long dcId, long podId, long clusterId, String guid)
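
For reference, here's a rough sketch (my guess at the pattern, not the
actual CloudStack code) of the kind of polling I'd expect that method to
do: look the host up by its guid until the agent has registered, or give
up after a timeout. hostIsRegistered is just a placeholder for whatever
DAO lookup the discoverer really performs.

import java.util.function.BooleanSupplier;

// Hypothetical sketch -- not the actual LibvirtServerDiscoverer code.
public class HostConnectWait {
    static boolean waitForHostConnect(BooleanSupplier hostIsRegistered,
                                      long timeoutMs, long pollMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (hostIsRegistered.getAsBoolean()) {
                return true;  // agent connected; the host row showed up in the DB
            }
            Thread.sleep(pollMs);  // wait a bit and check again
        }
        return false;  // the "Timeout, to wait for the host connecting to mgt svr" case
    }
}

In my case it always falls through to the timeout, which lines up with the
"Unable to find the server resources" exception in the MS log.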

I assume that once the KVM host's agent is up and running, it's supposed to
call into the CS MS so the DB can be updated accordingly?

If so, the problem must be on the KVM side.

I did run this again (from the KVM host) to see if the connection was in
place:

mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250

Trying 192.168.233.1...

Connected to 192.168.233.1.

Escape character is '^]'.

So that looks good.
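
For what it's worth, the same check can be done programmatically; here's a
quick sanity-test sketch (just a standalone test class, not CloudStack code)
that tries to open a TCP connection from the KVM host to the management
server's agent port:

import java.net.InetSocketAddress;
import java.net.Socket;

public class MgmtPortCheck {
    public static void main(String[] args) throws Exception {
        // Equivalent to the telnet above: connect to the MS agent port (8250).
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("192.168.233.1", 8250), 5000);
            System.out.println("port 8250 reachable");
        }
    }
}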

I turned on more detail in the debug log, but nothing obvious jumps out yet.

If you have any thoughts on this, please shoot them my way. :)

Thanks!


On Sun, Sep 22, 2013 at 12:11 AM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> First step is for me to get this working for KVM, though. :)
>
> Once I do that, I can perhaps make modifications to the storage framework
> and hypervisor plug-ins to refactor the logic and such.
>
>
> On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Same would work for KVM.
>>
>> If CreateCommand and DestroyCommand were called at the appropriate times
>> by the storage framework, I could move my connect and disconnect logic out
>> of the attach/detach logic.
>>
>>
>> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> Conversely, if the storage framework called the DestroyCommand for
>>> managed storage after the DetachCommand, then I could have had my remove
>>> SR/datastore logic placed in the DestroyCommand handling rather than in the
>>> DetachCommand handling.
>>>
>>>
>>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>>> mike.tutkowski@solidfire.com> wrote:
>>>
>>>> Edison's plug-in calls the CreateCommand. Mine does not.
>>>>
>>>> The initial approach that was discussed during 4.2 was for me to modify
>>>> the attach/detach logic only in the XenServer and VMware hypervisor
>>>> plug-ins.
>>>>
>>>> Now that I think about it more, though, I kind of would have liked to
>>>> have the storage framework send a CreateCommand to the hypervisor before
>>>> sending the AttachCommand if the storage in question was managed.
>>>>
>>>> Then I could have created my SR/datastore in the CreateCommand and the
>>>> AttachCommand would have had the SR/datastore that it was always expecting
>>>> (and I wouldn't have had to create the SR/datastore in the AttachCommand).
>>>>
>>>>
>>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>>
>>>>> Yeah, I think it probably is as well, but I figured you'd be in a
>>>>> better position to tell.
>>>>>
>>>>> I see that copyAsync is unsupported in your current 4.2 driver, does
>>>>> that mean that there's no template support? Or is it some other call
>>>>> that does templating now? I'm still getting up to speed on all of the
>>>>> 4.2 changes. I was just looking at CreateCommand in
>>>>> LibvirtComputingResource, since that's the only place
>>>>> createPhysicalDisk is called, and it occurred to me that CreateCommand
>>>>> might be skipped altogether when utilizing storage plugins.
>>>>>
>>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>>>>> <mi...@solidfire.com> wrote:
>>>>> > That's an interesting comment, Marcus.
>>>>> >
>>>>> > It was my intent that it should work with any CloudStack "managed"
>>>>> storage
>>>>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote the
>>>>> code so
>>>>> > CHAP didn't have to be used.
>>>>> >
>>>>> > As I'm doing my testing, I can try to think about whether it is
>>>>> generic
>>>>> > enough to keep those names or not.
>>>>> >
>>>>> > My expectation is that it is generic enough.
>>>>> >
>>>>> >
>>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
>>>>> shadowsor@gmail.com>wrote:
>>>>> >
>>>>> >> I added a comment to your diff. In general I think it looks good,
>>>>> >> though I obviously can't vouch for whether or not it will work. One
>>>>> >> thing I do have reservations about is the adaptor/pool naming. If
>>>>> you
>>>>> >> think the code is generic enough that it will work for anyone who
>>>>> does
>>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's
>>>>> anything
>>>>> >> about it that's specific to YOUR iscsi target or how it likes to be
>>>>> >> treated then I'd say that they should be named something less
>>>>> generic
>>>>> >> than iScsiAdmStorage.
>>>>> >>
>>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>>>>> >> <mi...@solidfire.com> wrote:
>>>>> >> > Great - thanks!
>>>>> >> >
>>>>> >> > Just to give you an overview of what my code does (for when you
>>>>> get a
>>>>> >> > chance to review it):
>>>>> >> >
>>>>> >> > SolidFireHostListener is registered in
>>>>> SolidfirePrimaryDataStoreProvider.
>>>>> >> > Its hostConnect method is invoked when a host connects with the
>>>>> CS MS. If
>>>>> >> > the host is running KVM, the listener sends a
>>>>> ModifyStoragePoolCommand to
>>>>> >> > the host. This logic was based off of DefaultHostListener.
>>>>> >> >
>>>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
>>>>> >> > createStoragePool on the KVMStoragePoolManager. The
>>>>> KVMStoragePoolManager
>>>>> >> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor
>>>>> (which
>>>>> >> was
>>>>> >> > registered in the constructor for KVMStoragePoolManager under the
>>>>> key of
>>>>> >> > StoragePoolType.Iscsi.toString()).
>>>>> >> >
>>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
>>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the pointer to
>>>>> the
>>>>> >> > iScsiAdmStoragePool object. The key of the map is the UUID of the
>>>>> storage
>>>>> >> > pool.
>>>>> >> >
>>>>> >> > When a volume is attached, createPhysicalDisk is invoked for
>>>>> managed
>>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
>>>>> iscsiadm to
>>>>> >> > establish the iSCSI connection to the volume on the SAN and a
>>>>> >> > KVMPhysicalDisk is returned to be used in the attach logic that
>>>>> follows.
>>>>> >> >
>>>>> >> > When a volume is detached, getPhysicalDisk is invoked with the
>>>>> IQN of the
>>>>> >> > volume if the storage pool in question is managed storage.
>>>>> Otherwise, the
>>>>> >> > normal vol.getPath() is used.
>>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>>>>> >> > returns a new instance of KVMPhysicalDisk to be used in the
>>>>> detach logic.
>>>>> >> >
>>>>> >> > Once the volume has been detached,
>>>>> iScsiAdmStoragePool.deletePhysicalDisk
>>>>> >> > is invoked if the storage pool is managed. deletePhysicalDisk
>>>>> removes the
>>>>> >> > iSCSI connection to the volume using iscsiadm.
>>>>> >> >
>>>>> >> >
>>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>>>>> shadowsor@gmail.com
>>>>> >> >wrote:
>>>>> >> >
>>>>> >> >> Its the log4j properties file in /etc/cloudstack/agent change
>>>>> all INFO
>>>>> >> to
>>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can tail
>>>>> the log
>>>>> >> when
>>>>> >> >> you try to start the service, or maybe it will spit something
>>>>> out into
>>>>> >> one
>>>>> >> >> of the other files in /var/log/cloudstack/agent
>>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>>>>> mike.tutkowski@solidfire.com
>>>>> >> >
>>>>> >> >> wrote:
>>>>> >> >>
>>>>> >> >> > This is how I've been trying to query for the status of the
>>>>> service (I
>>>>> >> >> > assume it could be started this way, as well, by changing
>>>>> "status" to
>>>>> >> >> > "start" or "restart"?):
>>>>> >> >> >
>>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>> /usr/sbin/service
>>>>> >> >> > cloudstack-agent status
>>>>> >> >> >
>>>>> >> >> > I get this back:
>>>>> >> >> >
>>>>> >> >> > Failed to execute: * could not access PID file for
>>>>> cloudstack-agent
>>>>> >> >> >
>>>>> >> >> > I've made a bunch of code changes recently, though, so I think
>>>>> I'm
>>>>> >> going
>>>>> >> >> to
>>>>> >> >> > rebuild and redeploy everything.
>>>>> >> >> >
>>>>> >> >> > The debug info sounds helpful. Where can I set enable.debug?
>>>>> >> >> >
>>>>> >> >> > Thanks, Marcus!
>>>>> >> >> >
>>>>> >> >> >
>>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>>>>> shadowsor@gmail.com
>>>>> >> >> > >wrote:
>>>>> >> >> >
>>>>> >> >> > > OK, will check it out in the next few days. As mentioned,
>>>>> you can
>>>>> >> set
>>>>> >> >> up
>>>>> >> >> > > your Ubuntu vm as the management server as well if all else
>>>>> fails.
>>>>> >>  If
>>>>> >> >> > you
>>>>> >> >> > > can get to the mgmt server on 8250 from the KVM host, then
>>>>> you need
>>>>> >> to
>>>>> >> >> > > enable.debug on the agent. It won't run without complaining
>>>>> loudly
>>>>> >> if
>>>>> >> >> it
>>>>> >> >> > > can't get to the mgmt server, and I didn't see that in your
>>>>> agent
>>>>> >> log,
>>>>> >> >> so
>>>>> >> >> > > perhaps its not running. I assume you know how to stop/start
>>>>> the
>>>>> >> agent
>>>>> >> >> on
>>>>>> >> >> > > KVM via 'service cloudstack-agent'.
>>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>>>>> >> >> mike.tutkowski@solidfire.com>
>>>>> >> >> > > wrote:
>>>>> >> >> > >
>>>>> >> >> > > > Hey Marcus,
>>>>> >> >> > > >
>>>>> >> >> > > > I haven't yet been able to test my new code, but I thought
>>>>> you
>>>>> >> would
>>>>> >> >> > be a
>>>>> >> >> > > > good person to ask to review it:
>>>>> >> >> > > >
>>>>> >> >> > > >
>>>>> >> >> > > >
>>>>> >> >> > >
>>>>> >> >> >
>>>>> >> >>
>>>>> >>
>>>>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>>>>> >> >> > > >
>>>>> >> >> > > > All it is supposed to do is attach and detach a data disk
>>>>> (that
>>>>> >> has
>>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
>>>>> >> happens to
>>>>> >> >> > be
>>>>> >> >> > > > from SolidFire-backed storage - where we have a 1:1 mapping
>>>>> >> between a
>>>>> >> >> > > > CloudStack volume and a data disk.
>>>>> >> >> > > >
>>>>> >> >> > > > There is no support for hypervisor snapshots or stuff like
>>>>> that
>>>>> >> >> > (likely a
>>>>> >> >> > > > future release)...just attaching and detaching a data disk
>>>>> in 4.3.
>>>>> >> >> > > >
>>>>> >> >> > > > Thanks!
>>>>> >> >> > > >
>>>>> >> >> > > >
>>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>>>>> >> >> > > >
>>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
>>>>> cloudstack-agent
>>>>> >> >> first.
>>>>> >> >> > > > Would
>>>>> >> >> > > > > that be a problem? I just did a sudo apt-get install
>>>>> >> >> > cloudstack-agent.
>>>>> >> >> > > > >
>>>>> >> >> > > > >
>>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>>>>> >> >> > > > >
>>>>> >> >> > > > >> I get the same error running the command manually:
>>>>> >> >> > > > >>
>>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>>> >> /usr/sbin/service
>>>>> >> >> > > > >> cloudstack-agent status
>>>>> >> >> > > > >>  * could not access PID file for cloudstack-agent
>>>>> >> >> > > > >>
>>>>> >> >> > > > >>
>>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>>>>> >> >> > > > >>
>>>>> >> >> > > > >>> agent.log looks OK to me:
>>>>> >> >> > > > >>>
>>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
>>>>> >> >> (main:null)
>>>>> >> >> > > > Agent
>>>>> >> >> > > > >>> started
>>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
>>>>> >> >> (main:null)
>>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
>>>>> >> >> (main:null)
>>>>> >> >> > > > >>> agent.properties found at
>>>>> >> /etc/cloudstack/agent/agent.properties
>>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
>>>>> >> >> (main:null)
>>>>> >> >> > > > >>> Defaulting to using properties file for storage
>>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
>>>>> >> >> (main:null)
>>>>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
>>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
>>>>> >> (main:null)
>>>>> >> >> > > log4j
>>>>> >> >> > > > >>> configuration found at
>>>>> /etc/cloudstack/agent/log4j-cloud.xml
>>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>>>>> (main:null)
>>>>> >> id
>>>>> >> >> > is 3
>>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>>>>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
>>>>> (main:null)
>>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>>>>> >> >> scripts/network/domr/kvm
>>>>> >> >> > > > >>>
>>>>> >> >> > > > >>> However, I wasn't aware that setup.log was important.
>>>>> This
>>>>> >> seems
>>>>> >> >> to
>>>>> >> >> > > be
>>>>> >> >> > > > a
>>>>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
>>>>> >> >> > > > >>>
>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>>> cloudstack-agent
>>>>> >> status
>>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID
>>>>> file for
>>>>> >> >> > > > >>> cloudstack-agent
>>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>>> cloudstack-agent
>>>>> >> start
>>>>> >> >> > > > >>>
>>>>> >> >> > > > >>>
>>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>>>>> >> >> > > shadowsor@gmail.com
>>>>> >> >> > > > >wrote:
>>>>> >> >> > > > >>>
>>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was the
>>>>> agent log
>>>>> >> for
>>>>> >> >> > > some
>>>>> >> >> > > > >>>> reason. Is the agent started? That might be the place
>>>>> to
>>>>> >> look.
>>>>> >> >> > There
>>>>> >> >> > > > is
>>>>> >> >> > > > >>>> an
>>>>> >> >> > > > >>>> agent log for the agent and one for the setup when it
>>>>> adds
>>>>> >> the
>>>>> >> >> > host,
>>>>> >> >> > > > >>>> both
>>>>> >> >> > > > >>>> in /var/log
>>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>>>>> >> >> > > > >>>> wrote:
>>>>> >> >> > > > >>>>
>>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP address or
>>>>> the KVM
>>>>> >> host?
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>> > I see this for my host Global Settings parameter:
>>>>> >> >> > > > >>>> > hostThe ip address of management server192.168.233.1
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>>>>> >> >> host=192.168.233.1
>>>>> >> >> > > > value.
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
>>>>> >> >> > > > >>>> shadowsor@gmail.com
>>>>> >> >> > > > >>>> > >wrote:
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10?
>>>>> But you
>>>>> >> >> tried
>>>>> >> >> > > to
>>>>> >> >> > > > >>>> telnet
>>>>> >> >> > > > >>>> > to
>>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change that
>>>>> in
>>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you
>>>>> may want
>>>>> >> to
>>>>> >> >> > edit
>>>>> >> >> > > > the
>>>>> >> >> > > > >>>> > config
>>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>>>>> >> >> > > > >>>> > >
>>>>> >> >> > > > >>>> > > wrote:
>>>>> >> >> > > > >>>> > >
>>>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file
>>>>> looks
>>>>> >> like, if
>>>>> >> >> > > that
>>>>> >> >> > > > >>>> is of
>>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT
>>>>> network
>>>>> >> >> > VMware
>>>>> >> >> > > > >>>> Fusion
>>>>> >> >> > > > >>>> > set
>>>>> >> >> > > > >>>> > > > up):
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > > > auto lo
>>>>> >> >> > > > >>>> > > > iface lo inet loopback
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > > > auto eth0
>>>>> >> >> > > > >>>> > > > iface eth0 inet manual
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > > > auto cloudbr0
>>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>>>>> >> >> > > > >>>> > > >     address 192.168.233.10
>>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>>>>> >> >> > > > >>>> > > >     network 192.168.233.0
>>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>>>>> >> >> > > > >>>> > > >     bridge_ports eth0
>>>>> >> >> > > > >>>> > > >     bridge_fd 5
>>>>> >> >> > > > >>>> > > >     bridge_stp off
>>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
>>>>> >> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2
>>>>> metric 1
>>>>> >> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike
>>>>> Tutkowski <
>>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > > > > You appear to be correct. This is from the MS
>>>>> log
>>>>> >> >> (below).
>>>>> >> >> > > > >>>> Discovery
>>>>> >> >> > > > >>>> > > > timed
>>>>> >> >> > > > >>>> > > > > out.
>>>>> >> >> > > > >>>> > > > >
>>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My network
>>>>> settings
>>>>> >> >> > > shouldn't
>>>>> >> >> > > > >>>> have
>>>>> >> >> > > > >>>> > > > changed
>>>>> >> >> > > > >>>> > > > > since the last time I tried this.
>>>>> >> >> > > > >>>> > > > >
>>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the MS
>>>>> host and
>>>>> >> vice
>>>>> >> >> > > > versa.
>>>>> >> >> > > > >>>> > > > >
>>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM on
>>>>> the KVM
>>>>> >> host
>>>>> >> >> > and
>>>>> >> >> > > > >>>> ping from
>>>>> >> >> > > > >>>> > > it
>>>>> >> >> > > > >>>> > > > > to the MS host.
>>>>> >> >> > > > >>>> > > > >
>>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also
>>>>> running
>>>>> >> the
>>>>> >> >> CS
>>>>> >> >> > > MS)
>>>>> >> >> > > > >>>> to the
>>>>> >> >> > > > >>>> > VM
>>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>>>>> >> >> > > > >>>> > > > >
>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>> Timeout,
>>>>> >> to
>>>>> >> >> > wait
>>>>> >> >> > > > for
>>>>> >> >> > > > >>>> the
>>>>> >> >> > > > >>>> > > host
>>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>>>>> >> >>  [c.c.r.ResourceManagerImpl]
>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>> Unable to
>>>>> >> >> find
>>>>> >> >> > > the
>>>>> >> >> > > > >>>> server
>>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>> Could not
>>>>> >> >> find
>>>>> >> >> > > > >>>> exception:
>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in
>>>>> error code
>>>>> >> >> list
>>>>> >> >> > > for
>>>>> >> >> > > > >>>> > > exceptions
>>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>>> Exception:
>>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException:
>>>>> Unable to add
>>>>> >> >> the
>>>>> >> >> > > host
>>>>> >> >> > > > >>>> > > > > at
>>>>> >> >> > > > >>>> > > > >
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > >
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>>
>>>>> >> >> > > >
>>>>> >> >> > >
>>>>> >> >> >
>>>>> >> >>
>>>>> >>
>>>>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>>>>> >> >> > > > >>>> > > > >
>>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my KVM
>>>>> host to
>>>>> >> >> the
>>>>> >> >> > MS
>>>>> >> >> > > > >>>> host's
>>>>> >> >> > > > >>>> > > 8250
>>>>> >> >> > > > >>>> > > > > port:
>>>>> >> >> > > > >>>> > > > >
>>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1
>>>>> 8250
>>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
>>>>> >> >> > > > >>>> > > > >
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > > > --
>>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
>>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>>>>> >> >> > > > >>>> > > > o: 303.746.7302
>>>>> >> >> > > > >>>> > > > Advancing the way the world uses the
>>>>> >> >> > > > >>>> > > > cloud<
>>>>> >> http://solidfire.com/solution/overview/?video=play>
>>>>> >> >> > > > >>>> > > > *™*
>>>>> >> >> > > > >>>> > > >
>>>>> >> >> > > > >>>> > >
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>> > --
>>>>> >> >> > > > >>>> > *Mike Tutkowski*
>>>>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>>>>> >> >> > > > >>>> > o: 303.746.7302
>>>>> >> >> > > > >>>> > Advancing the way the world uses the
>>>>> >> >> > > > >>>> > cloud<
>>>>> http://solidfire.com/solution/overview/?video=play>
>>>>> >> >> > > > >>>> > *™*
>>>>> >> >> > > > >>>> >
>>>>> >> >> > > > >>>>
>>>>> >> >> > > > >>>
>>>>> >> >> > > > >>>
>>>>> >> >> > > > >>>
>>>>> >> >> > > > >>> --
>>>>> >> >> > > > >>> *Mike Tutkowski*
>>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
>>>>> >> >> > > > >>> o: 303.746.7302
>>>>> >> >> > > > >>> Advancing the way the world uses the cloud<
>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>> >> >> > > > >>> *™*
>>>>> >> >> > > > >>>
>>>>> >> >> > > > >>
>>>>> >> >> > > > >>
>>>>> >> >> > > > >>
>>>>> >> >> > > > >> --
>>>>> >> >> > > > >> *Mike Tutkowski*
>>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
>>>>> >> >> > > > >> o: 303.746.7302
>>>>> >> >> > > > >> Advancing the way the world uses the cloud<
>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>> >> >> > > > >> *™*
>>>>> >> >> > > > >>
>>>>> >> >> > > > >
>>>>> >> >> > > > >
>>>>> >> >> > > > >
>>>>> >> >> > > > > --
>>>>> >> >> > > > > *Mike Tutkowski*
>>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
>>>>> >> >> > > > > o: 303.746.7302
>>>>> >> >> > > > > Advancing the way the world uses the cloud<
>>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>>> >> >> > > > > *™*
>>>>> >> >> > > > >
>>>>> >> >> > > >
>>>>> >> >> > > >
>>>>> >> >> > > >
>>>>> >> >> > > > --
>>>>> >> >> > > > *Mike Tutkowski*
>>>>> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >> >> > > > e: mike.tutkowski@solidfire.com
>>>>> >> >> > > > o: 303.746.7302
>>>>> >> >> > > > Advancing the way the world uses the
>>>>> >> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> >> >> > > > *™*
>>>>> >> >> > > >
>>>>> >> >> > >
>>>>> >> >> >
>>>>> >> >> >
>>>>> >> >> >
>>>>> >> >> > --
>>>>> >> >> > *Mike Tutkowski*
>>>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >> >> > e: mike.tutkowski@solidfire.com
>>>>> >> >> > o: 303.746.7302
>>>>> >> >> > Advancing the way the world uses the
>>>>> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> >> >> > *™*
>>>>> >> >> >
>>>>> >> >>
>>>>> >> >
>>>>> >> >
>>>>> >> >
>>>>> >> > --
>>>>> >> > *Mike Tutkowski*
>>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>> >> > e: mike.tutkowski@solidfire.com
>>>>> >> > o: 303.746.7302
>>>>> >> > Advancing the way the world uses the
>>>>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> >> > *™*
>>>>> >>
>>>>> >
>>>>> >
>>>>> >
>>>>> > --
>>>>> > *Mike Tutkowski*
>>>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>>> > e: mike.tutkowski@solidfire.com
>>>>> > o: 303.746.7302
>>>>> > Advancing the way the world uses the
>>>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>>> > *™*
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Mike Tutkowski*
>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>> e: mike.tutkowski@solidfire.com
>>>> o: 303.746.7302
>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>> *™*
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
First step is for me to get this working for KVM, though. :)

Once I do that, I can perhaps make modifications to the storage framework
and hypervisor plug-ins to refactor the logic and such.


On Sun, Sep 22, 2013 at 12:09 AM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Same would work for KVM.
>
> If CreateCommand and DestroyCommand were called at the appropriate times
> by the storage framework, I could move my connect and disconnect logic out
> of the attach/detach logic.
>
>
> On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Conversely, if the storage framework called the DestroyCommand for
>> managed storage after the DetachCommand, then I could have had my remove
>> SR/datastore logic placed in the DestroyCommand handling rather than in the
>> DetachCommand handling.
>>
>>
>> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> Edison's plug-in calls the CreateCommand. Mine does not.
>>>
>>> The initial approach that was discussed during 4.2 was for me to modify
>>> the attach/detach logic only in the XenServer and VMware hypervisor
>>> plug-ins.
>>>
>>> Now that I think about it more, though, I kind of would have liked to
>>> have the storage framework send a CreateCommand to the hypervisor before
>>> sending the AttachCommand if the storage in question was managed.
>>>
>>> Then I could have created my SR/datastore in the CreateCommand and the
>>> AttachCommand would have had the SR/datastore that it was always expecting
>>> (and I wouldn't have had to create the SR/datastore in the AttachCommand).
>>>
>>>
>>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>
>>>> Yeah, I think it probably is as well, but I figured you'd be in a
>>>> better position to tell.
>>>>
>>>> I see that copyAsync is unsupported in your current 4.2 driver, does
>>>> that mean that there's no template support? Or is it some other call
>>>> that does templating now? I'm still getting up to speed on all of the
>>>> 4.2 changes. I was just looking at CreateCommand in
>>>> LibvirtComputingResource, since that's the only place
>>>> createPhysicalDisk is called, and it occurred to me that CreateCommand
>>>> might be skipped altogether when utilizing storage plugins.
>>>>
>>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>>>> <mi...@solidfire.com> wrote:
>>>> > That's an interesting comment, Marcus.
>>>> >
>>>> > It was my intent that it should work with any CloudStack "managed"
>>>> storage
>>>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote the
>>>> code so
>>>> > CHAP didn't have to be used.
>>>> >
>>>> > As I'm doing my testing, I can try to think about whether it is
>>>> generic
>>>> > enough to keep those names or not.
>>>> >
>>>> > My expectation is that it is generic enough.
>>>> >
>>>> >
>>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <
>>>> shadowsor@gmail.com>wrote:
>>>> >
>>>> >> I added a comment to your diff. In general I think it looks good,
>>>> >> though I obviously can't vouch for whether or not it will work. One
>>>> >> thing I do have reservations about is the adaptor/pool naming. If you
>>>> >> think the code is generic enough that it will work for anyone who
>>>> does
>>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's anything
>>>> >> about it that's specific to YOUR iscsi target or how it likes to be
>>>> >> treated then I'd say that they should be named something less generic
>>>> >> than iScsiAdmStorage.
>>>> >>
>>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>>>> >> <mi...@solidfire.com> wrote:
>>>> >> > Great - thanks!
>>>> >> >
>>>> >> > Just to give you an overview of what my code does (for when you
>>>> get a
>>>> >> > chance to review it):
>>>> >> >
>>>> >> > SolidFireHostListener is registered in
>>>> SolidfirePrimaryDataStoreProvider.
>>>> >> > Its hostConnect method is invoked when a host connects with the CS
>>>> MS. If
>>>> >> > the host is running KVM, the listener sends a
>>>> ModifyStoragePoolCommand to
>>>> >> > the host. This logic was based off of DefaultHostListener.
>>>> >> >
>>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
>>>> >> > createStoragePool on the KVMStoragePoolManager. The
>>>> KVMStoragePoolManager
>>>> >> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor
>>>> (which
>>>> >> was
>>>> >> > registered in the constructor for KVMStoragePoolManager under the
>>>> key of
>>>> >> > StoragePoolType.Iscsi.toString()).
>>>> >> >
>>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
>>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the pointer to
>>>> the
>>>> >> > iScsiAdmStoragePool object. The key of the map is the UUID of the
>>>> storage
>>>> >> > pool.
>>>> >> >
>>>> >> > When a volume is attached, createPhysicalDisk is invoked for
>>>> managed
>>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
>>>> iscsiadm to
>>>> >> > establish the iSCSI connection to the volume on the SAN and a
>>>> >> > KVMPhysicalDisk is returned to be used in the attach logic that
>>>> follows.
>>>> >> >
>>>> >> > When a volume is detached, getPhysicalDisk is invoked with the IQN
>>>> of the
>>>> >> > volume if the storage pool in question is managed storage.
>>>> Otherwise, the
>>>> >> > normal vol.getPath() is used.
>>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>>>> >> > returns a new instance of KVMPhysicalDisk to be used in the detach
>>>> logic.
>>>> >> >
>>>> >> > Once the volume has been detached,
>>>> iScsiAdmStoragePool.deletePhysicalDisk
>>>> >> > is invoked if the storage pool is managed. deletePhysicalDisk
>>>> removes the
>>>> >> > iSCSI connection to the volume using iscsiadm.
>>>> >> >
>>>> >> >
>>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>>>> shadowsor@gmail.com
>>>> >> >wrote:
>>>> >> >
>>>> >> >> Its the log4j properties file in /etc/cloudstack/agent change all
>>>> INFO
>>>> >> to
>>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can tail the
>>>> log
>>>> >> when
>>>> >> >> you try to start the service, or maybe it will spit something out
>>>> into
>>>> >> one
>>>> >> >> of the other files in /var/log/cloudstack/agent
>>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>>>> mike.tutkowski@solidfire.com
>>>> >> >
>>>> >> >> wrote:
>>>> >> >>
>>>> >> >> > This is how I've been trying to query for the status of the
>>>> service (I
>>>> >> >> > assume it could be started this way, as well, by changing
>>>> "status" to
>>>> >> >> > "start" or "restart"?):
>>>> >> >> >
>>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>>>> >> >> > cloudstack-agent status
>>>> >> >> >
>>>> >> >> > I get this back:
>>>> >> >> >
>>>> >> >> > Failed to execute: * could not access PID file for
>>>> cloudstack-agent
>>>> >> >> >
>>>> >> >> > I've made a bunch of code changes recently, though, so I think
>>>> I'm
>>>> >> going
>>>> >> >> to
>>>> >> >> > rebuild and redeploy everything.
>>>> >> >> >
>>>> >> >> > The debug info sounds helpful. Where can I set enable.debug?
>>>> >> >> >
>>>> >> >> > Thanks, Marcus!
>>>> >> >> >
>>>> >> >> >
>>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>>>> shadowsor@gmail.com
>>>> >> >> > >wrote:
>>>> >> >> >
>>>> >> >> > > OK, will check it out in the next few days. As mentioned, you
>>>> can
>>>> >> set
>>>> >> >> up
>>>> >> >> > > your Ubuntu vm as the management server as well if all else
>>>> fails.
>>>> >>  If
>>>> >> >> > you
>>>> >> >> > > can get to the mgmt server on 8250 from the KVM host, then
>>>> you need
>>>> >> to
>>>> >> >> > > enable.debug on the agent. It won't run without complaining
>>>> loudly
>>>> >> if
>>>> >> >> it
>>>> >> >> > > can't get to the mgmt server, and I didn't see that in your
>>>> agent
>>>> >> log,
>>>> >> >> so
>>>> >> >> > > perhaps its not running. I assume you know how to stop/start
>>>> the
>>>> >> agent
>>>> >> >> on
>>>> >> >> > > KVM via 'service cloudstack-agent'.
>>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>>>> >> >> mike.tutkowski@solidfire.com>
>>>> >> >> > > wrote:
>>>> >> >> > >
>>>> >> >> > > > Hey Marcus,
>>>> >> >> > > >
>>>> >> >> > > > I haven't yet been able to test my new code, but I thought
>>>> you
>>>> >> would
>>>> >> >> > be a
>>>> >> >> > > > good person to ask to review it:
>>>> >> >> > > >
>>>> >> >> > > >
>>>> >> >> > > >
>>>> >> >> > >
>>>> >> >> >
>>>> >> >>
>>>> >>
>>>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>>>> >> >> > > >
>>>> >> >> > > > All it is supposed to do is attach and detach a data disk
>>>> (that
>>>> >> has
>>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
>>>> >> happens to
>>>> >> >> > be
>>>> >> >> > > > from SolidFire-backed storage - where we have a 1:1 mapping
>>>> >> between a
>>>> >> >> > > > CloudStack volume and a data disk.
>>>> >> >> > > >
>>>> >> >> > > > There is no support for hypervisor snapshots or stuff like
>>>> that
>>>> >> >> > (likely a
>>>> >> >> > > > future release)...just attaching and detaching a data disk
>>>> in 4.3.
>>>> >> >> > > >
>>>> >> >> > > > Thanks!
>>>> >> >> > > >
>>>> >> >> > > >
>>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>>>> >> >> > > >
>>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
>>>> cloudstack-agent
>>>> >> >> first.
>>>> >> >> > > > Would
>>>> >> >> > > > > that be a problem? I just did a sudo apt-get install
>>>> >> >> > cloudstack-agent.
>>>> >> >> > > > >
>>>> >> >> > > > >
>>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>>>> >> >> > > > >
>>>> >> >> > > > >> I get the same error running the command manually:
>>>> >> >> > > > >>
>>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>>> >> /usr/sbin/service
>>>> >> >> > > > >> cloudstack-agent status
>>>> >> >> > > > >>  * could not access PID file for cloudstack-agent
>>>> >> >> > > > >>
>>>> >> >> > > > >>
>>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>>>> >> >> > > > >>
>>>> >> >> > > > >>> agent.log looks OK to me:
>>>> >> >> > > > >>>
>>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
>>>> >> >> (main:null)
>>>> >> >> > > > Agent
>>>> >> >> > > > >>> started
>>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
>>>> >> >> (main:null)
>>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
>>>> >> >> (main:null)
>>>> >> >> > > > >>> agent.properties found at
>>>> >> /etc/cloudstack/agent/agent.properties
>>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
>>>> >> >> (main:null)
>>>> >> >> > > > >>> Defaulting to using properties file for storage
>>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
>>>> >> >> (main:null)
>>>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
>>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
>>>> >> (main:null)
>>>> >> >> > > log4j
>>>> >> >> > > > >>> configuration found at
>>>> /etc/cloudstack/agent/log4j-cloud.xml
>>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>>>> (main:null)
>>>> >> id
>>>> >> >> > is 3
>>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>>>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
>>>> (main:null)
>>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>>>> >> >> scripts/network/domr/kvm
>>>> >> >> > > > >>>
>>>> >> >> > > > >>> However, I wasn't aware that setup.log was important.
>>>> This
>>>> >> seems
>>>> >> >> to
>>>> >> >> > > be
>>>> >> >> > > > a
>>>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
>>>> >> >> > > > >>>
>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>> cloudstack-agent
>>>> >> status
>>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID
>>>> file for
>>>> >> >> > > > >>> cloudstack-agent
>>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>>> cloudstack-agent
>>>> >> start
>>>> >> >> > > > >>>
>>>> >> >> > > > >>>
>>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>>>> >> >> > > shadowsor@gmail.com
>>>> >> >> > > > >wrote:
>>>> >> >> > > > >>>
>>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was the
>>>> agent log
>>>> >> for
>>>> >> >> > > some
>>>> >> >> > > > >>>> reason. Is the agent started? That might be the place
>>>> to
>>>> >> look.
>>>> >> >> > There
>>>> >> >> > > > is
>>>> >> >> > > > >>>> an
>>>> >> >> > > > >>>> agent log for the agent and one for the setup when it
>>>> adds
>>>> >> the
>>>> >> >> > host,
>>>> >> >> > > > >>>> both
>>>> >> >> > > > >>>> in /var/log
>>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>>>> >> >> > > > >>>> wrote:
>>>> >> >> > > > >>>>
>>>> >> >> > > > >>>> > Is it saying that the MS is at the IP address or the
>>>> KVM
>>>> >> host?
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>> > I see this for my host Global Settings parameter:
>>>> >> >> > > > >>>> > hostThe ip address of management server192.168.233.1
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>>>> >> >> host=192.168.233.1
>>>> >> >> > > > value.
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
>>>> >> >> > > > >>>> shadowsor@gmail.com
>>>> >> >> > > > >>>> > >wrote:
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10?
>>>> But you
>>>> >> >> tried
>>>> >> >> > > to
>>>> >> >> > > > >>>> telnet
>>>> >> >> > > > >>>> > to
>>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change that in
>>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you
>>>> may want
>>>> >> to
>>>> >> >> > edit
>>>> >> >> > > > the
>>>> >> >> > > > >>>> > config
>>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>>>> >> >> > > > >>>> > >
>>>> >> >> > > > >>>> > > wrote:
>>>> >> >> > > > >>>> > >
>>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file looks
>>>> >> like, if
>>>> >> >> > > that
>>>> >> >> > > > >>>> is of
>>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT
>>>> network
>>>> >> >> > VMware
>>>> >> >> > > > >>>> Fusion
>>>> >> >> > > > >>>> > set
>>>> >> >> > > > >>>> > > > up):
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > > > auto lo
>>>> >> >> > > > >>>> > > > iface lo inet loopback
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > > > auto eth0
>>>> >> >> > > > >>>> > > > iface eth0 inet manual
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > > > auto cloudbr0
>>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>>>> >> >> > > > >>>> > > >     address 192.168.233.10
>>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>>>> >> >> > > > >>>> > > >     network 192.168.233.0
>>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>>>> >> >> > > > >>>> > > >     bridge_ports eth0
>>>> >> >> > > > >>>> > > >     bridge_fd 5
>>>> >> >> > > > >>>> > > >     bridge_stp off
>>>> >> >> > > > >>>> > > >     bridge_maxwait 1
>>>> >> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2
>>>> metric 1
>>>> >> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski
>>>> <
>>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > > > > You appear to be correct. This is from the MS
>>>> log
>>>> >> >> (below).
>>>> >> >> > > > >>>> Discovery
>>>> >> >> > > > >>>> > > > timed
>>>> >> >> > > > >>>> > > > > out.
>>>> >> >> > > > >>>> > > > >
>>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My network
>>>> settings
>>>> >> >> > > shouldn't
>>>> >> >> > > > >>>> have
>>>> >> >> > > > >>>> > > > changed
>>>> >> >> > > > >>>> > > > > since the last time I tried this.
>>>> >> >> > > > >>>> > > > >
>>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the MS
>>>> host and
>>>> >> vice
>>>> >> >> > > > versa.
>>>> >> >> > > > >>>> > > > >
>>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM on the
>>>> KVM
>>>> >> host
>>>> >> >> > and
>>>> >> >> > > > >>>> ping from
>>>> >> >> > > > >>>> > > it
>>>> >> >> > > > >>>> > > > > to the MS host.
>>>> >> >> > > > >>>> > > > >
>>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also
>>>> running
>>>> >> the
>>>> >> >> CS
>>>> >> >> > > MS)
>>>> >> >> > > > >>>> to the
>>>> >> >> > > > >>>> > VM
>>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>>>> >> >> > > > >>>> > > > >
>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>> Timeout,
>>>> >> to
>>>> >> >> > wait
>>>> >> >> > > > for
>>>> >> >> > > > >>>> the
>>>> >> >> > > > >>>> > > host
>>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>>>> >> >>  [c.c.r.ResourceManagerImpl]
>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>> Unable to
>>>> >> >> find
>>>> >> >> > > the
>>>> >> >> > > > >>>> server
>>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>> Could not
>>>> >> >> find
>>>> >> >> > > > >>>> exception:
>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in
>>>> error code
>>>> >> >> list
>>>> >> >> > > for
>>>> >> >> > > > >>>> > > exceptions
>>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>>> Exception:
>>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable
>>>> to add
>>>> >> >> the
>>>> >> >> > > host
>>>> >> >> > > > >>>> > > > > at
>>>> >> >> > > > >>>> > > > >
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > >
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>>
>>>> >> >> > > >
>>>> >> >> > >
>>>> >> >> >
>>>> >> >>
>>>> >>
>>>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>>>> >> >> > > > >>>> > > > >
>>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my KVM
>>>> host to
>>>> >> >> the
>>>> >> >> > MS
>>>> >> >> > > > >>>> host's
>>>> >> >> > > > >>>> > > 8250
>>>> >> >> > > > >>>> > > > > port:
>>>> >> >> > > > >>>> > > > >
>>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>>>> >> >> > > > >>>> > > > > Escape character is '^]'.
>>>> >> >> > > > >>>> > > > >
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > > > --
>>>> >> >> > > > >>>> > > > *Mike Tutkowski*
>>>> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>>>> >> >> > > > >>>> > > > o: 303.746.7302
>>>> >> >> > > > >>>> > > > Advancing the way the world uses the
>>>> >> >> > > > >>>> > > > cloud<
>>>> >> http://solidfire.com/solution/overview/?video=play>
>>>> >> >> > > > >>>> > > > *™*
>>>> >> >> > > > >>>> > > >
>>>> >> >> > > > >>>> > >
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>> > --
>>>> >> >> > > > >>>> > *Mike Tutkowski*
>>>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>>>> >> >> > > > >>>> > o: 303.746.7302
>>>> >> >> > > > >>>> > Advancing the way the world uses the
>>>> >> >> > > > >>>> > cloud<
>>>> http://solidfire.com/solution/overview/?video=play>
>>>> >> >> > > > >>>> > *™*
>>>> >> >> > > > >>>> >
>>>> >> >> > > > >>>>
>>>> >> >> > > > >>>
>>>> >> >> > > > >>>
>>>> >> >> > > > >>>
>>>> >> >> > > > >>> --
>>>> >> >> > > > >>> *Mike Tutkowski*
>>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
>>>> >> >> > > > >>> o: 303.746.7302
>>>> >> >> > > > >>> Advancing the way the world uses the cloud<
>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>> >> >> > > > >>> *™*
>>>> >> >> > > > >>>
>>>> >> >> > > > >>
>>>> >> >> > > > >>
>>>> >> >> > > > >>
>>>> >> >> > > > >> --
>>>> >> >> > > > >> *Mike Tutkowski*
>>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
>>>> >> >> > > > >> o: 303.746.7302
>>>> >> >> > > > >> Advancing the way the world uses the cloud<
>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>> >> >> > > > >> *™*
>>>> >> >> > > > >>
>>>> >> >> > > > >
>>>> >> >> > > > >
>>>> >> >> > > > >
>>>> >> >> > > > > --
>>>> >> >> > > > > *Mike Tutkowski*
>>>> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> >> > > > > e: mike.tutkowski@solidfire.com
>>>> >> >> > > > > o: 303.746.7302
>>>> >> >> > > > > Advancing the way the world uses the cloud<
>>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>>> >> >> > > > > *™*
>>>> >> >> > > > >
>>>> >> >> > > >
>>>> >> >> > > >
>>>> >> >> > > >
>>>> >> >> > > > --
>>>> >> >> > > > *Mike Tutkowski*
>>>> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> >> > > > e: mike.tutkowski@solidfire.com
>>>> >> >> > > > o: 303.746.7302
>>>> >> >> > > > Advancing the way the world uses the
>>>> >> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> >> >> > > > *™*
>>>> >> >> > > >
>>>> >> >> > >
>>>> >> >> >
>>>> >> >> >
>>>> >> >> >
>>>> >> >> > --
>>>> >> >> > *Mike Tutkowski*
>>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> >> > e: mike.tutkowski@solidfire.com
>>>> >> >> > o: 303.746.7302
>>>> >> >> > Advancing the way the world uses the
>>>> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> >> >> > *™*
>>>> >> >> >
>>>> >> >>
>>>> >> >
>>>> >> >
>>>> >> >
>>>> >> > --
>>>> >> > *Mike Tutkowski*
>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> > e: mike.tutkowski@solidfire.com
>>>> >> > o: 303.746.7302
>>>> >> > Advancing the way the world uses the
>>>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> >> > *™*
>>>> >>
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > *Mike Tutkowski*
>>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> > e: mike.tutkowski@solidfire.com
>>>> > o: 303.746.7302
>>>> > Advancing the way the world uses the
>>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> > *™*
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Same would work for KVM.

If CreateCommand and DestroyCommand were called at the appropriate times by
the storage framework, I could move my connect and disconnect logic out of
the attach/detach logic.
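
For illustration only, here is a minimal sketch of what that connect/disconnect logic
amounts to on a KVM host, assuming Open-iSCSI (iscsiadm) is installed; the class and
method names (ManagedLunConnector, connect, disconnect, run) are invented for the
sketch and are not the plug-in's actual code:

import java.io.IOException;
import java.util.Arrays;

public class ManagedLunConnector {

    // What a CreateCommand-style hook could do: log the host in to the LUN
    // before any AttachCommand arrives.
    public void connect(String iqn, String host, int port)
            throws IOException, InterruptedException {
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", host + ":" + port);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", host + ":" + port, "--login");
    }

    // What a DestroyCommand-style hook could do: log back out once the
    // DetachCommand has completed.
    public void disconnect(String iqn, String host, int port)
            throws IOException, InterruptedException {
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", host + ":" + port, "--logout");
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", host + ":" + port, "-o", "delete");
    }

    // Run a command and fail loudly if it exits non-zero.
    private void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + Arrays.toString(cmd));
        }
    }
}

Today those two iscsiadm calls sit in createPhysicalDisk and deletePhysicalDisk on the
attach/detach path, as described further down the thread; the point above is only about
where the framework would invoke them.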


On Sun, Sep 22, 2013 at 12:08 AM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Conversely, if the storage framework called the DestroyCommand for managed
> storage after the DetachCommand, then I could have had my remove
> SR/datastore logic placed in the DestroyCommand handling rather than in the
> DetachCommand handling.
>
>
> On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Edison's plug-in calls the CreateCommand. Mine does not.
>>
>> The initial approach that was discussed during 4.2 was for me to modify
>> the attach/detach logic only in the XenServer and VMware hypervisor
>> plug-ins.
>>
>> Now that I think about it more, though, I kind of would have liked to
>> have the storage framework send a CreateCommand to the hypervisor before
>> sending the AttachCommand if the storage in question was managed.
>>
>> Then I could have created my SR/datastore in the CreateCommand and the
>> AttachCommand would have had the SR/datastore that it was always expecting
>> (and I wouldn't have had to create the SR/datastore in the AttachCommand).
>>
>>
>> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> Yeah, I think it probably is as well, but I figured you'd be in a
>>> better position to tell.
>>>
>>> I see that copyAsync is unsupported in your current 4.2 driver, does
>>> that mean that there's no template support? Or is it some other call
>>> that does templating now? I'm still getting up to speed on all of the
>>> 4.2 changes. I was just looking at CreateCommand in
>>> LibvirtComputingResource, since that's the only place
>>> createPhysicalDisk is called, and it occurred to me that CreateCommand
>>> might be skipped altogether when utilizing storage plugins.
>>>
>>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>>> <mi...@solidfire.com> wrote:
>>> > That's an interesting comment, Marcus.
>>> >
>>> > It was my intent that it should work with any CloudStack "managed"
>>> storage
>>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote the
>>> code so
>>> > CHAP didn't have to be used.
>>> >
>>> > As I'm doing my testing, I can try to think about whether it is generic
>>> > enough to keep those names or not.
>>> >
>>> > My expectation is that it is generic enough.
>>> >
>>> >
>>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <shadowsor@gmail.com
>>> >wrote:
>>> >
>>> >> I added a comment to your diff. In general I think it looks good,
>>> >> though I obviously can't vouch for whether or not it will work. One
>>> >> thing I do have reservations about is the adaptor/pool naming. If you
>>> >> think the code is generic enough that it will work for anyone who does
>>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's anything
>>> >> about it that's specific to YOUR iscsi target or how it likes to be
>>> >> treated then I'd say that they should be named something less generic
>>> >> than iScsiAdmStorage.
>>> >>
>>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>>> >> <mi...@solidfire.com> wrote:
>>> >> > Great - thanks!
>>> >> >
>>> >> > Just to give you an overview of what my code does (for when you get
>>> a
>>> >> > chance to review it):
>>> >> >
>>> >> > SolidFireHostListener is registered in
>>> SolidfirePrimaryDataStoreProvider.
>>> >> > Its hostConnect method is invoked when a host connects with the CS
>>> MS. If
>>> >> > the host is running KVM, the listener sends a
>>> ModifyStoragePoolCommand to
>>> >> > the host. This logic was based off of DefaultHostListener.
>>> >> >
>>> >> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
>>> >> > createStoragePool on the KVMStoragePoolManager. The
>>> KVMStoragePoolManager
>>> >> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor
>>> (which
>>> >> was
>>> >> > registered in the constructor for KVMStoragePoolManager under the
>>> key of
>>> >> > StoragePoolType.Iscsi.toString()).
>>> >> >
>>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
>>> >> > iScsiAdmStoragePool, adds it to a map, and returns the pointer to
>>> the
>>> >> > iScsiAdmStoragePool object. The key of the map is the UUID of the
>>> storage
>>> >> > pool.
>>> >> >
>>> >> > When a volume is attached, createPhysicalDisk is invoked for managed
>>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
>>> iscsiadm to
>>> >> > establish the iSCSI connection to the volume on the SAN and a
>>> >> > KVMPhysicalDisk is returned to be used in the attach logic that
>>> follows.
>>> >> >
>>> >> > When a volume is detached, getPhysicalDisk is invoked with the IQN
>>> of the
>>> >> > volume if the storage pool in question is managed storage.
>>> Otherwise, the
>>> >> > normal vol.getPath() is used.
>>> iScsiAdmStorageAdaptor.getPhysicalDisk just
>>> >> > returns a new instance of KVMPhysicalDisk to be used in the detach
>>> logic.
>>> >> >
>>> >> > Once the volume has been detached,
>>> iScsiAdmStoragePool.deletePhysicalDisk
>>> >> > is invoked if the storage pool is managed. deletePhysicalDisk
>>> removes the
>>> >> > iSCSI connection to the volume using iscsiadm.
>>> >> >
>>> >> >
>>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>>> shadowsor@gmail.com
>>> >> >wrote:
>>> >> >
>>> >> >> It's the log4j properties file in /etc/cloudstack/agent; change all
>>> INFO
>>> >> to
>>> >> >> DEBUG.  I imagine the agent just isn't starting, you can tail the
>>> log
>>> >> when
>>> >> >> you try to start the service, or maybe it will spit something out
>>> into
>>> >> one
>>> >> >> of the other files in /var/log/cloudstack/agent
>>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>>> mike.tutkowski@solidfire.com
>>> >> >
>>> >> >> wrote:
>>> >> >>
>>> >> >> > This is how I've been trying to query for the status of the
>>> service (I
>>> >> >> > assume it could be started this way, as well, by changing
>>> "status" to
>>> >> >> > "start" or "restart"?):
>>> >> >> >
>>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>>> >> >> > cloudstack-agent status
>>> >> >> >
>>> >> >> > I get this back:
>>> >> >> >
>>> >> >> > Failed to execute: * could not access PID file for
>>> cloudstack-agent
>>> >> >> >
>>> >> >> > I've made a bunch of code changes recently, though, so I think
>>> I'm
>>> >> going
>>> >> >> to
>>> >> >> > rebuild and redeploy everything.
>>> >> >> >
>>> >> >> > The debug info sounds helpful. Where can I set enable.debug?
>>> >> >> >
>>> >> >> > Thanks, Marcus!
>>> >> >> >
>>> >> >> >
>>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>>> shadowsor@gmail.com
>>> >> >> > >wrote:
>>> >> >> >
>>> >> >> > > OK, will check it out in the next few days. As mentioned, you
>>> can
>>> >> set
>>> >> >> up
>>> >> >> > > your Ubuntu vm as the management server as well if all else
>>> fails.
>>> >>  If
>>> >> >> > you
>>> >> >> > > can get to the mgmt server on 8250 from the KVM host, then you
>>> need
>>> >> to
>>> >> >> > > enable.debug on the agent. It won't run without complaining
>>> loudly
>>> >> if
>>> >> >> it
>>> >> >> > > can't get to the mgmt server, and I didn't see that in your
>>> agent
>>> >> log,
>>> >> >> so
>>> >> >> > > perhaps it's not running. I assume you know how to stop/start
>>> the
>>> >> agent
>>> >> >> on
>>> >> >> > > KVM via 'service cloudstack-agent'.
>>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>>> >> >> mike.tutkowski@solidfire.com>
>>> >> >> > > wrote:
>>> >> >> > >
>>> >> >> > > > Hey Marcus,
>>> >> >> > > >
>>> >> >> > > > I haven't yet been able to test my new code, but I thought
>>> you
>>> >> would
>>> >> >> > be a
>>> >> >> > > > good person to ask to review it:
>>> >> >> > > >
>>> >> >> > > >
>>> >> >> > > >
>>> >> >> > >
>>> >> >> >
>>> >> >>
>>> >>
>>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>>> >> >> > > >
>>> >> >> > > > All it is supposed to do is attach and detach a data disk
>>> (that
>>> >> has
>>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
>>> >> happens to
>>> >> >> > be
>>> >> >> > > > from SolidFire-backed storage - where we have a 1:1 mapping
>>> >> between a
>>> >> >> > > > CloudStack volume and a data disk.
>>> >> >> > > >
>>> >> >> > > > There is no support for hypervisor snapshots or stuff like
>>> that
>>> >> >> > (likely a
>>> >> >> > > > future release)...just attaching and detaching a data disk
>>> in 4.3.
>>> >> >> > > >
>>> >> >> > > > Thanks!
>>> >> >> > > >
>>> >> >> > > >
>>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>>> >> >> > > >
>>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
>>> cloudstack-agent
>>> >> >> first.
>>> >> >> > > > Would
>>> >> >> > > > > that be a problem? I just did a sudo apt-get install
>>> >> >> > cloudstack-agent.
>>> >> >> > > > >
>>> >> >> > > > >
>>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>>> >> >> > > > >
>>> >> >> > > > >> I get the same error running the command manually:
>>> >> >> > > > >>
>>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>>> >> /usr/sbin/service
>>> >> >> > > > >> cloudstack-agent status
>>> >> >> > > > >>  * could not access PID file for cloudstack-agent
>>> >> >> > > > >>
>>> >> >> > > > >>
>>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>>> >> >> > > > >>
>>> >> >> > > > >>> agent.log looks OK to me:
>>> >> >> > > > >>>
>>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
>>> >> >> (main:null)
>>> >> >> > > > Agent
>>> >> >> > > > >>> started
>>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
>>> >> >> (main:null)
>>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
>>> >> >> (main:null)
>>> >> >> > > > >>> agent.properties found at
>>> >> /etc/cloudstack/agent/agent.properties
>>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
>>> >> >> (main:null)
>>> >> >> > > > >>> Defaulting to using properties file for storage
>>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
>>> >> >> (main:null)
>>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
>>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
>>> >> (main:null)
>>> >> >> > > log4j
>>> >> >> > > > >>> configuration found at
>>> /etc/cloudstack/agent/log4j-cloud.xml
>>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>>> (main:null)
>>> >> id
>>> >> >> > is 3
>>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
>>> (main:null)
>>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>>> >> >> scripts/network/domr/kvm
>>> >> >> > > > >>>
>>> >> >> > > > >>> However, I wasn't aware that setup.log was important.
>>> This
>>> >> seems
>>> >> >> to
>>> >> >> > > be
>>> >> >> > > > a
>>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
>>> >> >> > > > >>>
>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>> cloudstack-agent
>>> >> status
>>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID
>>> file for
>>> >> >> > > > >>> cloudstack-agent
>>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service
>>> cloudstack-agent
>>> >> start
>>> >> >> > > > >>>
>>> >> >> > > > >>>
>>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>>> >> >> > > shadowsor@gmail.com
>>> >> >> > > > >wrote:
>>> >> >> > > > >>>
>>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was the
>>> agent log
>>> >> for
>>> >> >> > > some
>>> >> >> > > > >>>> reason. Is the agent started? That might be the place to
>>> >> look.
>>> >> >> > There
>>> >> >> > > > is
>>> >> >> > > > >>>> an
>>> >> >> > > > >>>> agent log for the agent and one for the setup when it
>>> adds
>>> >> the
>>> >> >> > host,
>>> >> >> > > > >>>> both
>>> >> >> > > > >>>> in /var/log
>>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>>> >> >> > > > >>>> wrote:
>>> >> >> > > > >>>>
>>> >> >> > > > >>>> > Is it saying that the MS is at the IP address or the
>>> KVM
>>> >> host?
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>> > I see this for my host Global Settings parameter:
>>> >> >> > > > >>>> > host | The ip address of management server | 192.168.233.1
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>>> >> >> host=192.168.233.1
>>> >> >> > > > value.
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
>>> >> >> > > > >>>> shadowsor@gmail.com
>>> >> >> > > > >>>> > >wrote:
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10?
>>> But you
>>> >> >> tried
>>> >> >> > > to
>>> >> >> > > > >>>> telnet
>>> >> >> > > > >>>> > to
>>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change that in
>>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you may
>>> want
>>> >> to
>>> >> >> > edit
>>> >> >> > > > the
>>> >> >> > > > >>>> > config
>>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>>> >> >> > > > >>>> > >
>>> >> >> > > > >>>> > > wrote:
>>> >> >> > > > >>>> > >
>>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file looks
>>> >> like, if
>>> >> >> > > that
>>> >> >> > > > >>>> is of
>>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT
>>> network
>>> >> >> > VMware
>>> >> >> > > > >>>> Fusion
>>> >> >> > > > >>>> > set
>>> >> >> > > > >>>> > > > up):
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > > > auto lo
>>> >> >> > > > >>>> > > > iface lo inet loopback
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > > > auto eth0
>>> >> >> > > > >>>> > > > iface eth0 inet manual
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > > > auto cloudbr0
>>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>>> >> >> > > > >>>> > > >     address 192.168.233.10
>>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>>> >> >> > > > >>>> > > >     network 192.168.233.0
>>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>>> >> >> > > > >>>> > > >     bridge_ports eth0
>>> >> >> > > > >>>> > > >     bridge_fd 5
>>> >> >> > > > >>>> > > >     bridge_stp off
>>> >> >> > > > >>>> > > >     bridge_maxwait 1
>>> >> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2
>>> metric 1
>>> >> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
>>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > > > > You appear to be correct. This is from the MS
>>> log
>>> >> >> (below).
>>> >> >> > > > >>>> Discovery
>>> >> >> > > > >>>> > > > timed
>>> >> >> > > > >>>> > > > > out.
>>> >> >> > > > >>>> > > > >
>>> >> >> > > > >>>> > > > > I'm not sure why this would be. My network
>>> settings
>>> >> >> > > shouldn't
>>> >> >> > > > >>>> have
>>> >> >> > > > >>>> > > > changed
>>> >> >> > > > >>>> > > > > since the last time I tried this.
>>> >> >> > > > >>>> > > > >
>>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the MS host
>>> and
>>> >> vice
>>> >> >> > > > versa.
>>> >> >> > > > >>>> > > > >
>>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM on the
>>> KVM
>>> >> host
>>> >> >> > and
>>> >> >> > > > >>>> ping from
>>> >> >> > > > >>>> > > it
>>> >> >> > > > >>>> > > > > to the MS host.
>>> >> >> > > > >>>> > > > >
>>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also
>>> running
>>> >> the
>>> >> >> CS
>>> >> >> > > MS)
>>> >> >> > > > >>>> to the
>>> >> >> > > > >>>> > VM
>>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>>> >> >> > > > >>>> > > > >
>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>> Timeout,
>>> >> to
>>> >> >> > wait
>>> >> >> > > > for
>>> >> >> > > > >>>> the
>>> >> >> > > > >>>> > > host
>>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>>> >> >>  [c.c.r.ResourceManagerImpl]
>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>> Unable to
>>> >> >> find
>>> >> >> > > the
>>> >> >> > > > >>>> server
>>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>> Could not
>>> >> >> find
>>> >> >> > > > >>>> exception:
>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in error
>>> code
>>> >> >> list
>>> >> >> > > for
>>> >> >> > > > >>>> > > exceptions
>>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>>> Exception:
>>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable
>>> to add
>>> >> >> the
>>> >> >> > > host
>>> >> >> > > > >>>> > > > > at
>>> >> >> > > > >>>> > > > >
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > >
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>>
>>> >> >> > > >
>>> >> >> > >
>>> >> >> >
>>> >> >>
>>> >>
>>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>>> >> >> > > > >>>> > > > >
>>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my KVM
>>> host to
>>> >> >> the
>>> >> >> > MS
>>> >> >> > > > >>>> host's
>>> >> >> > > > >>>> > > 8250
>>> >> >> > > > >>>> > > > > port:
>>> >> >> > > > >>>> > > > >
>>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>>> >> >> > > > >>>> > > > > Escape character is '^]'.
>>> >> >> > > > >>>> > > > >
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > > > --
>>> >> >> > > > >>>> > > > *Mike Tutkowski*
>>> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>>> >> >> > > > >>>> > > > o: 303.746.7302
>>> >> >> > > > >>>> > > > Advancing the way the world uses the
>>> >> >> > > > >>>> > > > cloud<
>>> >> http://solidfire.com/solution/overview/?video=play>
>>> >> >> > > > >>>> > > > *™*
>>> >> >> > > > >>>> > > >
>>> >> >> > > > >>>> > >
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>> > --
>>> >> >> > > > >>>> > *Mike Tutkowski*
>>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>>> >> >> > > > >>>> > o: 303.746.7302
>>> >> >> > > > >>>> > Advancing the way the world uses the
>>> >> >> > > > >>>> > cloud<
>>> http://solidfire.com/solution/overview/?video=play>
>>> >> >> > > > >>>> > *™*
>>> >> >> > > > >>>> >
>>> >> >> > > > >>>>
>>> >> >> > > > >>>
>>> >> >> > > > >>>
>>> >> >> > > > >>>
>>> >> >> > > > >>> --
>>> >> >> > > > >>> *Mike Tutkowski*
>>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
>>> >> >> > > > >>> o: 303.746.7302
>>> >> >> > > > >>> Advancing the way the world uses the cloud<
>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>> >> >> > > > >>> *™*
>>> >> >> > > > >>>
>>> >> >> > > > >>
>>> >> >> > > > >>
>>> >> >> > > > >>
>>> >> >> > > > >> --
>>> >> >> > > > >> *Mike Tutkowski*
>>> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>>> >> >> > > > >> e: mike.tutkowski@solidfire.com
>>> >> >> > > > >> o: 303.746.7302
>>> >> >> > > > >> Advancing the way the world uses the cloud<
>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>> >> >> > > > >> *™*
>>> >> >> > > > >>
>>> >> >> > > > >
>>> >> >> > > > >
>>> >> >> > > > >
>>> >> >> > > > > --
>>> >> >> > > > > *Mike Tutkowski*
>>> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>>> >> >> > > > > e: mike.tutkowski@solidfire.com
>>> >> >> > > > > o: 303.746.7302
>>> >> >> > > > > Advancing the way the world uses the cloud<
>>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>>> >> >> > > > > *™*
>>> >> >> > > > >
>>> >> >> > > >
>>> >> >> > > >
>>> >> >> > > >
>>> >> >> > > > --
>>> >> >> > > > *Mike Tutkowski*
>>> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>> >> >> > > > e: mike.tutkowski@solidfire.com
>>> >> >> > > > o: 303.746.7302
>>> >> >> > > > Advancing the way the world uses the
>>> >> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >> >> > > > *™*
>>> >> >> > > >
>>> >> >> > >
>>> >> >> >
>>> >> >> >
>>> >> >> >
>>> >> >> > --
>>> >> >> > *Mike Tutkowski*
>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >> >> > e: mike.tutkowski@solidfire.com
>>> >> >> > o: 303.746.7302
>>> >> >> > Advancing the way the world uses the
>>> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >> >> > *™*
>>> >> >> >
>>> >> >>
>>> >> >
>>> >> >
>>> >> >
>>> >> > --
>>> >> > *Mike Tutkowski*
>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >> > e: mike.tutkowski@solidfire.com
>>> >> > o: 303.746.7302
>>> >> > Advancing the way the world uses the
>>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >> > *™*
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > *Mike Tutkowski*
>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> > e: mike.tutkowski@solidfire.com
>>> > o: 303.746.7302
>>> > Advancing the way the world uses the
>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> > *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Conversely, if the storage framework called the DestroyCommand for managed
storage after the DetachCommand, then I could have had my remove
SR/datastore logic placed in the DestroyCommand handling rather than in the
DetachCommand handling.
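
For illustration, the detach-side ordering being asked for might look like the sketch
below in the orchestration layer. The names sendDetach, sendDestroy, and isManaged are
invented stand-ins for this sketch, not existing CloudStack calls:

public class DetachOrdering {

    // Proposed flow: DetachCommand first, then a DestroyCommand for managed storage,
    // so the SR/datastore removal no longer has to live in the detach handler itself.
    public void detachManagedVolume(String hostId, String volumeIqn) {
        sendDetach(hostId, volumeIqn);        // unplug the disk from the VM
        if (isManaged(volumeIqn)) {
            sendDestroy(hostId, volumeIqn);   // then tear down the SR/datastore
        }                                     // (or drop the iSCSI session on a KVM host)
    }

    // Stand-ins that only print what would be sent to the agent.
    private void sendDetach(String hostId, String iqn)  { System.out.println("DetachCommand  -> " + hostId + " for " + iqn); }
    private void sendDestroy(String hostId, String iqn) { System.out.println("DestroyCommand -> " + hostId + " for " + iqn); }
    private boolean isManaged(String iqn)               { return true; }
}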


On Sun, Sep 22, 2013 at 12:06 AM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Edison's plug-in calls the CreateCommand. Mine does not.
>
> The initial approach that was discussed during 4.2 was for me to modify
> the attach/detach logic only in the XenServer and VMware hypervisor
> plug-ins.
>
> Now that I think about it more, though, I kind of would have liked to have
> the storage framework send a CreateCommand to the hypervisor before sending
> the AttachCommand if the storage in question was managed.
>
> Then I could have created my SR/datastore in the CreateCommand and the
> AttachCommand would have had the SR/datastore that it was always expecting
> (and I wouldn't have had to create the SR/datastore in the AttachCommand).
>
>
> On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Yeah, I think it probably is as well, but I figured you'd be in a
>> better position to tell.
>>
>> I see that copyAsync is unsupported in your current 4.2 driver, does
>> that mean that there's no template support? Or is it some other call
>> that does templating now? I'm still getting up to speed on all of the
>> 4.2 changes. I was just looking at CreateCommand in
>> LibvirtComputingResource, since that's the only place
>> createPhysicalDisk is called, and it occurred to me that CreateCommand
>> might be skipped altogether when utilizing storage plugins.
>>
>> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > That's an interesting comment, Marcus.
>> >
>> > It was my intent that it should work with any CloudStack "managed"
>> storage
>> > that uses an iSCSI target. Even though I'm using CHAP, I wrote the code
>> so
>> > CHAP didn't have to be used.
>> >
>> > As I'm doing my testing, I can try to think about whether it is generic
>> > enough to keep those names or not.
>> >
>> > My expectation is that it is generic enough.
>> >
>> >
>> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <shadowsor@gmail.com
>> >wrote:
>> >
>> >> I added a comment to your diff. In general I think it looks good,
>> >> though I obviously can't vouch for whether or not it will work. One
>> >> thing I do have reservations about is the adaptor/pool naming. If you
>> >> think the code is generic enough that it will work for anyone who does
>> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's anything
>> >> about it that's specific to YOUR iscsi target or how it likes to be
>> >> treated then I'd say that they should be named something less generic
>> >> than iScsiAdmStorage.
>> >>
>> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>> >> <mi...@solidfire.com> wrote:
>> >> > Great - thanks!
>> >> >
>> >> > Just to give you an overview of what my code does (for when you get a
>> >> > chance to review it):
>> >> >
>> >> > SolidFireHostListener is registered in
>> SolidfirePrimaryDataStoreProvider.
>> >> > Its hostConnect method is invoked when a host connects with the CS
>> MS. If
>> >> > the host is running KVM, the listener sends a
>> ModifyStoragePoolCommand to
>> >> > the host. This logic was based off of DefaultHostListener.
>> >> >
>> >> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
>> >> > createStoragePool on the KVMStoragePoolManager. The
>> KVMStoragePoolManager
>> >> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor
>> (which
>> >> was
>> >> > registered in the constructor for KVMStoragePoolManager under the
>> key of
>> >> > StoragePoolType.Iscsi.toString()).
>> >> >
>> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
>> >> > iScsiAdmStoragePool, adds it to a map, and returns the pointer to the
>> >> > iScsiAdmStoragePool object. The key of the map is the UUID of the
>> storage
>> >> > pool.
>> >> >
>> >> > When a volume is attached, createPhysicalDisk is invoked for managed
>> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses
>> iscsiadm to
>> >> > establish the iSCSI connection to the volume on the SAN and a
>> >> > KVMPhysicalDisk is returned to be used in the attach logic that
>> follows.
>> >> >
>> >> > When a volume is detached, getPhysicalDisk is invoked with the IQN
>> of the
>> >> > volume if the storage pool in question is managed storage.
>> Otherwise, the
>> >> > normal vol.getPath() is used. iScsiAdmStorageAdaptor.getPhysicalDisk
>> just
>> >> > returns a new instance of KVMPhysicalDisk to be used in the detach
>> logic.
>> >> >
>> >> > Once the volume has been detached,
>> iScsiAdmStoragePool.deletePhysicalDisk
>> >> > is invoked if the storage pool is managed. deletePhysicalDisk
>> removes the
>> >> > iSCSI connection to the volume using iscsiadm.
>> >> >
>> >> >
>> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> >> >wrote:
>> >> >
>> >> >> It's the log4j properties file in /etc/cloudstack/agent; change all
>> INFO
>> >> to
>> >> >> DEBUG.  I imagine the agent just isn't starting, you can tail the
>> log
>> >> when
>> >> >> you try to start the service, or maybe it will spit something out
>> into
>> >> one
>> >> >> of the other files in /var/log/cloudstack/agent
>> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>> mike.tutkowski@solidfire.com
>> >> >
>> >> >> wrote:
>> >> >>
>> >> >> > This is how I've been trying to query for the status of the
>> service (I
>> >> >> > assume it could be started this way, as well, by changing
>> "status" to
>> >> >> > "start" or "restart"?):
>> >> >> >
>> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>> >> >> > cloudstack-agent status
>> >> >> >
>> >> >> > I get this back:
>> >> >> >
>> >> >> > Failed to execute: * could not access PID file for
>> cloudstack-agent
>> >> >> >
>> >> >> > I've made a bunch of code changes recently, though, so I think I'm
>> >> going
>> >> >> to
>> >> >> > rebuild and redeploy everything.
>> >> >> >
>> >> >> > The debug info sounds helpful. Where can I set enable.debug?
>> >> >> >
>> >> >> > Thanks, Marcus!
>> >> >> >
>> >> >> >
>> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> >> >> > >wrote:
>> >> >> >
>> >> >> > > OK, will check it out in the next few days. As mentioned, you
>> can
>> >> set
>> >> >> up
>> >> >> > > your Ubuntu vm as the management server as well if all else
>> fails.
>> >>  If
>> >> >> > you
>> >> >> > > can get to the mgmt server on 8250 from the KVM host, then you
>> need
>> >> to
>> >> >> > > enable.debug on the agent. It won't run without complaining
>> loudly
>> >> if
>> >> >> it
>> >> >> > > can't get to the mgmt server, and I didn't see that in your
>> agent
>> >> log,
>> >> >> so
>> >> >> > > perhaps it's not running. I assume you know how to stop/start the
>> >> agent
>> >> >> on
>> >> >> > > KVM via 'service cloudstack-agent'.
>> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>> >> >> mike.tutkowski@solidfire.com>
>> >> >> > > wrote:
>> >> >> > >
>> >> >> > > > Hey Marcus,
>> >> >> > > >
>> >> >> > > > I haven't yet been able to test my new code, but I thought you
>> >> would
>> >> >> > be a
>> >> >> > > > good person to ask to review it:
>> >> >> > > >
>> >> >> > > >
>> >> >> > > >
>> >> >> > >
>> >> >> >
>> >> >>
>> >>
>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>> >> >> > > >
>> >> >> > > > All it is supposed to do is attach and detach a data disk
>> (that
>> >> has
>> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
>> >> happens to
>> >> >> > be
>> >> >> > > > from SolidFire-backed storage - where we have a 1:1 mapping
>> >> between a
>> >> >> > > > CloudStack volume and a data disk.
>> >> >> > > >
>> >> >> > > > There is no support for hypervisor snapshots or stuff like
>> that
>> >> >> > (likely a
>> >> >> > > > future release)...just attaching and detaching a data disk in
>> 4.3.
>> >> >> > > >
>> >> >> > > > Thanks!
>> >> >> > > >
>> >> >> > > >
>> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>> >> >> > > > mike.tutkowski@solidfire.com> wrote:
>> >> >> > > >
>> >> >> > > > > When I re-deployed the DEBs, I didn't remove
>> cloudstack-agent
>> >> >> first.
>> >> >> > > > Would
>> >> >> > > > > that be a problem? I just did a sudo apt-get install
>> >> >> > cloudstack-agent.
>> >> >> > > > >
>> >> >> > > > >
>> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
>> >> >> > > > >
>> >> >> > > > >> I get the same error running the command manually:
>> >> >> > > > >>
>> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>> >> /usr/sbin/service
>> >> >> > > > >> cloudstack-agent status
>> >> >> > > > >>  * could not access PID file for cloudstack-agent
>> >> >> > > > >>
>> >> >> > > > >>
>> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>> >> >> > > > >>
>> >> >> > > > >>> agent.log looks OK to me:
>> >> >> > > > >>>
>> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
>> >> >> (main:null)
>> >> >> > > > Agent
>> >> >> > > > >>> started
>> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
>> >> >> (main:null)
>> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
>> >> >> (main:null)
>> >> >> > > > >>> agent.properties found at
>> >> /etc/cloudstack/agent/agent.properties
>> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
>> >> >> (main:null)
>> >> >> > > > >>> Defaulting to using properties file for storage
>> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
>> >> >> (main:null)
>> >> >> > > > >>> Defaulting to the constant time backoff algorithm
>> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
>> >> (main:null)
>> >> >> > > log4j
>> >> >> > > > >>> configuration found at
>> /etc/cloudstack/agent/log4j-cloud.xml
>> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>> (main:null)
>> >> id
>> >> >> > is 3
>> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
>> (main:null)
>> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>> >> >> scripts/network/domr/kvm
>> >> >> > > > >>>
>> >> >> > > > >>> However, I wasn't aware that setup.log was important. This
>> >> seems
>> >> >> to
>> >> >> > > be
>> >> >> > > > a
>> >> >> > > > >>> problem, but I'm not sure what it might indicate:
>> >> >> > > > >>>
>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
>> >> status
>> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID file
>> for
>> >> >> > > > >>> cloudstack-agent
>> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
>> >> start
>> >> >> > > > >>>
>> >> >> > > > >>>
>> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>> >> >> > > shadowsor@gmail.com
>> >> >> > > > >wrote:
>> >> >> > > > >>>
>> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was the agent
>> log
>> >> for
>> >> >> > > some
>> >> >> > > > >>>> reason. Is the agent started? That might be the place to
>> >> look.
>> >> >> > There
>> >> >> > > > is
>> >> >> > > > >>>> an
>> >> >> > > > >>>> agent log for the agent and one for the setup when it
>> adds
>> >> the
>> >> >> > host,
>> >> >> > > > >>>> both
>> >> >> > > > >>>> in /var/log
>> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>> >> >> > > > >>>> mike.tutkowski@solidfire.com>
>> >> >> > > > >>>> wrote:
>> >> >> > > > >>>>
>> >> >> > > > >>>> > Is it saying that the MS is at the IP address or the
>> KVM
>> >> host?
>> >> >> > > > >>>> >
>> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
>> >> >> > > > >>>> > The MS host is at 192.168.233.1.
>> >> >> > > > >>>> >
>> >> >> > > > >>>> > I see this for my host Global Settings parameter:
>> >> >> > > > >>>> > host | The ip address of management server | 192.168.233.1
>> >> >> > > > >>>> >
>> >> >> > > > >>>> >
>> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>> >> >> host=192.168.233.1
>> >> >> > > > value.
>> >> >> > > > >>>> >
>> >> >> > > > >>>> >
>> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
>> >> >> > > > >>>> shadowsor@gmail.com
>> >> >> > > > >>>> > >wrote:
>> >> >> > > > >>>> >
>> >> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10? But
>> you
>> >> >> tried
>> >> >> > > to
>> >> >> > > > >>>> telnet
>> >> >> > > > >>>> > to
>> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change that in
>> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you may
>> want
>> >> to
>> >> >> > edit
>> >> >> > > > the
>> >> >> > > > >>>> > config
>> >> >> > > > >>>> > > as well to tell it the real ms IP.
>> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>> >> >> > > > >>>> mike.tutkowski@solidfire.com
>> >> >> > > > >>>> > >
>> >> >> > > > >>>> > > wrote:
>> >> >> > > > >>>> > >
>> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file looks
>> >> like, if
>> >> >> > > that
>> >> >> > > > >>>> is of
>> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT
>> network
>> >> >> > VMware
>> >> >> > > > >>>> Fusion
>> >> >> > > > >>>> > set
>> >> >> > > > >>>> > > > up):
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > > > auto lo
>> >> >> > > > >>>> > > > iface lo inet loopback
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > > > auto eth0
>> >> >> > > > >>>> > > > iface eth0 inet manual
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > > > auto cloudbr0
>> >> >> > > > >>>> > > > iface cloudbr0 inet static
>> >> >> > > > >>>> > > >     address 192.168.233.10
>> >> >> > > > >>>> > > >     netmask 255.255.255.0
>> >> >> > > > >>>> > > >     network 192.168.233.0
>> >> >> > > > >>>> > > >     broadcast 192.168.233.255
>> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>> >> >> > > > >>>> > > >     bridge_ports eth0
>> >> >> > > > >>>> > > >     bridge_fd 5
>> >> >> > > > >>>> > > >     bridge_stp off
>> >> >> > > > >>>> > > >     bridge_maxwait 1
>> >> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2
>> metric 1
>> >> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
>> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > > > > You appear to be correct. This is from the MS log
>> >> >> (below).
>> >> >> > > > >>>> Discovery
>> >> >> > > > >>>> > > > timed
>> >> >> > > > >>>> > > > > out.
>> >> >> > > > >>>> > > > >
>> >> >> > > > >>>> > > > > I'm not sure why this would be. My network
>> settings
>> >> >> > > shouldn't
>> >> >> > > > >>>> have
>> >> >> > > > >>>> > > > changed
>> >> >> > > > >>>> > > > > since the last time I tried this.
>> >> >> > > > >>>> > > > >
>> >> >> > > > >>>> > > > > I am able to ping the KVM host from the MS host
>> and
>> >> vice
>> >> >> > > > versa.
>> >> >> > > > >>>> > > > >
>> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM on the
>> KVM
>> >> host
>> >> >> > and
>> >> >> > > > >>>> ping from
>> >> >> > > > >>>> > > it
>> >> >> > > > >>>> > > > > to the MS host.
>> >> >> > > > >>>> > > > >
>> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also
>> running
>> >> the
>> >> >> CS
>> >> >> > > MS)
>> >> >> > > > >>>> to the
>> >> >> > > > >>>> > VM
>> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
>> >> >> > > > >>>> > > > >
>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>> Timeout,
>> >> to
>> >> >> > wait
>> >> >> > > > for
>> >> >> > > > >>>> the
>> >> >> > > > >>>> > > host
>> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>> >> >>  [c.c.r.ResourceManagerImpl]
>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>> Unable to
>> >> >> find
>> >> >> > > the
>> >> >> > > > >>>> server
>> >> >> > > > >>>> > > > > resources at http://192.168.233.10
>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>> >> >> >  [c.c.u.e.CSExceptionErrorCode]
>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could
>> not
>> >> >> find
>> >> >> > > > >>>> exception:
>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in error
>> code
>> >> >> list
>> >> >> > > for
>> >> >> > > > >>>> > > exceptions
>> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>> >>  [o.a.c.a.c.a.h.AddHostCmd]
>> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>> Exception:
>> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable
>> to add
>> >> >> the
>> >> >> > > host
>> >> >> > > > >>>> > > > > at
>> >> >> > > > >>>> > > > >
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > >
>> >> >> > > > >>>> >
>> >> >> > > > >>>>
>> >> >> > > >
>> >> >> > >
>> >> >> >
>> >> >>
>> >>
>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>> >> >> > > > >>>> > > > >
>> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my KVM
>> host to
>> >> >> the
>> >> >> > MS
>> >> >> > > > >>>> host's
>> >> >> > > > >>>> > > 8250
>> >> >> > > > >>>> > > > > port:
>> >> >> > > > >>>> > > > >
>> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>> >> >> > > > >>>> > > > > Trying 192.168.233.1...
>> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
>> >> >> > > > >>>> > > > > Escape character is '^]'.
>> >> >> > > > >>>> > > > >
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > > > --
>> >> >> > > > >>>> > > > *Mike Tutkowski*
>> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>> >> >> > > > >>>> > > > o: 303.746.7302
>> >> >> > > > >>>> > > > Advancing the way the world uses the
>> >> >> > > > >>>> > > > cloud<
>> >> http://solidfire.com/solution/overview/?video=play>
>> >> >> > > > >>>> > > > *™*
>> >> >> > > > >>>> > > >
>> >> >> > > > >>>> > >
>> >> >> > > > >>>> >
>> >> >> > > > >>>> >
>> >> >> > > > >>>> >
>> >> >> > > > >>>> > --
>> >> >> > > > >>>> > *Mike Tutkowski*
>> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>> >> >> > > > >>>> > o: 303.746.7302
>> >> >> > > > >>>> > Advancing the way the world uses the
>> >> >> > > > >>>> > cloud<
>> http://solidfire.com/solution/overview/?video=play>
>> >> >> > > > >>>> > *™*
>> >> >> > > > >>>> >
>> >> >> > > > >>>>
>> >> >> > > > >>>
>> >> >> > > > >>>
>> >> >> > > > >>>
>> >> >> > > > >>> --
>> >> >> > > > >>> *Mike Tutkowski*
>> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>> >> >> > > > >>> e: mike.tutkowski@solidfire.com
>> >> >> > > > >>> o: 303.746.7302
>> >> >> > > > >>> Advancing the way the world uses the cloud<
>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >> >> > > > >>> *™*
>> >> >> > > > >>>
>> >> >> > > > >>
>> >> >> > > > >>
>> >> >> > > > >>
>> >> >> > > > >> --
>> >> >> > > > >> *Mike Tutkowski*
>> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>> >> >> > > > >> e: mike.tutkowski@solidfire.com
>> >> >> > > > >> o: 303.746.7302
>> >> >> > > > >> Advancing the way the world uses the cloud<
>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >> >> > > > >> *™*
>> >> >> > > > >>
>> >> >> > > > >
>> >> >> > > > >
>> >> >> > > > >
>> >> >> > > > > --
>> >> >> > > > > *Mike Tutkowski*
>> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> >> > > > > e: mike.tutkowski@solidfire.com
>> >> >> > > > > o: 303.746.7302
>> >> >> > > > > Advancing the way the world uses the cloud<
>> >> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >> >> > > > > *™*
>> >> >> > > > >
>> >> >> > > >
>> >> >> > > >
>> >> >> > > >
>> >> >> > > > --
>> >> >> > > > *Mike Tutkowski*
>> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> >> > > > e: mike.tutkowski@solidfire.com
>> >> >> > > > o: 303.746.7302
>> >> >> > > > Advancing the way the world uses the
>> >> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> >> > > > *™*
>> >> >> > > >
>> >> >> > >
>> >> >> >
>> >> >> >
>> >> >> >
>> >> >> > --
>> >> >> > *Mike Tutkowski*
>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> >> > e: mike.tutkowski@solidfire.com
>> >> >> > o: 303.746.7302
>> >> >> > Advancing the way the world uses the
>> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> >> > *™*
>> >> >> >
>> >> >>
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > *Mike Tutkowski*
>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > e: mike.tutkowski@solidfire.com
>> >> > o: 303.746.7302
>> >> > Advancing the way the world uses the
>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > *™*
>> >>
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Edison's plug-in calls the CreateCommand. Mine does not.

The initial approach that was discussed during 4.2 was for me to modify the
attach/detach logic only in the XenServer and VMware hypervisor plug-ins.

Now that I think about it more, though, I kind of would have liked to have
the storage framework send a CreateCommand to the hypervisor before sending
the AttachCommand if the storage in question was managed.

Then I could have created my SR/datastore in the CreateCommand and the
AttachCommand would have had the SR/datastore that it was always expecting
(and I wouldn't have had to create the SR/datastore in the AttachCommand).
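
For illustration, a minimal sketch of the trade-off described above; ensureDatastore,
lookupDatastore, and plugDisk are invented stand-ins, not hypervisor plug-in code:

public class AttachHandlerSketch {

    // Today: for managed storage the attach handler must create the SR/datastore itself.
    public void attachToday(String iqn, boolean managed) {
        String store = managed
                ? ensureDatastore(iqn)    // build the SR/datastore (or iSCSI session) on demand
                : lookupDatastore(iqn);   // non-managed storage: it already exists
        plugDisk(store, iqn);
    }

    // If the framework sent a CreateCommand first, attach could always assume
    // the SR/datastore is already there and simply look it up.
    public void attachWithCreateCommand(String iqn) {
        plugDisk(lookupDatastore(iqn), iqn);
    }

    // Stand-ins that only name the steps.
    private String ensureDatastore(String iqn) { return "store-for-" + iqn; }
    private String lookupDatastore(String iqn) { return "store-for-" + iqn; }
    private void plugDisk(String store, String iqn) { System.out.println("attach " + iqn + " via " + store); }
}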


On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Yeah, I think it probably is as well, but I figured you'd be in a
> better position to tell.
>
> I see that copyAsync is unsupported in your current 4.2 driver, does
> that mean that there's no template support? Or is it some other call
> that does templating now? I'm still getting up to speed on all of the
> 4.2 changes. I was just looking at CreateCommand in
> LibvirtComputingResource, since that's the only place
> createPhysicalDisk is called, and it occurred to me that CreateCommand
> might be skipped altogether when utilizing storage plugins.
>
> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > That's an interesting comment, Marcus.
> >
> > It was my intent that it should work with any CloudStack "managed"
> storage
> > that uses an iSCSI target. Even though I'm using CHAP, I wrote the code
> so
> > CHAP didn't have to be used.
> >
> > As I'm doing my testing, I can try to think about whether it is generic
> > enough to keep those names or not.
> >
> > My expectation is that it is generic enough.
> >
> >
> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >
> >> I added a comment to your diff. In general I think it looks good,
> >> though I obviously can't vouch for whether or not it will work. One
> >> thing I do have reservations about is the adaptor/pool naming. If you
> >> think the code is generic enough that it will work for anyone who does
> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's anything
> >> about it that's specific to YOUR iscsi target or how it likes to be
> >> treated then I'd say that they should be named something less generic
> >> than iScsiAdmStorage.
> >>
> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
> >> <mi...@solidfire.com> wrote:
> >> > Great - thanks!
> >> >
> >> > Just to give you an overview of what my code does (for when you get a
> >> > chance to review it):
> >> >
> >> > SolidFireHostListener is registered in
> SolidfirePrimaryDataStoreProvider.
> >> > Its hostConnect method is invoked when a host connects with the CS
> MS. If
> >> > the host is running KVM, the listener sends a
> ModifyStoragePoolCommand to
> >> > the host. This logic was based off of DefaultHostListener.
> >> >
> >> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
> >> > createStoragePool on the KVMStoragePoolManager. The
> KVMStoragePoolManager
> >> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor
> (which
> >> was
> >> > registered in the constructor for KVMStoragePoolManager under the key
> of
> >> > StoragePoolType.Iscsi.toString()).
> >> >
> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
> >> > iScsiAdmStoragePool, adds it to a map, and returns the pointer to the
> >> > iScsiAdmStoragePool object. The key of the map is the UUID of the
> storage
> >> > pool.
> >> >
> >> > When a volume is attached, createPhysicalDisk is invoked for managed
> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses iscsiadm
> to
> >> > establish the iSCSI connection to the volume on the SAN and a
> >> > KVMPhysicalDisk is returned to be used in the attach logic that
> follows.
> >> >
> >> > When a volume is detached, getPhysicalDisk is invoked with the IQN of
> the
> >> > volume if the storage pool in question is managed storage. Otherwise,
> the
> >> > normal vol.getPath() is used. iScsiAdmStorageAdaptor.getPhysicalDisk
> just
> >> > returns a new instance of KVMPhysicalDisk to be used in the detach
> logic.
> >> >
> >> > Once the volume has been detached,
> iScsiAdmStoragePool.deletePhysicalDisk
> >> > is invoked if the storage pool is managed. deletePhysicalDisk removes
> the
> >> > iSCSI connection to the volume using iscsiadm.
> >> >
> >> >
> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <shadowsor@gmail.com
> >> >wrote:
> >> >
> >> >> It's the log4j properties file in /etc/cloudstack/agent; change all
> INFO
> >> to
> >> >> DEBUG.  I imagine the agent just isn't starting, you can tail the log
> >> when
> >> >> you try to start the service, or maybe it will spit something out
> into
> >> one
> >> >> of the other files in /var/log/cloudstack/agent
> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com
> >> >
> >> >> wrote:
> >> >>
> >> >> > This is how I've been trying to query for the status of the
> service (I
> >> >> > assume it could be started this way, as well, by changing "status"
> to
> >> >> > "start" or "restart"?):
> >> >> >
> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> >> >> > cloudstack-agent status
> >> >> >
> >> >> > I get this back:
> >> >> >
> >> >> > Failed to execute: * could not access PID file for cloudstack-agent
> >> >> >
> >> >> > I've made a bunch of code changes recently, though, so I think I'm
> >> going
> >> >> to
> >> >> > rebuild and redeploy everything.
> >> >> >
> >> >> > The debug info sounds helpful. Where can I set enable.debug?
> >> >> >
> >> >> > Thanks, Marcus!
> >> >> >
> >> >> >
> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
> shadowsor@gmail.com
> >> >> > >wrote:
> >> >> >
> >> >> > > OK, will check it out in the next few days. As mentioned, you can
> >> set
> >> >> up
> >> >> > > your Ubuntu vm as the management server as well if all else
> fails.
> >>  If
> >> >> > you
> >> >> > > can get to the mgmt server on 8250 from the KVM host, then you
> need
> >> to
> >> >> > > enable.debug on the agent. It won't run without complaining
> loudly
> >> if
> >> >> it
> >> >> > > can't get to the mgmt server, and I didn't see that in your agent
> >> log,
> >> >> so
> >> >> > > perhaps it's not running. I assume you know how to stop/start the
> >> agent
> >> >> on
> >> >> > > KVM via 'service cloudstack-agent'.
> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
> >> >> mike.tutkowski@solidfire.com>
> >> >> > > wrote:
> >> >> > >
> >> >> > > > Hey Marcus,
> >> >> > > >
> >> >> > > > I haven't yet been able to test my new code, but I thought you
> >> would
> >> >> > be a
> >> >> > > > good person to ask to review it:
> >> >> > > >
> >> >> > > >
> >> >> > > >
> >> >> > >
> >> >> >
> >> >>
> >>
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> >> >> > > >
> >> >> > > > All it is supposed to do is attach and detach a data disk (that
> >> has
> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
> >> happens to
> >> >> > be
> >> >> > > > from SolidFire-backed storage - where we have a 1:1 mapping
> >> between a
> >> >> > > > CloudStack volume and a data disk.
> >> >> > > >
> >> >> > > > There is no support for hypervisor snapshots or stuff like that
> >> >> > (likely a
> >> >> > > > future release)...just attaching and detaching a data disk in
> 4.3.
> >> >> > > >
> >> >> > > > Thanks!
> >> >> > > >
> >> >> > > >
> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
> >> >> > > > mike.tutkowski@solidfire.com> wrote:
> >> >> > > >
> >> >> > > > > When I re-deployed the DEBs, I didn't remove cloudstack-agent
> >> >> first.
> >> >> > > > Would
> >> >> > > > > that be a problem? I just did a sudo apt-get install
> >> >> > cloudstack-agent.
> >> >> > > > >
> >> >> > > > >
> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
> >> >> > > > >
> >> >> > > > >> I get the same error running the command manually:
> >> >> > > > >>
> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> >> /usr/sbin/service
> >> >> > > > >> cloudstack-agent status
> >> >> > > > >>  * could not access PID file for cloudstack-agent
> >> >> > > > >>
> >> >> > > > >>
> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
> >> >> > > > >>
> >> >> > > > >>> agent.log looks OK to me:
> >> >> > > > >>>
> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
> >> >> (main:null)
> >> >> > > > Agent
> >> >> > > > >>> started
> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
> >> >> (main:null)
> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
> >> >> (main:null)
> >> >> > > > >>> agent.properties found at
> >> /etc/cloudstack/agent/agent.properties
> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
> >> >> (main:null)
> >> >> > > > >>> Defaulting to using properties file for storage
> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
> >> >> (main:null)
> >> >> > > > >>> Defaulting to the constant time backoff algorithm
> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
> >> (main:null)
> >> >> > > log4j
> >> >> > > > >>> configuration found at
> /etc/cloudstack/agent/log4j-cloud.xml
> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
> (main:null)
> >> id
> >> >> > is 3
> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
> (main:null)
> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
> >> >> scripts/network/domr/kvm
> >> >> > > > >>>
> >> >> > > > >>> However, I wasn't aware that setup.log was important. This
> >> seems
> >> >> to
> >> >> > > be
> >> >> > > > a
> >> >> > > > >>> problem, but I'm not sure what it might indicate:
> >> >> > > > >>>
> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
> >> status
> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID file
> for
> >> >> > > > >>> cloudstack-agent
> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
> >> start
> >> >> > > > >>>
> >> >> > > > >>>
> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
> >> >> > > shadowsor@gmail.com
> >> >> > > > >wrote:
> >> >> > > > >>>
> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was the agent
> log
> >> for
> >> >> > > some
> >> >> > > > >>>> reason. Is the agent started? That might be the place to
> >> look.
> >> >> > There
> >> >> > > > is
> >> >> > > > >>>> an
> >> >> > > > >>>> agent log for the agent and one for the setup when it adds
> >> the
> >> >> > host,
> >> >> > > > >>>> both
> >> >> > > > >>>> in /var/log
> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
> >> >> > > > >>>> mike.tutkowski@solidfire.com>
> >> >> > > > >>>> wrote:
> >> >> > > > >>>>
> >> >> > > > >>>> > Is it saying that the MS is at the IP address or the KVM
> >> host?
> >> >> > > > >>>> >
> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
> >> >> > > > >>>> > The MS host is at 192.168.233.1.
> >> >> > > > >>>> >
> >> >> > > > >>>> > I see this for my host Global Settings parameter:
> >> >> > > > >>>> > host (The ip address of management server): 192.168.233.1
> >> >> > > > >>>> >
> >> >> > > > >>>> >
> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
> >> >> host=192.168.233.1
> >> >> > > > value.
> >> >> > > > >>>> >
> >> >> > > > >>>> >
> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
> >> >> > > > >>>> shadowsor@gmail.com
> >> >> > > > >>>> > >wrote:
> >> >> > > > >>>> >
> >> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10? But
> you
> >> >> tried
> >> >> > > to
> >> >> > > > >>>> telnet
> >> >> > > > >>>> > to
> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change that in
> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you may
> want
> >> to
> >> >> > edit
> >> >> > > > the
> >> >> > > > >>>> > config
> >> >> > > > >>>> > > as well to tell it the real ms IP.
> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
> >> >> > > > >>>> mike.tutkowski@solidfire.com
> >> >> > > > >>>> > >
> >> >> > > > >>>> > > wrote:
> >> >> > > > >>>> > >
> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file looks
> >> like, if
> >> >> > > that
> >> >> > > > >>>> is of
> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT
> network
> >> >> > VMware
> >> >> > > > >>>> Fusion
> >> >> > > > >>>> > set
> >> >> > > > >>>> > > > up):
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > auto lo
> >> >> > > > >>>> > > > iface lo inet loopback
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > auto eth0
> >> >> > > > >>>> > > > iface eth0 inet manual
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > auto cloudbr0
> >> >> > > > >>>> > > > iface cloudbr0 inet static
> >> >> > > > >>>> > > >     address 192.168.233.10
> >> >> > > > >>>> > > >     netmask 255.255.255.0
> >> >> > > > >>>> > > >     network 192.168.233.0
> >> >> > > > >>>> > > >     broadcast 192.168.233.255
> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
> >> >> > > > >>>> > > >     bridge_ports eth0
> >> >> > > > >>>> > > >     bridge_fd 5
> >> >> > > > >>>> > > >     bridge_stp off
> >> >> > > > >>>> > > >     bridge_maxwait 1
> >> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2
> metric 1
> >> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > > You appear to be correct. This is from the MS log
> >> >> (below).
> >> >> > > > >>>> Discovery
> >> >> > > > >>>> > > > timed
> >> >> > > > >>>> > > > > out.
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > I'm not sure why this would be. My network
> settings
> >> >> > > shouldn't
> >> >> > > > >>>> have
> >> >> > > > >>>> > > > changed
> >> >> > > > >>>> > > > > since the last time I tried this.
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > I am able to ping the KVM host from the MS host
> and
> >> vice
> >> >> > > > versa.
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM on the KVM
> >> host
> >> >> > and
> >> >> > > > >>>> ping from
> >> >> > > > >>>> > > it
> >> >> > > > >>>> > > > > to the MS host.
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also running
> >> the
> >> >> CS
> >> >> > > MS)
> >> >> > > > >>>> to the
> >> >> > > > >>>> > VM
> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
> Timeout,
> >> to
> >> >> > wait
> >> >> > > > for
> >> >> > > > >>>> the
> >> >> > > > >>>> > > host
> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
> >> >>  [c.c.r.ResourceManagerImpl]
> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable
> to
> >> >> find
> >> >> > > the
> >> >> > > > >>>> server
> >> >> > > > >>>> > > > > resources at http://192.168.233.10
> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
> >> >> >  [c.c.u.e.CSExceptionErrorCode]
> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could
> not
> >> >> find
> >> >> > > > >>>> exception:
> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in error
> code
> >> >> list
> >> >> > > for
> >> >> > > > >>>> > > exceptions
> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
> >>  [o.a.c.a.c.a.h.AddHostCmd]
> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
> Exception:
> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable to
> add
> >> >> the
> >> >> > > host
> >> >> > > > >>>> > > > > at
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > >
> >> >> > > > >>>> >
> >> >> > > > >>>>
> >> >> > > >
> >> >> > >
> >> >> >
> >> >>
> >>
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my KVM
> host to
> >> >> the
> >> >> > MS
> >> >> > > > >>>> host's
> >> >> > > > >>>> > > 8250
> >> >> > > > >>>> > > > > port:
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> >> >> > > > >>>> > > > > Trying 192.168.233.1...
> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
> >> >> > > > >>>> > > > > Escape character is '^]'.
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > --
> >> >> > > > >>>> > > > *Mike Tutkowski*
> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
> >> >> > > > >>>> > > > o: 303.746.7302
> >> >> > > > >>>> > > > Advancing the way the world uses the
> >> >> > > > >>>> > > > cloud<
> >> http://solidfire.com/solution/overview/?video=play>
> >> >> > > > >>>> > > > *™*
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > >
> >> >> > > > >>>> >
> >> >> > > > >>>> >
> >> >> > > > >>>> >
> >> >> > > > >>>> > --
> >> >> > > > >>>> > *Mike Tutkowski*
> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
> >> >> > > > >>>> > o: 303.746.7302
> >> >> > > > >>>> > Advancing the way the world uses the
> >> >> > > > >>>> > cloud<
> http://solidfire.com/solution/overview/?video=play>
> >> >> > > > >>>> > *™*
> >> >> > > > >>>> >
> >> >> > > > >>>>
> >> >> > > > >>>
> >> >> > > > >>>
> >> >> > > > >>>
> >> >> > > > >>> --
> >> >> > > > >>> *Mike Tutkowski*
> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > >>> e: mike.tutkowski@solidfire.com
> >> >> > > > >>> o: 303.746.7302
> >> >> > > > >>> Advancing the way the world uses the cloud<
> >> >> > > > http://solidfire.com/solution/overview/?video=play>
> >> >> > > > >>> *™*
> >> >> > > > >>>
> >> >> > > > >>
> >> >> > > > >>
> >> >> > > > >>
> >> >> > > > >> --
> >> >> > > > >> *Mike Tutkowski*
> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > >> e: mike.tutkowski@solidfire.com
> >> >> > > > >> o: 303.746.7302
> >> >> > > > >> Advancing the way the world uses the cloud<
> >> >> > > > http://solidfire.com/solution/overview/?video=play>
> >> >> > > > >> *™*
> >> >> > > > >>
> >> >> > > > >
> >> >> > > > >
> >> >> > > > >
> >> >> > > > > --
> >> >> > > > > *Mike Tutkowski*
> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > > e: mike.tutkowski@solidfire.com
> >> >> > > > > o: 303.746.7302
> >> >> > > > > Advancing the way the world uses the cloud<
> >> >> > > > http://solidfire.com/solution/overview/?video=play>
> >> >> > > > > *™*
> >> >> > > > >
> >> >> > > >
> >> >> > > >
> >> >> > > >
> >> >> > > > --
> >> >> > > > *Mike Tutkowski*
> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > e: mike.tutkowski@solidfire.com
> >> >> > > > o: 303.746.7302
> >> >> > > > Advancing the way the world uses the
> >> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> >> >> > > > *™*
> >> >> > > >
> >> >> > >
> >> >> >
> >> >> >
> >> >> >
> >> >> > --
> >> >> > *Mike Tutkowski*
> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > e: mike.tutkowski@solidfire.com
> >> >> > o: 303.746.7302
> >> >> > Advancing the way the world uses the
> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> >> > *™*
> >> >> >
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> > *Mike Tutkowski*
> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > e: mike.tutkowski@solidfire.com
> >> > o: 303.746.7302
> >> > Advancing the way the world uses the
> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
My code does not yet support copying from a template.

Edison's default plug-in does, though (I believe):
CloudStackPrimaryDataStoreProviderImpl
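
Roughly, "no template support yet" in a driver amounts to the sketch below. The
interfaces here are simplified stand-ins for illustration only, not the real
4.2 driver signatures:

// Simplified stand-in interfaces -- NOT the actual 4.2 storage driver API.
interface CopyCapableDriver {
    boolean canCopy(Object srcData, Object destData);
    void copyAsync(Object srcData, Object destData, CopyCallback callback);
}

interface CopyCallback {
    void complete(boolean success, String error);
}

// A driver without template support declines canCopy and fails copyAsync with
// a clear message, so the framework (or an admin reading the logs) knows the
// copy has to be handled elsewhere.
class NoTemplateSupportDriver implements CopyCapableDriver {
    @Override
    public boolean canCopy(Object srcData, Object destData) {
        return false;
    }

    @Override
    public void copyAsync(Object srcData, Object destData, CopyCallback callback) {
        callback.complete(false, "template/copy operations are not implemented by this driver yet");
    }
}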


On Sat, Sep 21, 2013 at 11:56 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Yeah, I think it probably is as well, but I figured you'd be in a
> better position to tell.
>
> I see that copyAsync is unsupported in your current 4.2 driver; does
> that mean that there's no template support? Or is it some other call
> that does templating now? I'm still getting up to speed on all of the
> 4.2 changes. I was just looking at CreateCommand in
> LibvirtComputingResource, since that's the only place
> createPhysicalDisk is called, and it occurred to me that CreateCommand
> might be skipped altogether when utilizing storage plugins.
>
> On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > That's an interesting comment, Marcus.
> >
> > It was my intent that it should work with any CloudStack "managed"
> storage
> > that uses an iSCSI target. Even though I'm using CHAP, I wrote the code
> so
> > CHAP didn't have to be used.
> >
> > As I'm doing my testing, I can try to think about whether it is generic
> > enough to keep those names or not.
> >
> > My expectation is that it is generic enough.
> >
> >
> > On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >
> >> I added a comment to your diff. In general I think it looks good,
> >> though I obviously can't vouch for whether or not it will work. One
> >> thing I do have reservations about is the adaptor/pool naming. If you
> >> think the code is generic enough that it will work for anyone who does
> >> an iscsi LUN-per-volume plugin, then it's OK, but if there's anything
> >> about it that's specific to YOUR iscsi target or how it likes to be
> >> treated then I'd say that they should be named something less generic
> >> than iScsiAdmStorage.
> >>
> >> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
> >> <mi...@solidfire.com> wrote:
> >> > Great - thanks!
> >> >
> >> > Just to give you an overview of what my code does (for when you get a
> >> > chance to review it):
> >> >
> >> > SolidFireHostListener is registered in
> SolidfirePrimaryDataStoreProvider.
> >> > Its hostConnect method is invoked when a host connects with the CS
> MS. If
> >> > the host is running KVM, the listener sends a
> ModifyStoragePoolCommand to
> >> > the host. This logic was based off of DefaultHostListener.
> >> >
> >> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
> >> > createStoragePool on the KVMStoragePoolManager. The
> KVMStoragePoolManager
> >> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor
> (which
> >> was
> >> > registered in the constructor for KVMStoragePoolManager under the key
> of
> >> > StoragePoolType.Iscsi.toString()).
> >> >
> >> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
> >> > iScsiAdmStoragePool, adds it to a map, and returns the pointer to the
> >> > iScsiAdmStoragePool object. The key of the map is the UUID of the
> storage
> >> > pool.
> >> >
> >> > When a volume is attached, createPhysicalDisk is invoked for managed
> >> > storage rather than getPhysicalDisk. createPhysicalDisk uses iscsiadm
> to
> >> > establish the iSCSI connection to the volume on the SAN and a
> >> > KVMPhysicalDisk is returned to be used in the attach logic that
> follows.
> >> >
> >> > When a volume is detached, getPhysicalDisk is invoked with the IQN of
> the
> >> > volume if the storage pool in question is managed storage. Otherwise,
> the
> >> > normal vol.getPath() is used. iScsiAdmStorageAdaptor.getPhysicalDisk
> just
> >> > returns a new instance of KVMPhysicalDisk to be used in the detach
> logic.
> >> >
> >> > Once the volume has been detached,
> iScsiAdmStoragePool.deletePhysicalDisk
> >> > is invoked if the storage pool is managed. deletePhysicalDisk removes
> the
> >> > iSCSI connection to the volume using iscsiadm.
> >> >
> >> >
> >> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <shadowsor@gmail.com
> >> >wrote:
> >> >
> >> >> It's the log4j properties file in /etc/cloudstack/agent; change all
> INFO
> >> to
> >> >> DEBUG.  I imagine the agent just isn't starting, you can tail the log
> >> when
> >> >> you try to start the service, or maybe it will spit something out
> into
> >> one
> >> >> of the other files in /var/log/cloudstack/agent
> >> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com
> >> >
> >> >> wrote:
> >> >>
> >> >> > This is how I've been trying to query for the status of the
> service (I
> >> >> > assume it could be started this way, as well, by changing "status"
> to
> >> >> > "start" or "restart"?):
> >> >> >
> >> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> >> >> > cloudstack-agent status
> >> >> >
> >> >> > I get this back:
> >> >> >
> >> >> > Failed to execute: * could not access PID file for cloudstack-agent
> >> >> >
> >> >> > I've made a bunch of code changes recently, though, so I think I'm
> >> going
> >> >> to
> >> >> > rebuild and redeploy everything.
> >> >> >
> >> >> > The debug info sounds helpful. Where can I set enable.debug?
> >> >> >
> >> >> > Thanks, Marcus!
> >> >> >
> >> >> >
> >> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
> shadowsor@gmail.com
> >> >> > >wrote:
> >> >> >
> >> >> > > OK, will check it out in the next few days. As mentioned, you can
> >> set
> >> >> up
> >> >> > > your Ubuntu vm as the management server as well if all else
> fails.
> >>  If
> >> >> > you
> >> >> > > can get to the mgmt server on 8250 from the KVM host, then you
> need
> >> to
> >> >> > > enable.debug on the agent. It won't run without complaining
> loudly
> >> if
> >> >> it
> >> >> > > can't get to the mgmt server, and I didn't see that in your agent
> >> log,
> >> >> so
> >> >> > > perhaps it's not running. I assume you know how to stop/start the
> >> agent
> >> >> on
> >> >> > > KVM via 'service cloudstack-agent'.
> >> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
> >> >> mike.tutkowski@solidfire.com>
> >> >> > > wrote:
> >> >> > >
> >> >> > > > Hey Marcus,
> >> >> > > >
> >> >> > > > I haven't yet been able to test my new code, but I thought you
> >> would
> >> >> > be a
> >> >> > > > good person to ask to review it:
> >> >> > > >
> >> >> > > >
> >> >> > > >
> >> >> > >
> >> >> >
> >> >>
> >>
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> >> >> > > >
> >> >> > > > All it is supposed to do is attach and detach a data disk (that
> >> has
> >> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
> >> happens to
> >> >> > be
> >> >> > > > from SolidFire-backed storage - where we have a 1:1 mapping
> >> between a
> >> >> > > > CloudStack volume and a data disk.
> >> >> > > >
> >> >> > > > There is no support for hypervisor snapshots or stuff like that
> >> >> > (likely a
> >> >> > > > future release)...just attaching and detaching a data disk in
> 4.3.
> >> >> > > >
> >> >> > > > Thanks!
> >> >> > > >
> >> >> > > >
> >> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
> >> >> > > > mike.tutkowski@solidfire.com> wrote:
> >> >> > > >
> >> >> > > > > When I re-deployed the DEBs, I didn't remove cloudstack-agent
> >> >> first.
> >> >> > > > Would
> >> >> > > > > that be a problem? I just did a sudo apt-get install
> >> >> > cloudstack-agent.
> >> >> > > > >
> >> >> > > > >
> >> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
> >> >> > > > > mike.tutkowski@solidfire.com> wrote:
> >> >> > > > >
> >> >> > > > >> I get the same error running the command manually:
> >> >> > > > >>
> >> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> >> /usr/sbin/service
> >> >> > > > >> cloudstack-agent status
> >> >> > > > >>  * could not access PID file for cloudstack-agent
> >> >> > > > >>
> >> >> > > > >>
> >> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
> >> >> > > > >> mike.tutkowski@solidfire.com> wrote:
> >> >> > > > >>
> >> >> > > > >>> agent.log looks OK to me:
> >> >> > > > >>>
> >> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
> >> >> (main:null)
> >> >> > > > Agent
> >> >> > > > >>> started
> >> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
> >> >> (main:null)
> >> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
> >> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
> >> >> (main:null)
> >> >> > > > >>> agent.properties found at
> >> /etc/cloudstack/agent/agent.properties
> >> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
> >> >> (main:null)
> >> >> > > > >>> Defaulting to using properties file for storage
> >> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
> >> >> (main:null)
> >> >> > > > >>> Defaulting to the constant time backoff algorithm
> >> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
> >> (main:null)
> >> >> > > log4j
> >> >> > > > >>> configuration found at
> /etc/cloudstack/agent/log4j-cloud.xml
> >> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
> (main:null)
> >> id
> >> >> > is 3
> >> >> > > > >>> 2013-09-20 19:35:39,197 INFO
> >> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource]
> (main:null)
> >> >> > > > >>> VirtualRoutingResource _scriptDir to use:
> >> >> scripts/network/domr/kvm
> >> >> > > > >>>
> >> >> > > > >>> However, I wasn't aware that setup.log was important. This
> >> seems
> >> >> to
> >> >> > > be
> >> >> > > > a
> >> >> > > > >>> problem, but I'm not sure what it might indicate:
> >> >> > > > >>>
> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
> >> status
> >> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID file
> for
> >> >> > > > >>> cloudstack-agent
> >> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
> >> start
> >> >> > > > >>>
> >> >> > > > >>>
> >> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
> >> >> > > shadowsor@gmail.com
> >> >> > > > >wrote:
> >> >> > > > >>>
> >> >> > > > >>>> Sorry, I saw that in the log, I thought it was the agent
> log
> >> for
> >> >> > > some
> >> >> > > > >>>> reason. Is the agent started? That might be the place to
> >> look.
> >> >> > There
> >> >> > > > is
> >> >> > > > >>>> an
> >> >> > > > >>>> agent log for the agent and one for the setup when it adds
> >> the
> >> >> > host,
> >> >> > > > >>>> both
> >> >> > > > >>>> in /var/log
> >> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
> >> >> > > > >>>> mike.tutkowski@solidfire.com>
> >> >> > > > >>>> wrote:
> >> >> > > > >>>>
> >> >> > > > >>>> > Is it saying that the MS is at the IP address or the KVM
> >> host?
> >> >> > > > >>>> >
> >> >> > > > >>>> > The KVM host is at 192.168.233.10.
> >> >> > > > >>>> > The MS host is at 192.168.233.1.
> >> >> > > > >>>> >
> >> >> > > > >>>> > I see this for my host Global Settings parameter:
> >> >> > > > >>>> > host (The ip address of management server): 192.168.233.1
> >> >> > > > >>>> >
> >> >> > > > >>>> >
> >> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
> >> >> host=192.168.233.1
> >> >> > > > value.
> >> >> > > > >>>> >
> >> >> > > > >>>> >
> >> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
> >> >> > > > >>>> shadowsor@gmail.com
> >> >> > > > >>>> > >wrote:
> >> >> > > > >>>> >
> >> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10? But
> you
> >> >> tried
> >> >> > > to
> >> >> > > > >>>> telnet
> >> >> > > > >>>> > to
> >> >> > > > >>>> > > 192.168.233.1? It might be enough to change that in
> >> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you may
> want
> >> to
> >> >> > edit
> >> >> > > > the
> >> >> > > > >>>> > config
> >> >> > > > >>>> > > as well to tell it the real ms IP.
> >> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
> >> >> > > > >>>> mike.tutkowski@solidfire.com
> >> >> > > > >>>> > >
> >> >> > > > >>>> > > wrote:
> >> >> > > > >>>> > >
> >> >> > > > >>>> > > > Here's what my /etc/network/interfaces file looks
> >> like, if
> >> >> > > that
> >> >> > > > >>>> is of
> >> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT
> network
> >> >> > VMware
> >> >> > > > >>>> Fusion
> >> >> > > > >>>> > set
> >> >> > > > >>>> > > > up):
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > auto lo
> >> >> > > > >>>> > > > iface lo inet loopback
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > auto eth0
> >> >> > > > >>>> > > > iface eth0 inet manual
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > auto cloudbr0
> >> >> > > > >>>> > > > iface cloudbr0 inet static
> >> >> > > > >>>> > > >     address 192.168.233.10
> >> >> > > > >>>> > > >     netmask 255.255.255.0
> >> >> > > > >>>> > > >     network 192.168.233.0
> >> >> > > > >>>> > > >     broadcast 192.168.233.255
> >> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
> >> >> > > > >>>> > > >     bridge_ports eth0
> >> >> > > > >>>> > > >     bridge_fd 5
> >> >> > > > >>>> > > >     bridge_stp off
> >> >> > > > >>>> > > >     bridge_maxwait 1
> >> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2
> metric 1
> >> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> >> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > > You appear to be correct. This is from the MS log
> >> >> (below).
> >> >> > > > >>>> Discovery
> >> >> > > > >>>> > > > timed
> >> >> > > > >>>> > > > > out.
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > I'm not sure why this would be. My network
> settings
> >> >> > > shouldn't
> >> >> > > > >>>> have
> >> >> > > > >>>> > > > changed
> >> >> > > > >>>> > > > > since the last time I tried this.
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > I am able to ping the KVM host from the MS host
> and
> >> vice
> >> >> > > > versa.
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > I'm even able to manually kick off a VM on the KVM
> >> host
> >> >> > and
> >> >> > > > >>>> ping from
> >> >> > > > >>>> > > it
> >> >> > > > >>>> > > > > to the MS host.
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also running
> >> the
> >> >> CS
> >> >> > > MS)
> >> >> > > > >>>> to the
> >> >> > > > >>>> > VM
> >> >> > > > >>>> > > > > running KVM (VMware Fusion).
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> >> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
> Timeout,
> >> to
> >> >> > wait
> >> >> > > > for
> >> >> > > > >>>> the
> >> >> > > > >>>> > > host
> >> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
> >> >>  [c.c.r.ResourceManagerImpl]
> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable
> to
> >> >> find
> >> >> > > the
> >> >> > > > >>>> server
> >> >> > > > >>>> > > > > resources at http://192.168.233.10
> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
> >> >> >  [c.c.u.e.CSExceptionErrorCode]
> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could
> not
> >> >> find
> >> >> > > > >>>> exception:
> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in error
> code
> >> >> list
> >> >> > > for
> >> >> > > > >>>> > > exceptions
> >> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
> >>  [o.a.c.a.c.a.h.AddHostCmd]
> >> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
> Exception:
> >> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable to
> add
> >> >> the
> >> >> > > host
> >> >> > > > >>>> > > > > at
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > >
> >> >> > > > >>>> >
> >> >> > > > >>>>
> >> >> > > >
> >> >> > >
> >> >> >
> >> >>
> >>
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > I do seem to be able to telnet in from my KVM
> host to
> >> >> the
> >> >> > MS
> >> >> > > > >>>> host's
> >> >> > > > >>>> > > 8250
> >> >> > > > >>>> > > > > port:
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> >> >> > > > >>>> > > > > Trying 192.168.233.1...
> >> >> > > > >>>> > > > > Connected to 192.168.233.1.
> >> >> > > > >>>> > > > > Escape character is '^]'.
> >> >> > > > >>>> > > > >
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > > > --
> >> >> > > > >>>> > > > *Mike Tutkowski*
> >> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
> >> >> > > > >>>> > > > o: 303.746.7302
> >> >> > > > >>>> > > > Advancing the way the world uses the
> >> >> > > > >>>> > > > cloud<
> >> http://solidfire.com/solution/overview/?video=play>
> >> >> > > > >>>> > > > *™*
> >> >> > > > >>>> > > >
> >> >> > > > >>>> > >
> >> >> > > > >>>> >
> >> >> > > > >>>> >
> >> >> > > > >>>> >
> >> >> > > > >>>> > --
> >> >> > > > >>>> > *Mike Tutkowski*
> >> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > >>>> > e: mike.tutkowski@solidfire.com
> >> >> > > > >>>> > o: 303.746.7302
> >> >> > > > >>>> > Advancing the way the world uses the
> >> >> > > > >>>> > cloud<
> http://solidfire.com/solution/overview/?video=play>
> >> >> > > > >>>> > *™*
> >> >> > > > >>>> >
> >> >> > > > >>>>
> >> >> > > > >>>
> >> >> > > > >>>
> >> >> > > > >>>
> >> >> > > > >>> --
> >> >> > > > >>> *Mike Tutkowski*
> >> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > >>> e: mike.tutkowski@solidfire.com
> >> >> > > > >>> o: 303.746.7302
> >> >> > > > >>> Advancing the way the world uses the cloud<
> >> >> > > > http://solidfire.com/solution/overview/?video=play>
> >> >> > > > >>> *™*
> >> >> > > > >>>
> >> >> > > > >>
> >> >> > > > >>
> >> >> > > > >>
> >> >> > > > >> --
> >> >> > > > >> *Mike Tutkowski*
> >> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > >> e: mike.tutkowski@solidfire.com
> >> >> > > > >> o: 303.746.7302
> >> >> > > > >> Advancing the way the world uses the cloud<
> >> >> > > > http://solidfire.com/solution/overview/?video=play>
> >> >> > > > >> *™*
> >> >> > > > >>
> >> >> > > > >
> >> >> > > > >
> >> >> > > > >
> >> >> > > > > --
> >> >> > > > > *Mike Tutkowski*
> >> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > > e: mike.tutkowski@solidfire.com
> >> >> > > > > o: 303.746.7302
> >> >> > > > > Advancing the way the world uses the cloud<
> >> >> > > > http://solidfire.com/solution/overview/?video=play>
> >> >> > > > > *™*
> >> >> > > > >
> >> >> > > >
> >> >> > > >
> >> >> > > >
> >> >> > > > --
> >> >> > > > *Mike Tutkowski*
> >> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > > > e: mike.tutkowski@solidfire.com
> >> >> > > > o: 303.746.7302
> >> >> > > > Advancing the way the world uses the
> >> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> >> >> > > > *™*
> >> >> > > >
> >> >> > >
> >> >> >
> >> >> >
> >> >> >
> >> >> > --
> >> >> > *Mike Tutkowski*
> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> >> > e: mike.tutkowski@solidfire.com
> >> >> > o: 303.746.7302
> >> >> > Advancing the way the world uses the
> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> >> > *™*
> >> >> >
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> > *Mike Tutkowski*
> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > e: mike.tutkowski@solidfire.com
> >> > o: 303.746.7302
> >> > Advancing the way the world uses the
> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Yeah, I think it probably is as well, but I figured you'd be in a
better position to tell.

I see that copyAsync is unsupported in your current 4.2 driver; does
that mean that there's no template support? Or is it some other call
that does templating now? I'm still getting up to speed on all of the
4.2 changes. I was just looking at CreateCommand in
LibvirtComputingResource, since that's the only place
createPhysicalDisk is called, and it occurred to me that CreateCommand
might be skipped altogether when utilizing storage plugins.
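
Either way, whether createPhysicalDisk ever runs comes down to which adaptor
the pool type resolves to. A rough sketch of that keyed dispatch, using
hypothetical names (the real KVMStoragePoolManager is more involved):

import java.util.HashMap;
import java.util.Map;

public class PoolTypeDispatchSketch {

    /** Minimal stand-in for the adaptor interface. */
    public interface StorageAdaptor {
        String createPhysicalDisk(String poolUuid, String volumeUuid, long sizeBytes);
    }

    private final Map<String, StorageAdaptor> adaptors = new HashMap<>();

    public PoolTypeDispatchSketch(StorageAdaptor filesystemAdaptor, StorageAdaptor iscsiAdaptor) {
        // Mirrors the registration described earlier in the thread: one adaptor
        // per pool type, keyed by the pool type's string form.
        adaptors.put("Filesystem", filesystemAdaptor);  // classic NFS/local path
        adaptors.put("Iscsi", iscsiAdaptor);            // LUN-per-volume plugin path
    }

    public StorageAdaptor getAdaptor(String poolType) {
        StorageAdaptor adaptor = adaptors.get(poolType);
        if (adaptor == null) {
            throw new IllegalArgumentException("no adaptor registered for pool type " + poolType);
        }
        return adaptor;
    }
}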

On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> That's an interesting comment, Marcus.
>
> It was my intent that it should work with any CloudStack "managed" storage
> that uses an iSCSI target. Even though I'm using CHAP, I wrote the code so
> CHAP didn't have to be used.
>
> As I'm doing my testing, I can try to think about whether it is generic
> enough to keep those names or not.
>
> My expectation is that it is generic enough.
>
>
> On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> I added a comment to your diff. In general I think it looks good,
>> though I obviously can't vouch for whether or not it will work. One
>> thing I do have reservations about is the adaptor/pool naming. If you
>> think the code is generic enough that it will work for anyone who does
>> an iscsi LUN-per-volume plugin, then it's OK, but if there's anything
>> about it that's specific to YOUR iscsi target or how it likes to be
>> treated then I'd say that they should be named something less generic
>> than iScsiAdmStorage.
>>
>> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > Great - thanks!
>> >
>> > Just to give you an overview of what my code does (for when you get a
>> > chance to review it):
>> >
>> > SolidFireHostListener is registered in SolidfirePrimaryDataStoreProvider.
>> > Its hostConnect method is invoked when a host connects with the CS MS. If
>> > the host is running KVM, the listener sends a ModifyStoragePoolCommand to
>> > the host. This logic was based off of DefaultHostListener.
>> >
>> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
>> > createStoragePool on the KVMStoragePoolManager. The KVMStoragePoolManager
>> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor (which
>> was
>> > registered in the constructor for KVMStoragePoolManager under the key of
>> > StoragePoolType.Iscsi.toString()).
>> >
>> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
>> > iScsiAdmStoragePool, adds it to a map, and returns the pointer to the
>> > iScsiAdmStoragePool object. The key of the map is the UUID of the storage
>> > pool.
>> >
>> > When a volume is attached, createPhysicalDisk is invoked for managed
>> > storage rather than getPhysicalDisk. createPhysicalDisk uses iscsiadm to
>> > establish the iSCSI connection to the volume on the SAN and a
>> > KVMPhysicalDisk is returned to be used in the attach logic that follows.
>> >
>> > When a volume is detached, getPhysicalDisk is invoked with the IQN of the
>> > volume if the storage pool in question is managed storage. Otherwise, the
>> > normal vol.getPath() is used. iScsiAdmStorageAdaptor.getPhysicalDisk just
>> > returns a new instance of KVMPhysicalDisk to be used in the detach logic.
>> >
>> > Once the volume has been detached, iScsiAdmStoragePool.deletePhysicalDisk
>> > is invoked if the storage pool is managed. deletePhysicalDisk removes the
>> > iSCSI connection to the volume using iscsiadm.
>> >
>> >
>> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <shadowsor@gmail.com
>> >wrote:
>> >
>> >> It's the log4j properties file in /etc/cloudstack/agent; change all INFO
>> to
>> >> DEBUG.  I imagine the agent just isn't starting, you can tail the log
>> when
>> >> you try to start the service, or maybe it will spit something out into
>> one
>> >> of the other files in /var/log/cloudstack/agent
>> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com
>> >
>> >> wrote:
>> >>
>> >> > This is how I've been trying to query for the status of the service (I
>> >> > assume it could be started this way, as well, by changing "status" to
>> >> > "start" or "restart"?):
>> >> >
>> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>> >> > cloudstack-agent status
>> >> >
>> >> > I get this back:
>> >> >
>> >> > Failed to execute: * could not access PID file for cloudstack-agent
>> >> >
>> >> > I've made a bunch of code changes recently, though, so I think I'm
>> going
>> >> to
>> >> > rebuild and redeploy everything.
>> >> >
>> >> > The debug info sounds helpful. Where can I set enable.debug?
>> >> >
>> >> > Thanks, Marcus!
>> >> >
>> >> >
>> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <shadowsor@gmail.com
>> >> > >wrote:
>> >> >
>> >> > > OK, will check it out in the next few days. As mentioned, you can
>> set
>> >> up
>> >> > > your Ubuntu vm as the management server as well if all else fails.
>>  If
>> >> > you
>> >> > > can get to the mgmt server on 8250 from the KVM host, then you need
>> to
>> >> > > enable.debug on the agent. It won't run without complaining loudly
>> if
>> >> it
>> >> > > can't get to the mgmt server, and I didn't see that in your agent
>> log,
>> >> so
>> >> > > perhaps it's not running. I assume you know how to stop/start the
>> agent
>> >> on
>> >> > > KVM via 'service cloudstack-agent'.
>> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>> >> mike.tutkowski@solidfire.com>
>> >> > > wrote:
>> >> > >
>> >> > > > Hey Marcus,
>> >> > > >
>> >> > > > I haven't yet been able to test my new code, but I thought you
>> would
>> >> > be a
>> >> > > > good person to ask to review it:
>> >> > > >
>> >> > > >
>> >> > > >
>> >> > >
>> >> >
>> >>
>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>> >> > > >
>> >> > > > All it is supposed to do is attach and detach a data disk (that
>> has
>> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
>> happens to
>> >> > be
>> >> > > > from SolidFire-backed storage - where we have a 1:1 mapping
>> between a
>> >> > > > CloudStack volume and a data disk.
>> >> > > >
>> >> > > > There is no support for hypervisor snapshots or stuff like that
>> >> > (likely a
>> >> > > > future release)...just attaching and detaching a data disk in 4.3.
>> >> > > >
>> >> > > > Thanks!
>> >> > > >
>> >> > > >
>> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>> >> > > > mike.tutkowski@solidfire.com> wrote:
>> >> > > >
>> >> > > > > When I re-deployed the DEBs, I didn't remove cloudstack-agent
>> >> first.
>> >> > > > Would
>> >> > > > > that be a problem? I just did a sudo apt-get install
>> >> > cloudstack-agent.
>> >> > > > >
>> >> > > > >
>> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>> >> > > > > mike.tutkowski@solidfire.com> wrote:
>> >> > > > >
>> >> > > > >> I get the same error running the command manually:
>> >> > > > >>
>> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>> /usr/sbin/service
>> >> > > > >> cloudstack-agent status
>> >> > > > >>  * could not access PID file for cloudstack-agent
>> >> > > > >>
>> >> > > > >>
>> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>> >> > > > >>
>> >> > > > >>> agent.log looks OK to me:
>> >> > > > >>>
>> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
>> >> (main:null)
>> >> > > > Agent
>> >> > > > >>> started
>> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
>> >> (main:null)
>> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
>> >> (main:null)
>> >> > > > >>> agent.properties found at
>> /etc/cloudstack/agent/agent.properties
>> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
>> >> (main:null)
>> >> > > > >>> Defaulting to using properties file for storage
>> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
>> >> (main:null)
>> >> > > > >>> Defaulting to the constant time backoff algorithm
>> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
>> (main:null)
>> >> > > log4j
>> >> > > > >>> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
>> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null)
>> id
>> >> > is 3
>> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
>> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>> >> scripts/network/domr/kvm
>> >> > > > >>>
>> >> > > > >>> However, I wasn't aware that setup.log was important. This
>> seems
>> >> to
>> >> > > be
>> >> > > > a
>> >> > > > >>> problem, but I'm not sure what it might indicate:
>> >> > > > >>>
>> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
>> status
>> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID file for
>> >> > > > >>> cloudstack-agent
>> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
>> start
>> >> > > > >>>
>> >> > > > >>>
>> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>> >> > > shadowsor@gmail.com
>> >> > > > >wrote:
>> >> > > > >>>
>> >> > > > >>>> Sorry, I saw that in the log, I thought it was the agent log
>> for
>> >> > > some
>> >> > > > >>>> reason. Is the agent started? That might be the place to
>> look.
>> >> > There
>> >> > > > is
>> >> > > > >>>> an
>> >> > > > >>>> agent log for the agent and one for the setup when it adds
>> the
>> >> > host,
>> >> > > > >>>> both
>> >> > > > >>>> in /var/log
>> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>> >> > > > >>>> mike.tutkowski@solidfire.com>
>> >> > > > >>>> wrote:
>> >> > > > >>>>
>> >> > > > >>>> > Is it saying that the MS is at the IP address or the KVM
>> host?
>> >> > > > >>>> >
>> >> > > > >>>> > The KVM host is at 192.168.233.10.
>> >> > > > >>>> > The MS host is at 192.168.233.1.
>> >> > > > >>>> >
>> >> > > > >>>> > I see this for my host Global Settings parameter:
>> >> > > > >>>> > host (The ip address of management server): 192.168.233.1
>> >> > > > >>>> >
>> >> > > > >>>> >
>> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>> >> host=192.168.233.1
>> >> > > > value.
>> >> > > > >>>> >
>> >> > > > >>>> >
>> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
>> >> > > > >>>> shadowsor@gmail.com
>> >> > > > >>>> > >wrote:
>> >> > > > >>>> >
>> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10? But you
>> >> tried
>> >> > > to
>> >> > > > >>>> telnet
>> >> > > > >>>> > to
>> >> > > > >>>> > > 192.168.233.1? It might be enough to change that in
>> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you may want
>> to
>> >> > edit
>> >> > > > the
>> >> > > > >>>> > config
>> >> > > > >>>> > > as well to tell it the real ms IP.
>> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>> >> > > > >>>> mike.tutkowski@solidfire.com
>> >> > > > >>>> > >
>> >> > > > >>>> > > wrote:
>> >> > > > >>>> > >
>> >> > > > >>>> > > > Here's what my /etc/network/interfaces file looks
>> like, if
>> >> > > that
>> >> > > > >>>> is of
>> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT network
>> >> > VMware
>> >> > > > >>>> Fusion
>> >> > > > >>>> > set
>> >> > > > >>>> > > > up):
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > auto lo
>> >> > > > >>>> > > > iface lo inet loopback
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > auto eth0
>> >> > > > >>>> > > > iface eth0 inet manual
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > auto cloudbr0
>> >> > > > >>>> > > > iface cloudbr0 inet static
>> >> > > > >>>> > > >     address 192.168.233.10
>> >> > > > >>>> > > >     netmask 255.255.255.0
>> >> > > > >>>> > > >     network 192.168.233.0
>> >> > > > >>>> > > >     broadcast 192.168.233.255
>> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>> >> > > > >>>> > > >     bridge_ports eth0
>> >> > > > >>>> > > >     bridge_fd 5
>> >> > > > >>>> > > >     bridge_stp off
>> >> > > > >>>> > > >     bridge_maxwait 1
>> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2 metric 1
>> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
>> >> > > > >>>> > > >
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
>> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > > You appear to be correct. This is from the MS log
>> >> (below).
>> >> > > > >>>> Discovery
>> >> > > > >>>> > > > timed
>> >> > > > >>>> > > > > out.
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > I'm not sure why this would be. My network settings
>> >> > > shouldn't
>> >> > > > >>>> have
>> >> > > > >>>> > > > changed
>> >> > > > >>>> > > > > since the last time I tried this.
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > I am able to ping the KVM host from the MS host and
>> vice
>> >> > > > versa.
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > I'm even able to manually kick off a VM on the KVM
>> host
>> >> > and
>> >> > > > >>>> ping from
>> >> > > > >>>> > > it
>> >> > > > >>>> > > > > to the MS host.
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also running
>> the
>> >> CS
>> >> > > MS)
>> >> > > > >>>> to the
>> >> > > > >>>> > VM
>> >> > > > >>>> > > > > running KVM (VMware Fusion).
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout,
>> to
>> >> > wait
>> >> > > > for
>> >> > > > >>>> the
>> >> > > > >>>> > > host
>> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>> >>  [c.c.r.ResourceManagerImpl]
>> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to
>> >> find
>> >> > > the
>> >> > > > >>>> server
>> >> > > > >>>> > > > > resources at http://192.168.233.10
>> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>> >> >  [c.c.u.e.CSExceptionErrorCode]
>> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not
>> >> find
>> >> > > > >>>> exception:
>> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in error code
>> >> list
>> >> > > for
>> >> > > > >>>> > > exceptions
>> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>>  [o.a.c.a.c.a.h.AddHostCmd]
>> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
>> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable to add
>> >> the
>> >> > > host
>> >> > > > >>>> > > > > at
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > >
>> >> > > > >>>> > >
>> >> > > > >>>> >
>> >> > > > >>>>
>> >> > > >
>> >> > >
>> >> >
>> >>
>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > I do seem to be able to telnet in from my KVM host to
>> >> the
>> >> > MS
>> >> > > > >>>> host's
>> >> > > > >>>> > > 8250
>> >> > > > >>>> > > > > port:
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>> >> > > > >>>> > > > > Trying 192.168.233.1...
>> >> > > > >>>> > > > > Connected to 192.168.233.1.
>> >> > > > >>>> > > > > Escape character is '^]'.
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > >
>> >> > > > >>>> > > >
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > --
>> >> > > > >>>> > > > *Mike Tutkowski*
>> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>> >> > > > >>>> > > > o: 303.746.7302
>> >> > > > >>>> > > > Advancing the way the world uses the
>> >> > > > >>>> > > > cloud<
>> http://solidfire.com/solution/overview/?video=play>
>> >> > > > >>>> > > > *™*
>> >> > > > >>>> > > >
>> >> > > > >>>> > >
>> >> > > > >>>> >
>> >> > > > >>>> >
>> >> > > > >>>> >
>> >> > > > >>>> > --
>> >> > > > >>>> > *Mike Tutkowski*
>> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>> >> > > > >>>> > o: 303.746.7302
>> >> > > > >>>> > Advancing the way the world uses the
>> >> > > > >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > > > >>>> > *™*
>> >> > > > >>>> >
>> >> > > > >>>>
>> >> > > > >>>
>> >> > > > >>>
>> >> > > > >>>
>> >> > > > >>> --
>> >> > > > >>> *Mike Tutkowski*
>> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > >>> e: mike.tutkowski@solidfire.com
>> >> > > > >>> o: 303.746.7302
>> >> > > > >>> Advancing the way the world uses the cloud<
>> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >> > > > >>> *™*
>> >> > > > >>>
>> >> > > > >>
>> >> > > > >>
>> >> > > > >>
>> >> > > > >> --
>> >> > > > >> *Mike Tutkowski*
>> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > >> e: mike.tutkowski@solidfire.com
>> >> > > > >> o: 303.746.7302
>> >> > > > >> Advancing the way the world uses the cloud<
>> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >> > > > >> *™*
>> >> > > > >>
>> >> > > > >
>> >> > > > >
>> >> > > > >
>> >> > > > > --
>> >> > > > > *Mike Tutkowski*
>> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > > e: mike.tutkowski@solidfire.com
>> >> > > > > o: 303.746.7302
>> >> > > > > Advancing the way the world uses the cloud<
>> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >> > > > > *™*
>> >> > > > >
>> >> > > >
>> >> > > >
>> >> > > >
>> >> > > > --
>> >> > > > *Mike Tutkowski*
>> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > e: mike.tutkowski@solidfire.com
>> >> > > > o: 303.746.7302
>> >> > > > Advancing the way the world uses the
>> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > > > *™*
>> >> > > >
>> >> > >
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > *Mike Tutkowski*
>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > e: mike.tutkowski@solidfire.com
>> >> > o: 303.746.7302
>> >> > Advancing the way the world uses the
>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > *™*
>> >> >
>> >>
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Adding a connectPhysicalDisk method sounds good.

I probably should add a disconnectPhysicalDisk method, as well, and not use
the deletePhysicalDisk method.
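
As a rough sketch of that split (the signatures below are guesses; only the
method names come from this thread), the adaptor interface would gain something
like:

// Hedged sketch: signatures are hypothetical; only the method names
// (connectPhysicalDisk / disconnectPhysicalDisk) come from this discussion.
public interface ManagedStorageAdaptorSketch {

    // Log the host in to the per-volume iSCSI target and return the local device path.
    String connectPhysicalDisk(String volumeIqn, String poolUuid);

    // Log the host out of the target; the volume itself stays untouched on the SAN.
    boolean disconnectPhysicalDisk(String volumeIqn, String poolUuid);
}

That would leave createPhysicalDisk/deletePhysicalDisk free to keep meaning
"create or delete the volume itself", as they do for other pool types.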


On Sat, Sep 21, 2013 at 11:38 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> That's an interesting comment, Marcus.
>
> It was my intent that it should work with any CloudStack "managed" storage
> that uses an iSCSI target. Even though I'm using CHAP, I wrote the code so
> CHAP didn't have to be used.
>
> As I'm doing my testing, I can try to think about whether it is generic
> enough to keep those names or not.
>
> My expectation is that it is generic enough.
>
>
> On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> I added a comment to your diff. In general I think it looks good,
>> though I obviously can't vouch for whether or not it will work. One
>> thing I do have reservations about is the adaptor/pool naming. If you
>> think the code is generic enough that it will work for anyone who does
>> an iscsi LUN-per-volume plugin, then it's OK, but if there's anything
>> about it that's specific to YOUR iscsi target or how it likes to be
>> treated then I'd say that they should be named something less generic
>> than iScsiAdmStorage.
>>
>> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > Great - thanks!
>> >
>> > Just to give you an overview of what my code does (for when you get a
>> > chance to review it):
>> >
>> > SolidFireHostListener is registered in
>> SolidfirePrimaryDataStoreProvider.
>> > Its hostConnect method is invoked when a host connects with the CS MS.
>> If
>> > the host is running KVM, the listener sends a ModifyStoragePoolCommand
>> to
>> > the host. This logic was based off of DefaultHostListener.
>> >
>> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
>> > createStoragePool on the KVMStoragePoolManager. The
>> KVMStoragePoolManager
>> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor (which
>> was
>> > registered in the constructor for KVMStoragePoolManager under the key of
>> > StoragePoolType.Iscsi.toString()).
>> >
>> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
>> > iScsiAdmStoragePool, adds it to a map, and returns the pointer to the
>> > iScsiAdmStoragePool object. The key of the map is the UUID of the
>> storage
>> > pool.
>> >
>> > When a volume is attached, createPhysicalDisk is invoked for managed
>> > storage rather than getPhysicalDisk. createPhysicalDisk uses iscsiadm to
>> > establish the iSCSI connection to the volume on the SAN and a
>> > KVMPhysicalDisk is returned to be used in the attach logic that follows.
>> >
>> > When a volume is detached, getPhysicalDisk is invoked with the IQN of
>> the
>> > volume if the storage pool in question is managed storage. Otherwise,
>> the
>> > normal vol.getPath() is used. iScsiAdmStorageAdaptor.getPhysicalDisk
>> just
>> > returns a new instance of KVMPhysicalDisk to be used in the detach
>> logic.
>> >
>> > Once the volume has been detached,
>> iScsiAdmStoragePool.deletePhysicalDisk
>> > is invoked if the storage pool is managed. deletePhysicalDisk removes
>> the
>> > iSCSI connection to the volume using iscsiadm.
>> >
>> >
>> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <shadowsor@gmail.com
>> >wrote:
>> >
>> >> Its the log4j properties file in /etc/cloudstack/agent change all INFO
>> to
>> >> DEBUG.  I imagine the agent just isn't starting, you can tail the log
>> when
>> >> you try to start the service, or maybe it will spit something out into
>> one
>> >> of the other files in /var/log/cloudstack/agent
>> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <
>> mike.tutkowski@solidfire.com>
>> >> wrote:
>> >>
>> >> > This is how I've been trying to query for the status of the service
>> (I
>> >> > assume it could be started this way, as well, by changing "status" to
>> >> > "start" or "restart"?):
>> >> >
>> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>> >> > cloudstack-agent status
>> >> >
>> >> > I get this back:
>> >> >
>> >> > Failed to execute: * could not access PID file for cloudstack-agent
>> >> >
>> >> > I've made a bunch of code changes recently, though, so I think I'm
>> going
>> >> to
>> >> > rebuild and redeploy everything.
>> >> >
>> >> > The debug info sounds helpful. Where can I set enable.debug?
>> >> >
>> >> > Thanks, Marcus!
>> >> >
>> >> >
>> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> >> > >wrote:
>> >> >
>> >> > > OK, will check it out in the next few days. As mentioned, you can
>> set
>> >> up
>> >> > > your Ubuntu vm as the management server as well if all else fails.
>>  If
>> >> > you
>> >> > > can get to the mgmt server on 8250 from the KVM host, then you
>> need to
>> >> > > enable.debug on the agent. It won't run without complaining loudly
>> if
>> >> it
>> >> > > can't get to the mgmt server, and I didn't see that in your agent
>> log,
>> >> so
>> >> > > perhaps its not running. I assume you know how to stop/start the
>> agent
>> >> on
>> >> > > KVM via 'service cloudstack-agent'.
>> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>> >> mike.tutkowski@solidfire.com>
>> >> > > wrote:
>> >> > >
>> >> > > > Hey Marcus,
>> >> > > >
>> >> > > > I haven't yet been able to test my new code, but I thought you
>> would
>> >> > be a
>> >> > > > good person to ask to review it:
>> >> > > >
>> >> > > >
>> >> > > >
>> >> > >
>> >> >
>> >>
>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>> >> > > >
>> >> > > > All it is supposed to do is attach and detach a data disk (that
>> has
>> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
>> happens to
>> >> > be
>> >> > > > from SolidFire-backed storage - where we have a 1:1 mapping
>> between a
>> >> > > > CloudStack volume and a data disk.
>> >> > > >
>> >> > > > There is no support for hypervisor snapshots or stuff like that
>> >> > (likely a
>> >> > > > future release)...just attaching and detaching a data disk in
>> 4.3.
>> >> > > >
>> >> > > > Thanks!
>> >> > > >
>> >> > > >
>> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>> >> > > > mike.tutkowski@solidfire.com> wrote:
>> >> > > >
>> >> > > > > When I re-deployed the DEBs, I didn't remove cloudstack-agent
>> >> first.
>> >> > > > Would
>> >> > > > > that be a problem? I just did a sudo apt-get install
>> >> > cloudstack-agent.
>> >> > > > >
>> >> > > > >
>> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>> >> > > > > mike.tutkowski@solidfire.com> wrote:
>> >> > > > >
>> >> > > > >> I get the same error running the command manually:
>> >> > > > >>
>> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
>> /usr/sbin/service
>> >> > > > >> cloudstack-agent status
>> >> > > > >>  * could not access PID file for cloudstack-agent
>> >> > > > >>
>> >> > > > >>
>> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>> >> > > > >> mike.tutkowski@solidfire.com> wrote:
>> >> > > > >>
>> >> > > > >>> agent.log looks OK to me:
>> >> > > > >>>
>> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
>> >> (main:null)
>> >> > > > Agent
>> >> > > > >>> started
>> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
>> >> (main:null)
>> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
>> >> (main:null)
>> >> > > > >>> agent.properties found at
>> /etc/cloudstack/agent/agent.properties
>> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
>> >> (main:null)
>> >> > > > >>> Defaulting to using properties file for storage
>> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
>> >> (main:null)
>> >> > > > >>> Defaulting to the constant time backoff algorithm
>> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
>> (main:null)
>> >> > > log4j
>> >> > > > >>> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
>> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent]
>> (main:null) id
>> >> > is 3
>> >> > > > >>> 2013-09-20 19:35:39,197 INFO
>> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
>> >> > > > >>> VirtualRoutingResource _scriptDir to use:
>> >> scripts/network/domr/kvm
>> >> > > > >>>
>> >> > > > >>> However, I wasn't aware that setup.log was important. This
>> seems
>> >> to
>> >> > > be
>> >> > > > a
>> >> > > > >>> problem, but I'm not sure what it might indicate:
>> >> > > > >>>
>> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
>> status
>> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID file for
>> >> > > > >>> cloudstack-agent
>> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
>> start
>> >> > > > >>>
>> >> > > > >>>
>> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>> >> > > shadowsor@gmail.com
>> >> > > > >wrote:
>> >> > > > >>>
>> >> > > > >>>> Sorry, I saw that in the log, I thought it was the agent
>> log for
>> >> > > some
>> >> > > > >>>> reason. Is the agent started? That might be the place to
>> look.
>> >> > There
>> >> > > > is
>> >> > > > >>>> an
>> >> > > > >>>> agent log for the agent and one for the setup when it adds
>> the
>> >> > host,
>> >> > > > >>>> both
>> >> > > > >>>> in /var/log
>> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>> >> > > > >>>> mike.tutkowski@solidfire.com>
>> >> > > > >>>> wrote:
>> >> > > > >>>>
>> >> > > > >>>> > Is it saying that the MS is at the IP address or the KVM
>> host?
>> >> > > > >>>> >
>> >> > > > >>>> > The KVM host is at 192.168.233.10.
>> >> > > > >>>> > The MS host is at 192.168.233.1.
>> >> > > > >>>> >
>> >> > > > >>>> > I see this for my host Global Settings parameter:
>> >> > > > >>>> > hostThe ip address of management server192.168.233.1
>> >> > > > >>>> >
>> >> > > > >>>> >
>> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>> >> host=192.168.233.1
>> >> > > > value.
>> >> > > > >>>> >
>> >> > > > >>>> >
>> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
>> >> > > > >>>> shadowsor@gmail.com
>> >> > > > >>>> > >wrote:
>> >> > > > >>>> >
>> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10? But you
>> >> tried
>> >> > > to
>> >> > > > >>>> telnet
>> >> > > > >>>> > to
>> >> > > > >>>> > > 192.168.233.1? It might be enough to change that in
>> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you may
>> want to
>> >> > edit
>> >> > > > the
>> >> > > > >>>> > config
>> >> > > > >>>> > > as well to tell it the real ms IP.
>> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>> >> > > > >>>> mike.tutkowski@solidfire.com
>> >> > > > >>>> > >
>> >> > > > >>>> > > wrote:
>> >> > > > >>>> > >
>> >> > > > >>>> > > > Here's what my /etc/network/interfaces file looks
>> like, if
>> >> > > that
>> >> > > > >>>> is of
>> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT network
>> >> > VMware
>> >> > > > >>>> Fusion
>> >> > > > >>>> > set
>> >> > > > >>>> > > > up):
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > auto lo
>> >> > > > >>>> > > > iface lo inet loopback
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > auto eth0
>> >> > > > >>>> > > > iface eth0 inet manual
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > auto cloudbr0
>> >> > > > >>>> > > > iface cloudbr0 inet static
>> >> > > > >>>> > > >     address 192.168.233.10
>> >> > > > >>>> > > >     netmask 255.255.255.0
>> >> > > > >>>> > > >     network 192.168.233.0
>> >> > > > >>>> > > >     broadcast 192.168.233.255
>> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
>> >> > > > >>>> > > >     bridge_ports eth0
>> >> > > > >>>> > > >     bridge_fd 5
>> >> > > > >>>> > > >     bridge_stp off
>> >> > > > >>>> > > >     bridge_maxwait 1
>> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2 metric
>> 1
>> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
>> >> > > > >>>> > > >
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
>> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > > You appear to be correct. This is from the MS log
>> >> (below).
>> >> > > > >>>> Discovery
>> >> > > > >>>> > > > timed
>> >> > > > >>>> > > > > out.
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > I'm not sure why this would be. My network settings
>> >> > > shouldn't
>> >> > > > >>>> have
>> >> > > > >>>> > > > changed
>> >> > > > >>>> > > > > since the last time I tried this.
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > I am able to ping the KVM host from the MS host and
>> vice
>> >> > > > versa.
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > I'm even able to manually kick off a VM on the KVM
>> host
>> >> > and
>> >> > > > >>>> ping from
>> >> > > > >>>> > > it
>> >> > > > >>>> > > > > to the MS host.
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also running
>> the
>> >> CS
>> >> > > MS)
>> >> > > > >>>> to the
>> >> > > > >>>> > VM
>> >> > > > >>>> > > > > running KVM (VMware Fusion).
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout,
>> to
>> >> > wait
>> >> > > > for
>> >> > > > >>>> the
>> >> > > > >>>> > > host
>> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>> >>  [c.c.r.ResourceManagerImpl]
>> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to
>> >> find
>> >> > > the
>> >> > > > >>>> server
>> >> > > > >>>> > > > > resources at http://192.168.233.10
>> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>> >> >  [c.c.u.e.CSExceptionErrorCode]
>> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not
>> >> find
>> >> > > > >>>> exception:
>> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in error code
>> >> list
>> >> > > for
>> >> > > > >>>> > > exceptions
>> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>>  [o.a.c.a.c.a.h.AddHostCmd]
>> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48)
>> Exception:
>> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable to
>> add
>> >> the
>> >> > > host
>> >> > > > >>>> > > > > at
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > >
>> >> > > > >>>> > >
>> >> > > > >>>> >
>> >> > > > >>>>
>> >> > > >
>> >> > >
>> >> >
>> >>
>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > I do seem to be able to telnet in from my KVM host
>> to
>> >> the
>> >> > MS
>> >> > > > >>>> host's
>> >> > > > >>>> > > 8250
>> >> > > > >>>> > > > > port:
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>> >> > > > >>>> > > > > Trying 192.168.233.1...
>> >> > > > >>>> > > > > Connected to 192.168.233.1.
>> >> > > > >>>> > > > > Escape character is '^]'.
>> >> > > > >>>> > > > >
>> >> > > > >>>> > > >
>> >> > > > >>>> > > >
>> >> > > > >>>> > > >
>> >> > > > >>>> > > > --
>> >> > > > >>>> > > > *Mike Tutkowski*
>> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>> >> > > > >>>> > > > o: 303.746.7302
>> >> > > > >>>> > > > Advancing the way the world uses the
>> >> > > > >>>> > > > cloud<
>> http://solidfire.com/solution/overview/?video=play>
>> >> > > > >>>> > > > *™*
>> >> > > > >>>> > > >
>> >> > > > >>>> > >
>> >> > > > >>>> >
>> >> > > > >>>> >
>> >> > > > >>>> >
>> >> > > > >>>> > --
>> >> > > > >>>> > *Mike Tutkowski*
>> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > >>>> > e: mike.tutkowski@solidfire.com
>> >> > > > >>>> > o: 303.746.7302
>> >> > > > >>>> > Advancing the way the world uses the
>> >> > > > >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > > > >>>> > *™*
>> >> > > > >>>> >
>> >> > > > >>>>
>> >> > > > >>>
>> >> > > > >>>
>> >> > > > >>>
>> >> > > > >>> --
>> >> > > > >>> *Mike Tutkowski*
>> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > >>> e: mike.tutkowski@solidfire.com
>> >> > > > >>> o: 303.746.7302
>> >> > > > >>> Advancing the way the world uses the cloud<
>> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >> > > > >>> *™*
>> >> > > > >>>
>> >> > > > >>
>> >> > > > >>
>> >> > > > >>
>> >> > > > >> --
>> >> > > > >> *Mike Tutkowski*
>> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > >> e: mike.tutkowski@solidfire.com
>> >> > > > >> o: 303.746.7302
>> >> > > > >> Advancing the way the world uses the cloud<
>> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >> > > > >> *™*
>> >> > > > >>
>> >> > > > >
>> >> > > > >
>> >> > > > >
>> >> > > > > --
>> >> > > > > *Mike Tutkowski*
>> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > > e: mike.tutkowski@solidfire.com
>> >> > > > > o: 303.746.7302
>> >> > > > > Advancing the way the world uses the cloud<
>> >> > > > http://solidfire.com/solution/overview/?video=play>
>> >> > > > > *™*
>> >> > > > >
>> >> > > >
>> >> > > >
>> >> > > >
>> >> > > > --
>> >> > > > *Mike Tutkowski*
>> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > > e: mike.tutkowski@solidfire.com
>> >> > > > o: 303.746.7302
>> >> > > > Advancing the way the world uses the
>> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > > > *™*
>> >> > > >
>> >> > >
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > *Mike Tutkowski*
>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > e: mike.tutkowski@solidfire.com
>> >> > o: 303.746.7302
>> >> > Advancing the way the world uses the
>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > *™*
>> >> >
>> >>
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
That's an interesting comment, Marcus.

My intent was for it to work with any CloudStack "managed" storage that
exposes an iSCSI target. Even though I'm using CHAP, I wrote the code so
that CHAP doesn't have to be used.

As I do my testing, I'll consider whether the code really is generic
enough to justify keeping those names.

My expectation is that it is.
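
For what it's worth, here is roughly how the optional-CHAP handling works
(a simplified, self-contained sketch rather than the actual plug-in code;
it just shells out to iscsiadm and only pushes the auth settings when CHAP
credentials were supplied):

import java.io.IOException;

public class IscsiAdmLoginSketch {
    // Log in to an iSCSI target; CHAP settings are applied only when a
    // username/secret pair is provided.
    public static void login(String iqn, String portal, String chapUser,
            String chapSecret) throws IOException, InterruptedException {
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "-o", "new");

        if (chapUser != null && chapSecret != null) {
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--op",
                "update", "-n", "node.session.auth.authmethod", "-v", "CHAP");
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--op",
                "update", "-n", "node.session.auth.username", "-v", chapUser);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--op",
                "update", "-n", "node.session.auth.password", "-v", chapSecret);
        }

        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
    }

    private static void run(String... cmd)
            throws IOException, InterruptedException {
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}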


On Sat, Sep 21, 2013 at 11:32 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> I added a comment to your diff. In general I think it looks good,
> though I obviously can't vouch for whether or not it will work. One
> thing I do have reservations about is the adaptor/pool naming. If you
> think the code is generic enough that it will work for anyone who does
> an iscsi LUN-per-volume plugin, then it's OK, but if there's anything
> about it that's specific to YOUR iscsi target or how it likes to be
> treated then I'd say that they should be named something less generic
> than iScsiAdmStorage.
>
> On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > Great - thanks!
> >
> > Just to give you an overview of what my code does (for when you get a
> > chance to review it):
> >
> > SolidFireHostListener is registered in SolidfirePrimaryDataStoreProvider.
> > Its hostConnect method is invoked when a host connects with the CS MS. If
> > the host is running KVM, the listener sends a ModifyStoragePoolCommand to
> > the host. This logic was based off of DefaultHostListener.
> >
> > The handling of ModifyStoragePoolCommand is unchanged. It invokes
> > createStoragePool on the KVMStoragePoolManager. The KVMStoragePoolManager
> > asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor (which
> was
> > registered in the constructor for KVMStoragePoolManager under the key of
> > StoragePoolType.Iscsi.toString()).
> >
> > iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
> > iScsiAdmStoragePool, adds it to a map, and returns the pointer to the
> > iScsiAdmStoragePool object. The key of the map is the UUID of the storage
> > pool.
> >
> > When a volume is attached, createPhysicalDisk is invoked for managed
> > storage rather than getPhysicalDisk. createPhysicalDisk uses iscsiadm to
> > establish the iSCSI connection to the volume on the SAN and a
> > KVMPhysicalDisk is returned to be used in the attach logic that follows.
> >
> > When a volume is detached, getPhysicalDisk is invoked with the IQN of the
> > volume if the storage pool in question is managed storage. Otherwise, the
> > normal vol.getPath() is used. iScsiAdmStorageAdaptor.getPhysicalDisk just
> > returns a new instance of KVMPhysicalDisk to be used in the detach logic.
> >
> > Once the volume has been detached, iScsiAdmStoragePool.deletePhysicalDisk
> > is invoked if the storage pool is managed. deletePhysicalDisk removes the
> > iSCSI connection to the volume using iscsiadm.
> >
> >
> > On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >
> >> Its the log4j properties file in /etc/cloudstack/agent change all INFO
> to
> >> DEBUG.  I imagine the agent just isn't starting, you can tail the log
> when
> >> you try to start the service, or maybe it will spit something out into
> one
> >> of the other files in /var/log/cloudstack/agent
> >> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com
> >
> >> wrote:
> >>
> >> > This is how I've been trying to query for the status of the service (I
> >> > assume it could be started this way, as well, by changing "status" to
> >> > "start" or "restart"?):
> >> >
> >> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> >> > cloudstack-agent status
> >> >
> >> > I get this back:
> >> >
> >> > Failed to execute: * could not access PID file for cloudstack-agent
> >> >
> >> > I've made a bunch of code changes recently, though, so I think I'm
> going
> >> to
> >> > rebuild and redeploy everything.
> >> >
> >> > The debug info sounds helpful. Where can I set enable.debug?
> >> >
> >> > Thanks, Marcus!
> >> >
> >> >
> >> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <shadowsor@gmail.com
> >> > >wrote:
> >> >
> >> > > OK, will check it out in the next few days. As mentioned, you can
> set
> >> up
> >> > > your Ubuntu vm as the management server as well if all else fails.
>  If
> >> > you
> >> > > can get to the mgmt server on 8250 from the KVM host, then you need
> to
> >> > > enable.debug on the agent. It won't run without complaining loudly
> if
> >> it
> >> > > can't get to the mgmt server, and I didn't see that in your agent
> log,
> >> so
> >> > > perhaps its not running. I assume you know how to stop/start the
> agent
> >> on
> >> > > KVM via 'service cloudstack-agent'.
> >> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
> >> mike.tutkowski@solidfire.com>
> >> > > wrote:
> >> > >
> >> > > > Hey Marcus,
> >> > > >
> >> > > > I haven't yet been able to test my new code, but I thought you
> would
> >> > be a
> >> > > > good person to ask to review it:
> >> > > >
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> >> > > >
> >> > > > All it is supposed to do is attach and detach a data disk (that
> has
> >> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk
> happens to
> >> > be
> >> > > > from SolidFire-backed storage - where we have a 1:1 mapping
> between a
> >> > > > CloudStack volume and a data disk.
> >> > > >
> >> > > > There is no support for hypervisor snapshots or stuff like that
> >> > (likely a
> >> > > > future release)...just attaching and detaching a data disk in 4.3.
> >> > > >
> >> > > > Thanks!
> >> > > >
> >> > > >
> >> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
> >> > > > mike.tutkowski@solidfire.com> wrote:
> >> > > >
> >> > > > > When I re-deployed the DEBs, I didn't remove cloudstack-agent
> >> first.
> >> > > > Would
> >> > > > > that be a problem? I just did a sudo apt-get install
> >> > cloudstack-agent.
> >> > > > >
> >> > > > >
> >> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
> >> > > > > mike.tutkowski@solidfire.com> wrote:
> >> > > > >
> >> > > > >> I get the same error running the command manually:
> >> > > > >>
> >> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo
> /usr/sbin/service
> >> > > > >> cloudstack-agent status
> >> > > > >>  * could not access PID file for cloudstack-agent
> >> > > > >>
> >> > > > >>
> >> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
> >> > > > >> mike.tutkowski@solidfire.com> wrote:
> >> > > > >>
> >> > > > >>> agent.log looks OK to me:
> >> > > > >>>
> >> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
> >> (main:null)
> >> > > > Agent
> >> > > > >>> started
> >> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
> >> (main:null)
> >> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
> >> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
> >> (main:null)
> >> > > > >>> agent.properties found at
> /etc/cloudstack/agent/agent.properties
> >> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
> >> (main:null)
> >> > > > >>> Defaulting to using properties file for storage
> >> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
> >> (main:null)
> >> > > > >>> Defaulting to the constant time backoff algorithm
> >> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils]
> (main:null)
> >> > > log4j
> >> > > > >>> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> >> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null)
> id
> >> > is 3
> >> > > > >>> 2013-09-20 19:35:39,197 INFO
> >> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
> >> > > > >>> VirtualRoutingResource _scriptDir to use:
> >> scripts/network/domr/kvm
> >> > > > >>>
> >> > > > >>> However, I wasn't aware that setup.log was important. This
> seems
> >> to
> >> > > be
> >> > > > a
> >> > > > >>> problem, but I'm not sure what it might indicate:
> >> > > > >>>
> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
> status
> >> > > > >>> DEBUG:root:Failed to execute: * could not access PID file for
> >> > > > >>> cloudstack-agent
> >> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent
> start
> >> > > > >>>
> >> > > > >>>
> >> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
> >> > > shadowsor@gmail.com
> >> > > > >wrote:
> >> > > > >>>
> >> > > > >>>> Sorry, I saw that in the log, I thought it was the agent log
> for
> >> > > some
> >> > > > >>>> reason. Is the agent started? That might be the place to
> look.
> >> > There
> >> > > > is
> >> > > > >>>> an
> >> > > > >>>> agent log for the agent and one for the setup when it adds
> the
> >> > host,
> >> > > > >>>> both
> >> > > > >>>> in /var/log
> >> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
> >> > > > >>>> mike.tutkowski@solidfire.com>
> >> > > > >>>> wrote:
> >> > > > >>>>
> >> > > > >>>> > Is it saying that the MS is at the IP address or the KVM
> host?
> >> > > > >>>> >
> >> > > > >>>> > The KVM host is at 192.168.233.10.
> >> > > > >>>> > The MS host is at 192.168.233.1.
> >> > > > >>>> >
> >> > > > >>>> > I see this for my host Global Settings parameter:
> >> > > > >>>> > hostThe ip address of management server192.168.233.1
> >> > > > >>>> >
> >> > > > >>>> >
> >> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
> >> host=192.168.233.1
> >> > > > value.
> >> > > > >>>> >
> >> > > > >>>> >
> >> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
> >> > > > >>>> shadowsor@gmail.com
> >> > > > >>>> > >wrote:
> >> > > > >>>> >
> >> > > > >>>> > > The log says your mgmt server is 192.168.233.10? But you
> >> tried
> >> > > to
> >> > > > >>>> telnet
> >> > > > >>>> > to
> >> > > > >>>> > > 192.168.233.1? It might be enough to change that in
> >> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you may want
> to
> >> > edit
> >> > > > the
> >> > > > >>>> > config
> >> > > > >>>> > > as well to tell it the real ms IP.
> >> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
> >> > > > >>>> mike.tutkowski@solidfire.com
> >> > > > >>>> > >
> >> > > > >>>> > > wrote:
> >> > > > >>>> > >
> >> > > > >>>> > > > Here's what my /etc/network/interfaces file looks
> like, if
> >> > > that
> >> > > > >>>> is of
> >> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT network
> >> > VMware
> >> > > > >>>> Fusion
> >> > > > >>>> > set
> >> > > > >>>> > > > up):
> >> > > > >>>> > > >
> >> > > > >>>> > > > auto lo
> >> > > > >>>> > > > iface lo inet loopback
> >> > > > >>>> > > >
> >> > > > >>>> > > > auto eth0
> >> > > > >>>> > > > iface eth0 inet manual
> >> > > > >>>> > > >
> >> > > > >>>> > > > auto cloudbr0
> >> > > > >>>> > > > iface cloudbr0 inet static
> >> > > > >>>> > > >     address 192.168.233.10
> >> > > > >>>> > > >     netmask 255.255.255.0
> >> > > > >>>> > > >     network 192.168.233.0
> >> > > > >>>> > > >     broadcast 192.168.233.255
> >> > > > >>>> > > >     dns-nameservers 8.8.8.8
> >> > > > >>>> > > >     bridge_ports eth0
> >> > > > >>>> > > >     bridge_fd 5
> >> > > > >>>> > > >     bridge_stp off
> >> > > > >>>> > > >     bridge_maxwait 1
> >> > > > >>>> > > >     post-up route add default gw 192.168.233.2 metric 1
> >> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
> >> > > > >>>> > > >
> >> > > > >>>> > > >
> >> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> >> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
> >> > > > >>>> > > >
> >> > > > >>>> > > > > You appear to be correct. This is from the MS log
> >> (below).
> >> > > > >>>> Discovery
> >> > > > >>>> > > > timed
> >> > > > >>>> > > > > out.
> >> > > > >>>> > > > >
> >> > > > >>>> > > > > I'm not sure why this would be. My network settings
> >> > > shouldn't
> >> > > > >>>> have
> >> > > > >>>> > > > changed
> >> > > > >>>> > > > > since the last time I tried this.
> >> > > > >>>> > > > >
> >> > > > >>>> > > > > I am able to ping the KVM host from the MS host and
> vice
> >> > > > versa.
> >> > > > >>>> > > > >
> >> > > > >>>> > > > > I'm even able to manually kick off a VM on the KVM
> host
> >> > and
> >> > > > >>>> ping from
> >> > > > >>>> > > it
> >> > > > >>>> > > > > to the MS host.
> >> > > > >>>> > > > >
> >> > > > >>>> > > > > I am using NAT from my Mac OS X host (also running
> the
> >> CS
> >> > > MS)
> >> > > > >>>> to the
> >> > > > >>>> > VM
> >> > > > >>>> > > > > running KVM (VMware Fusion).
> >> > > > >>>> > > > >
> >> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> >> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout,
> to
> >> > wait
> >> > > > for
> >> > > > >>>> the
> >> > > > >>>> > > host
> >> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
> >> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
> >>  [c.c.r.ResourceManagerImpl]
> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to
> >> find
> >> > > the
> >> > > > >>>> server
> >> > > > >>>> > > > > resources at http://192.168.233.10
> >> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
> >> >  [c.c.u.e.CSExceptionErrorCode]
> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not
> >> find
> >> > > > >>>> exception:
> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException in error code
> >> list
> >> > > for
> >> > > > >>>> > > exceptions
> >> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN
>  [o.a.c.a.c.a.h.AddHostCmd]
> >> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
> >> > > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable to add
> >> the
> >> > > host
> >> > > > >>>> > > > > at
> >> > > > >>>> > > > >
> >> > > > >>>> > > >
> >> > > > >>>> > >
> >> > > > >>>> >
> >> > > > >>>>
> >> > > >
> >> > >
> >> >
> >>
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> >> > > > >>>> > > > >
> >> > > > >>>> > > > > I do seem to be able to telnet in from my KVM host to
> >> the
> >> > MS
> >> > > > >>>> host's
> >> > > > >>>> > > 8250
> >> > > > >>>> > > > > port:
> >> > > > >>>> > > > >
> >> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> >> > > > >>>> > > > > Trying 192.168.233.1...
> >> > > > >>>> > > > > Connected to 192.168.233.1.
> >> > > > >>>> > > > > Escape character is '^]'.
> >> > > > >>>> > > > >
> >> > > > >>>> > > >
> >> > > > >>>> > > >
> >> > > > >>>> > > >
> >> > > > >>>> > > > --
> >> > > > >>>> > > > *Mike Tutkowski*
> >> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> > > > >>>> > > > e: mike.tutkowski@solidfire.com
> >> > > > >>>> > > > o: 303.746.7302
> >> > > > >>>> > > > Advancing the way the world uses the
> >> > > > >>>> > > > cloud<
> http://solidfire.com/solution/overview/?video=play>
> >> > > > >>>> > > > *™*
> >> > > > >>>> > > >
> >> > > > >>>> > >
> >> > > > >>>> >
> >> > > > >>>> >
> >> > > > >>>> >
> >> > > > >>>> > --
> >> > > > >>>> > *Mike Tutkowski*
> >> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > > > >>>> > e: mike.tutkowski@solidfire.com
> >> > > > >>>> > o: 303.746.7302
> >> > > > >>>> > Advancing the way the world uses the
> >> > > > >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > > > >>>> > *™*
> >> > > > >>>> >
> >> > > > >>>>
> >> > > > >>>
> >> > > > >>>
> >> > > > >>>
> >> > > > >>> --
> >> > > > >>> *Mike Tutkowski*
> >> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
> >> > > > >>> e: mike.tutkowski@solidfire.com
> >> > > > >>> o: 303.746.7302
> >> > > > >>> Advancing the way the world uses the cloud<
> >> > > > http://solidfire.com/solution/overview/?video=play>
> >> > > > >>> *™*
> >> > > > >>>
> >> > > > >>
> >> > > > >>
> >> > > > >>
> >> > > > >> --
> >> > > > >> *Mike Tutkowski*
> >> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
> >> > > > >> e: mike.tutkowski@solidfire.com
> >> > > > >> o: 303.746.7302
> >> > > > >> Advancing the way the world uses the cloud<
> >> > > > http://solidfire.com/solution/overview/?video=play>
> >> > > > >> *™*
> >> > > > >>
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > --
> >> > > > > *Mike Tutkowski*
> >> > > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> > > > > e: mike.tutkowski@solidfire.com
> >> > > > > o: 303.746.7302
> >> > > > > Advancing the way the world uses the cloud<
> >> > > > http://solidfire.com/solution/overview/?video=play>
> >> > > > > *™*
> >> > > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > > --
> >> > > > *Mike Tutkowski*
> >> > > > *Senior CloudStack Developer, SolidFire Inc.*
> >> > > > e: mike.tutkowski@solidfire.com
> >> > > > o: 303.746.7302
> >> > > > Advancing the way the world uses the
> >> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > > > *™*
> >> > > >
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > *Mike Tutkowski*
> >> > *Senior CloudStack Developer, SolidFire Inc.*
> >> > e: mike.tutkowski@solidfire.com
> >> > o: 303.746.7302
> >> > Advancing the way the world uses the
> >> > cloud<http://solidfire.com/solution/overview/?video=play>
> >> > *™*
> >> >
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
I added a comment to your diff. In general I think it looks good,
though I obviously can't vouch for whether or not it will work. One
thing I do have reservations about is the adaptor/pool naming. If you
think the code is generic enough that it will work for anyone who writes
an iSCSI LUN-per-volume plugin, then it's OK; but if there's anything
about it that's specific to YOUR iSCSI target or how it likes to be
treated, then I'd say they should be named something less generic than
iScsiAdmStorage.

On Sat, Sep 21, 2013 at 7:23 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> Great - thanks!
>
> Just to give you an overview of what my code does (for when you get a
> chance to review it):
>
> SolidFireHostListener is registered in SolidfirePrimaryDataStoreProvider.
> Its hostConnect method is invoked when a host connects with the CS MS. If
> the host is running KVM, the listener sends a ModifyStoragePoolCommand to
> the host. This logic was based off of DefaultHostListener.
>
> The handling of ModifyStoragePoolCommand is unchanged. It invokes
> createStoragePool on the KVMStoragePoolManager. The KVMStoragePoolManager
> asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor (which was
> registered in the constructor for KVMStoragePoolManager under the key of
> StoragePoolType.Iscsi.toString()).
>
> iScsiAdmStorageAdaptor.createStoragePool just makes an instance of
> iScsiAdmStoragePool, adds it to a map, and returns the pointer to the
> iScsiAdmStoragePool object. The key of the map is the UUID of the storage
> pool.
>
> When a volume is attached, createPhysicalDisk is invoked for managed
> storage rather than getPhysicalDisk. createPhysicalDisk uses iscsiadm to
> establish the iSCSI connection to the volume on the SAN and a
> KVMPhysicalDisk is returned to be used in the attach logic that follows.
>
> When a volume is detached, getPhysicalDisk is invoked with the IQN of the
> volume if the storage pool in question is managed storage. Otherwise, the
> normal vol.getPath() is used. iScsiAdmStorageAdaptor.getPhysicalDisk just
> returns a new instance of KVMPhysicalDisk to be used in the detach logic.
>
> Once the volume has been detached, iScsiAdmStoragePool.deletePhysicalDisk
> is invoked if the storage pool is managed. deletePhysicalDisk removes the
> iSCSI connection to the volume using iscsiadm.
>
>
> On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Its the log4j properties file in /etc/cloudstack/agent change all INFO to
>> DEBUG.  I imagine the agent just isn't starting, you can tail the log when
>> you try to start the service, or maybe it will spit something out into one
>> of the other files in /var/log/cloudstack/agent
>> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>
>> > This is how I've been trying to query for the status of the service (I
>> > assume it could be started this way, as well, by changing "status" to
>> > "start" or "restart"?):
>> >
>> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>> > cloudstack-agent status
>> >
>> > I get this back:
>> >
>> > Failed to execute: * could not access PID file for cloudstack-agent
>> >
>> > I've made a bunch of code changes recently, though, so I think I'm going
>> to
>> > rebuild and redeploy everything.
>> >
>> > The debug info sounds helpful. Where can I set enable.debug?
>> >
>> > Thanks, Marcus!
>> >
>> >
>> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <shadowsor@gmail.com
>> > >wrote:
>> >
>> > > OK, will check it out in the next few days. As mentioned, you can set
>> up
>> > > your Ubuntu vm as the management server as well if all else fails.  If
>> > you
>> > > can get to the mgmt server on 8250 from the KVM host, then you need to
>> > > enable.debug on the agent. It won't run without complaining loudly if
>> it
>> > > can't get to the mgmt server, and I didn't see that in your agent log,
>> so
>> > > perhaps its not running. I assume you know how to stop/start the agent
>> on
>> > > KVM via 'service cloudstack-agent'.
>> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
>> mike.tutkowski@solidfire.com>
>> > > wrote:
>> > >
>> > > > Hey Marcus,
>> > > >
>> > > > I haven't yet been able to test my new code, but I thought you would
>> > be a
>> > > > good person to ask to review it:
>> > > >
>> > > >
>> > > >
>> > >
>> >
>> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>> > > >
>> > > > All it is supposed to do is attach and detach a data disk (that has
>> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk happens to
>> > be
>> > > > from SolidFire-backed storage - where we have a 1:1 mapping between a
>> > > > CloudStack volume and a data disk.
>> > > >
>> > > > There is no support for hypervisor snapshots or stuff like that
>> > (likely a
>> > > > future release)...just attaching and detaching a data disk in 4.3.
>> > > >
>> > > > Thanks!
>> > > >
>> > > >
>> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
>> > > > mike.tutkowski@solidfire.com> wrote:
>> > > >
>> > > > > When I re-deployed the DEBs, I didn't remove cloudstack-agent
>> first.
>> > > > Would
>> > > > > that be a problem? I just did a sudo apt-get install
>> > cloudstack-agent.
>> > > > >
>> > > > >
>> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
>> > > > > mike.tutkowski@solidfire.com> wrote:
>> > > > >
>> > > > >> I get the same error running the command manually:
>> > > > >>
>> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>> > > > >> cloudstack-agent status
>> > > > >>  * could not access PID file for cloudstack-agent
>> > > > >>
>> > > > >>
>> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>> > > > >> mike.tutkowski@solidfire.com> wrote:
>> > > > >>
>> > > > >>> agent.log looks OK to me:
>> > > > >>>
>> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
>> (main:null)
>> > > > Agent
>> > > > >>> started
>> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
>> (main:null)
>> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
>> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
>> (main:null)
>> > > > >>> agent.properties found at /etc/cloudstack/agent/agent.properties
>> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
>> (main:null)
>> > > > >>> Defaulting to using properties file for storage
>> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
>> (main:null)
>> > > > >>> Defaulting to the constant time backoff algorithm
>> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils] (main:null)
>> > > log4j
>> > > > >>> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
>> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null) id
>> > is 3
>> > > > >>> 2013-09-20 19:35:39,197 INFO
>> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
>> > > > >>> VirtualRoutingResource _scriptDir to use:
>> scripts/network/domr/kvm
>> > > > >>>
>> > > > >>> However, I wasn't aware that setup.log was important. This seems
>> to
>> > > be
>> > > > a
>> > > > >>> problem, but I'm not sure what it might indicate:
>> > > > >>>
>> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
>> > > > >>> DEBUG:root:Failed to execute: * could not access PID file for
>> > > > >>> cloudstack-agent
>> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
>> > > > >>>
>> > > > >>>
>> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
>> > > shadowsor@gmail.com
>> > > > >wrote:
>> > > > >>>
>> > > > >>>> Sorry, I saw that in the log, I thought it was the agent log for
>> > > some
>> > > > >>>> reason. Is the agent started? That might be the place to look.
>> > There
>> > > > is
>> > > > >>>> an
>> > > > >>>> agent log for the agent and one for the setup when it adds the
>> > host,
>> > > > >>>> both
>> > > > >>>> in /var/log
>> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>> > > > >>>> mike.tutkowski@solidfire.com>
>> > > > >>>> wrote:
>> > > > >>>>
>> > > > >>>> > Is it saying that the MS is at the IP address or the KVM host?
>> > > > >>>> >
>> > > > >>>> > The KVM host is at 192.168.233.10.
>> > > > >>>> > The MS host is at 192.168.233.1.
>> > > > >>>> >
>> > > > >>>> > I see this for my host Global Settings parameter:
>> > > > >>>> > hostThe ip address of management server192.168.233.1
>> > > > >>>> >
>> > > > >>>> >
>> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
>> host=192.168.233.1
>> > > > value.
>> > > > >>>> >
>> > > > >>>> >
>> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
>> > > > >>>> shadowsor@gmail.com
>> > > > >>>> > >wrote:
>> > > > >>>> >
>> > > > >>>> > > The log says your mgmt server is 192.168.233.10? But you
>> tried
>> > > to
>> > > > >>>> telnet
>> > > > >>>> > to
>> > > > >>>> > > 192.168.233.1? It might be enough to change that in
>> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you may want to
>> > edit
>> > > > the
>> > > > >>>> > config
>> > > > >>>> > > as well to tell it the real ms IP.
>> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>> > > > >>>> mike.tutkowski@solidfire.com
>> > > > >>>> > >
>> > > > >>>> > > wrote:
>> > > > >>>> > >
>> > > > >>>> > > > Here's what my /etc/network/interfaces file looks like, if
>> > > that
>> > > > >>>> is of
>> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT network
>> > VMware
>> > > > >>>> Fusion
>> > > > >>>> > set
>> > > > >>>> > > > up):
>> > > > >>>> > > >
>> > > > >>>> > > > auto lo
>> > > > >>>> > > > iface lo inet loopback
>> > > > >>>> > > >
>> > > > >>>> > > > auto eth0
>> > > > >>>> > > > iface eth0 inet manual
>> > > > >>>> > > >
>> > > > >>>> > > > auto cloudbr0
>> > > > >>>> > > > iface cloudbr0 inet static
>> > > > >>>> > > >     address 192.168.233.10
>> > > > >>>> > > >     netmask 255.255.255.0
>> > > > >>>> > > >     network 192.168.233.0
>> > > > >>>> > > >     broadcast 192.168.233.255
>> > > > >>>> > > >     dns-nameservers 8.8.8.8
>> > > > >>>> > > >     bridge_ports eth0
>> > > > >>>> > > >     bridge_fd 5
>> > > > >>>> > > >     bridge_stp off
>> > > > >>>> > > >     bridge_maxwait 1
>> > > > >>>> > > >     post-up route add default gw 192.168.233.2 metric 1
>> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
>> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
>> > > > >>>> > > >
>> > > > >>>> > > > > You appear to be correct. This is from the MS log
>> (below).
>> > > > >>>> Discovery
>> > > > >>>> > > > timed
>> > > > >>>> > > > > out.
>> > > > >>>> > > > >
>> > > > >>>> > > > > I'm not sure why this would be. My network settings
>> > > shouldn't
>> > > > >>>> have
>> > > > >>>> > > > changed
>> > > > >>>> > > > > since the last time I tried this.
>> > > > >>>> > > > >
>> > > > >>>> > > > > I am able to ping the KVM host from the MS host and vice
>> > > > versa.
>> > > > >>>> > > > >
>> > > > >>>> > > > > I'm even able to manually kick off a VM on the KVM host
>> > and
>> > > > >>>> ping from
>> > > > >>>> > > it
>> > > > >>>> > > > > to the MS host.
>> > > > >>>> > > > >
>> > > > >>>> > > > > I am using NAT from my Mac OS X host (also running the
>> CS
>> > > MS)
>> > > > >>>> to the
>> > > > >>>> > VM
>> > > > >>>> > > > > running KVM (VMware Fusion).
>> > > > >>>> > > > >
>> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to
>> > wait
>> > > > for
>> > > > >>>> the
>> > > > >>>> > > host
>> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
>> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>>  [c.c.r.ResourceManagerImpl]
>> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to
>> find
>> > > the
>> > > > >>>> server
>> > > > >>>> > > > > resources at http://192.168.233.10
>> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>> >  [c.c.u.e.CSExceptionErrorCode]
>> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not
>> find
>> > > > >>>> exception:
>> > > > >>>> > > > > com.cloud.exception.DiscoveryException in error code
>> list
>> > > for
>> > > > >>>> > > exceptions
>> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
>> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
>> > > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable to add
>> the
>> > > host
>> > > > >>>> > > > > at
>> > > > >>>> > > > >
>> > > > >>>> > > >
>> > > > >>>> > >
>> > > > >>>> >
>> > > > >>>>
>> > > >
>> > >
>> >
>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>> > > > >>>> > > > >
>> > > > >>>> > > > > I do seem to be able to telnet in from my KVM host to
>> the
>> > MS
>> > > > >>>> host's
>> > > > >>>> > > 8250
>> > > > >>>> > > > > port:
>> > > > >>>> > > > >
>> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>> > > > >>>> > > > > Trying 192.168.233.1...
>> > > > >>>> > > > > Connected to 192.168.233.1.
>> > > > >>>> > > > > Escape character is '^]'.
>> > > > >>>> > > > >
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > >
>> > > > >>>> > > > --
>> > > > >>>> > > > *Mike Tutkowski*
>> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>> > > > >>>> > > > e: mike.tutkowski@solidfire.com
>> > > > >>>> > > > o: 303.746.7302
>> > > > >>>> > > > Advancing the way the world uses the
>> > > > >>>> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>> > > > >>>> > > > *™*
>> > > > >>>> > > >
>> > > > >>>> > >
>> > > > >>>> >
>> > > > >>>> >
>> > > > >>>> >
>> > > > >>>> > --
>> > > > >>>> > *Mike Tutkowski*
>> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > > > >>>> > e: mike.tutkowski@solidfire.com
>> > > > >>>> > o: 303.746.7302
>> > > > >>>> > Advancing the way the world uses the
>> > > > >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > > > >>>> > *™*
>> > > > >>>> >
>> > > > >>>>
>> > > > >>>
>> > > > >>>
>> > > > >>>
>> > > > >>> --
>> > > > >>> *Mike Tutkowski*
>> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>> > > > >>> e: mike.tutkowski@solidfire.com
>> > > > >>> o: 303.746.7302
>> > > > >>> Advancing the way the world uses the cloud<
>> > > > http://solidfire.com/solution/overview/?video=play>
>> > > > >>> *™*
>> > > > >>>
>> > > > >>
>> > > > >>
>> > > > >>
>> > > > >> --
>> > > > >> *Mike Tutkowski*
>> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
>> > > > >> e: mike.tutkowski@solidfire.com
>> > > > >> o: 303.746.7302
>> > > > >> Advancing the way the world uses the cloud<
>> > > > http://solidfire.com/solution/overview/?video=play>
>> > > > >> *™*
>> > > > >>
>> > > > >
>> > > > >
>> > > > >
>> > > > > --
>> > > > > *Mike Tutkowski*
>> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>> > > > > e: mike.tutkowski@solidfire.com
>> > > > > o: 303.746.7302
>> > > > > Advancing the way the world uses the cloud<
>> > > > http://solidfire.com/solution/overview/?video=play>
>> > > > > *™*
>> > > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > *Mike Tutkowski*
>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>> > > > e: mike.tutkowski@solidfire.com
>> > > > o: 303.746.7302
>> > > > Advancing the way the world uses the
>> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>> > > > *™*
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>> >
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Great - thanks!

Just to give you an overview of what my code does (for when you get a
chance to review it):

SolidFireHostListener is registered in SolidfirePrimaryDataStoreProvider.
Its hostConnect method is invoked when a host connects to the CloudStack
management server. If the host is running KVM, the listener sends a
ModifyStoragePoolCommand to the host. This logic was based on
DefaultHostListener.
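
In other words, the listener ends up doing roughly this (a trimmed sketch
modeled on DefaultHostListener; the injected managers and all error
handling are omitted for brevity):

// Sketch of SolidFireHostListener.hostConnect - assumes dataStoreMgr and
// agentMgr are the usual injected DataStoreManager and AgentManager.
public boolean hostConnect(long hostId, long poolId) {
    StoragePool pool =
            (StoragePool) dataStoreMgr.getDataStore(poolId, DataStoreRole.Primary);

    ModifyStoragePoolCommand cmd = new ModifyStoragePoolCommand(true, pool);
    Answer answer = agentMgr.easySend(hostId, cmd);

    return answer != null && answer.getResult();
}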

The handling of ModifyStoragePoolCommand is unchanged. It invokes
createStoragePool on the KVMStoragePoolManager. The KVMStoragePoolManager
asks for an adaptor and finds my new one: iScsiAdmStorageAdaptor (which was
registered in the constructor for KVMStoragePoolManager under the key of
StoragePoolType.Iscsi.toString()).
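
The registration itself is just one line in the KVMStoragePoolManager
constructor (sketch; "adaptorMap" stands in for whatever the existing
StoragePoolType-to-StorageAdaptor map is actually named there):

// Sketch: register the new adaptor alongside the existing ones.
adaptorMap.put(StoragePoolType.Iscsi.toString(), new iScsiAdmStorageAdaptor());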

iScsiAdmStorageAdaptor.createStoragePool just creates an instance of
iScsiAdmStoragePool, adds it to a map keyed by the UUID of the storage
pool, and returns a reference to that iScsiAdmStoragePool object.
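
That method is intentionally minimal (sketch; "poolsByUuid" is just a
stand-in name for the map described above, and the parameter list and
constructor arguments are trimmed for illustration):

// Sketch of iScsiAdmStorageAdaptor.createStoragePool
public KVMStoragePool createStoragePool(String uuid, String host, int port,
        StoragePoolType type) {
    iScsiAdmStoragePool storagePool =
            new iScsiAdmStoragePool(uuid, host, port, type, this);

    poolsByUuid.put(uuid, storagePool);

    return storagePool;
}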

When a volume is attached, createPhysicalDisk is invoked for managed
storage rather than getPhysicalDisk. createPhysicalDisk uses iscsiadm to
establish the iSCSI connection to the volume on the SAN and returns a
KVMPhysicalDisk that is used in the attach logic that follows.
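
Concretely, once iscsiadm has logged in, the LUN shows up under
/dev/disk/by-path, so the returned KVMPhysicalDisk can simply wrap that
device node (sketch; the lun-0 suffix and the path layout are assumptions
about how the target presents the volume):

// Sketch of the attach-side lookup inside createPhysicalDisk; "portal" and
// "iqn" come from the volume/pool information passed in.
String device = "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";

KVMPhysicalDisk disk = new KVMPhysicalDisk(device, iqn, pool);
disk.setFormat(PhysicalDiskFormat.RAW);

return disk;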

When a volume is detached, getPhysicalDisk is invoked with the IQN of the
volume if the storage pool in question is managed storage. Otherwise, the
normal vol.getPath() is used. iScsiAdmStorageAdaptor.getPhysicalDisk just
returns a new instance of KVMPhysicalDisk to be used in the detach logic.

Once the volume has been detached, iScsiAdmStoragePool.deletePhysicalDisk
is invoked if the storage pool is managed. deletePhysicalDisk removes the
iSCSI connection to the volume using iscsiadm.
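
The teardown is just the reverse (sketch; plain iscsiadm invocations with
error handling omitted):

// Sketch of the cleanup performed once the volume has been detached.
new ProcessBuilder("iscsiadm", "-m", "node", "-T", iqn, "-p", portal,
        "--logout").inheritIO().start().waitFor();
new ProcessBuilder("iscsiadm", "-m", "node", "-T", iqn, "-p", portal,
        "-o", "delete").inheritIO().start().waitFor();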


On Sat, Sep 21, 2013 at 5:46 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Its the log4j properties file in /etc/cloudstack/agent change all INFO to
> DEBUG.  I imagine the agent just isn't starting, you can tail the log when
> you try to start the service, or maybe it will spit something out into one
> of the other files in /var/log/cloudstack/agent
> On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > This is how I've been trying to query for the status of the service (I
> > assume it could be started this way, as well, by changing "status" to
> > "start" or "restart"?):
> >
> > mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> > cloudstack-agent status
> >
> > I get this back:
> >
> > Failed to execute: * could not access PID file for cloudstack-agent
> >
> > I've made a bunch of code changes recently, though, so I think I'm going
> to
> > rebuild and redeploy everything.
> >
> > The debug info sounds helpful. Where can I set enable.debug?
> >
> > Thanks, Marcus!
> >
> >
> > On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> >
> > > OK, will check it out in the next few days. As mentioned, you can set
> up
> > > your Ubuntu vm as the management server as well if all else fails.  If
> > you
> > > can get to the mgmt server on 8250 from the KVM host, then you need to
> > > enable.debug on the agent. It won't run without complaining loudly if
> it
> > > can't get to the mgmt server, and I didn't see that in your agent log,
> so
> > > perhaps its not running. I assume you know how to stop/start the agent
> on
> > > KVM via 'service cloudstack-agent'.
> > > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com>
> > > wrote:
> > >
> > > > Hey Marcus,
> > > >
> > > > I haven't yet been able to test my new code, but I thought you would
> > be a
> > > > good person to ask to review it:
> > > >
> > > >
> > > >
> > >
> >
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> > > >
> > > > All it is supposed to do is attach and detach a data disk (that has
> > > > guaranteed IOPS) with KVM as the hypervisor. The data disk happens to
> > be
> > > > from SolidFire-backed storage - where we have a 1:1 mapping between a
> > > > CloudStack volume and a data disk.
> > > >
> > > > There is no support for hypervisor snapshots or stuff like that
> > (likely a
> > > > future release)...just attaching and detaching a data disk in 4.3.
> > > >
> > > > Thanks!
> > > >
> > > >
> > > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
> > > > mike.tutkowski@solidfire.com> wrote:
> > > >
> > > > > When I re-deployed the DEBs, I didn't remove cloudstack-agent
> first.
> > > > Would
> > > > > that be a problem? I just did a sudo apt-get install
> > cloudstack-agent.
> > > > >
> > > > >
> > > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
> > > > > mike.tutkowski@solidfire.com> wrote:
> > > > >
> > > > >> I get the same error running the command manually:
> > > > >>
> > > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> > > > >> cloudstack-agent status
> > > > >>  * could not access PID file for cloudstack-agent
> > > > >>
> > > > >>
> > > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
> > > > >> mike.tutkowski@solidfire.com> wrote:
> > > > >>
> > > > >>> agent.log looks OK to me:
> > > > >>>
> > > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell]
> (main:null)
> > > > Agent
> > > > >>> started
> > > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell]
> (main:null)
> > > > >>> Implementation Version is 4.3.0-SNAPSHOT
> > > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell]
> (main:null)
> > > > >>> agent.properties found at /etc/cloudstack/agent/agent.properties
> > > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell]
> (main:null)
> > > > >>> Defaulting to using properties file for storage
> > > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell]
> (main:null)
> > > > >>> Defaulting to the constant time backoff algorithm
> > > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils] (main:null)
> > > log4j
> > > > >>> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> > > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null) id
> > is 3
> > > > >>> 2013-09-20 19:35:39,197 INFO
> > > > >>>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
> > > > >>> VirtualRoutingResource _scriptDir to use:
> scripts/network/domr/kvm
> > > > >>>
> > > > >>> However, I wasn't aware that setup.log was important. This seems
> to
> > > be
> > > > a
> > > > >>> problem, but I'm not sure what it might indicate:
> > > > >>>
> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
> > > > >>> DEBUG:root:Failed to execute: * could not access PID file for
> > > > >>> cloudstack-agent
> > > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
> > > > >>>
> > > > >>>
> > > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
> > > shadowsor@gmail.com
> > > > >wrote:
> > > > >>>
> > > > >>>> Sorry, I saw that in the log, I thought it was the agent log for
> > > some
> > > > >>>> reason. Is the agent started? That might be the place to look.
> > There
> > > > is
> > > > >>>> an
> > > > >>>> agent log for the agent and one for the setup when it adds the
> > host,
> > > > >>>> both
> > > > >>>> in /var/log
> > > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
> > > > >>>> mike.tutkowski@solidfire.com>
> > > > >>>> wrote:
> > > > >>>>
> > > > >>>> > Is it saying that the MS is at the IP address or the KVM host?
> > > > >>>> >
> > > > >>>> > The KVM host is at 192.168.233.10.
> > > > >>>> > The MS host is at 192.168.233.1.
> > > > >>>> >
> > > > >>>> > I see this for my host Global Settings parameter:
> > > > >>>> > host | The ip address of management server | 192.168.233.1
> > > > >>>> >
> > > > >>>> >
> > > > >>>> > /etc/cloudstack/agent/agent.properties has a
> host=192.168.233.1
> > > > value.
> > > > >>>> >
> > > > >>>> >
> > > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
> > > > >>>> shadowsor@gmail.com
> > > > >>>> > >wrote:
> > > > >>>> >
> > > > >>>> > > The log says your mgmt server is 192.168.233.10? But you
> tried
> > > to
> > > > >>>> telnet
> > > > >>>> > to
> > > > >>>> > > 192.168.233.1? It might be enough to change that in
> > > > >>>> > > /etc/cloudstack/agent/agent.properties, but you may want to
> > edit
> > > > the
> > > > >>>> > config
> > > > >>>> > > as well to tell it the real ms IP.
> > > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
> > > > >>>> mike.tutkowski@solidfire.com
> > > > >>>> > >
> > > > >>>> > > wrote:
> > > > >>>> > >
> > > > >>>> > > > Here's what my /etc/network/interfaces file looks like, if
> > > that
> > > > >>>> is of
> > > > >>>> > > > interest (the 192.168.233.0 network is the NAT network
> > VMware
> > > > >>>> Fusion
> > > > >>>> > set
> > > > >>>> > > > up):
> > > > >>>> > > >
> > > > >>>> > > > auto lo
> > > > >>>> > > > iface lo inet loopback
> > > > >>>> > > >
> > > > >>>> > > > auto eth0
> > > > >>>> > > > iface eth0 inet manual
> > > > >>>> > > >
> > > > >>>> > > > auto cloudbr0
> > > > >>>> > > > iface cloudbr0 inet static
> > > > >>>> > > >     address 192.168.233.10
> > > > >>>> > > >     netmask 255.255.255.0
> > > > >>>> > > >     network 192.168.233.0
> > > > >>>> > > >     broadcast 192.168.233.255
> > > > >>>> > > >     dns-nameservers 8.8.8.8
> > > > >>>> > > >     bridge_ports eth0
> > > > >>>> > > >     bridge_fd 5
> > > > >>>> > > >     bridge_stp off
> > > > >>>> > > >     bridge_maxwait 1
> > > > >>>> > > >     post-up route add default gw 192.168.233.2 metric 1
> > > > >>>> > > >     pre-down route del default gw 192.168.233.2
> > > > >>>> > > >
> > > > >>>> > > >
> > > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> > > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
> > > > >>>> > > >
> > > > >>>> > > > > You appear to be correct. This is from the MS log
> (below).
> > > > >>>> Discovery
> > > > >>>> > > > timed
> > > > >>>> > > > > out.
> > > > >>>> > > > >
> > > > >>>> > > > > I'm not sure why this would be. My network settings
> > > shouldn't
> > > > >>>> have
> > > > >>>> > > > changed
> > > > >>>> > > > > since the last time I tried this.
> > > > >>>> > > > >
> > > > >>>> > > > > I am able to ping the KVM host from the MS host and vice
> > > > versa.
> > > > >>>> > > > >
> > > > >>>> > > > > I'm even able to manually kick off a VM on the KVM host
> > and
> > > > >>>> ping from
> > > > >>>> > > it
> > > > >>>> > > > > to the MS host.
> > > > >>>> > > > >
> > > > >>>> > > > > I am using NAT from my Mac OS X host (also running the
> CS
> > > MS)
> > > > >>>> to the
> > > > >>>> > VM
> > > > >>>> > > > > running KVM (VMware Fusion).
> > > > >>>> > > > >
> > > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> > > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to
> > wait
> > > > for
> > > > >>>> the
> > > > >>>> > > host
> > > > >>>> > > > > connecting to mgt svr, assuming it is failed
> > > > >>>> > > > > 2013-09-20 19:40:40,144 WARN
>  [c.c.r.ResourceManagerImpl]
> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to
> find
> > > the
> > > > >>>> server
> > > > >>>> > > > > resources at http://192.168.233.10
> > > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
> >  [c.c.u.e.CSExceptionErrorCode]
> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not
> find
> > > > >>>> exception:
> > > > >>>> > > > > com.cloud.exception.DiscoveryException in error code
> list
> > > for
> > > > >>>> > > exceptions
> > > > >>>> > > > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
> > > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
> > > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable to add
> the
> > > host
> > > > >>>> > > > > at
> > > > >>>> > > > >
> > > > >>>> > > >
> > > > >>>> > >
> > > > >>>> >
> > > > >>>>
> > > >
> > >
> >
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> > > > >>>> > > > >
> > > > >>>> > > > > I do seem to be able to telnet in from my KVM host to
> the
> > MS
> > > > >>>> host's
> > > > >>>> > > 8250
> > > > >>>> > > > > port:
> > > > >>>> > > > >
> > > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> > > > >>>> > > > > Trying 192.168.233.1...
> > > > >>>> > > > > Connected to 192.168.233.1.
> > > > >>>> > > > > Escape character is '^]'.
> > > > >>>> > > > >
> > > > >>>> > > >
> > > > >>>> > > >
> > > > >>>> > > >
> > > > >>>> > > > --
> > > > >>>> > > > *Mike Tutkowski*
> > > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > > >>>> > > > e: mike.tutkowski@solidfire.com
> > > > >>>> > > > o: 303.746.7302
> > > > >>>> > > > Advancing the way the world uses the
> > > > >>>> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > >>>> > > > *™*
> > > > >>>> > > >
> > > > >>>> > >
> > > > >>>> >
> > > > >>>> >
> > > > >>>> >
> > > > >>>> > --
> > > > >>>> > *Mike Tutkowski*
> > > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
> > > > >>>> > e: mike.tutkowski@solidfire.com
> > > > >>>> > o: 303.746.7302
> > > > >>>> > Advancing the way the world uses the
> > > > >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > >>>> > *™*
> > > > >>>> >
> > > > >>>>
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>> --
> > > > >>> *Mike Tutkowski*
> > > > >>> *Senior CloudStack Developer, SolidFire Inc.*
> > > > >>> e: mike.tutkowski@solidfire.com
> > > > >>> o: 303.746.7302
> > > > >>> Advancing the way the world uses the cloud<
> > > > http://solidfire.com/solution/overview/?video=play>
> > > > >>> *™*
> > > > >>>
> > > > >>
> > > > >>
> > > > >>
> > > > >> --
> > > > >> *Mike Tutkowski*
> > > > >> *Senior CloudStack Developer, SolidFire Inc.*
> > > > >> e: mike.tutkowski@solidfire.com
> > > > >> o: 303.746.7302
> > > > >> Advancing the way the world uses the cloud<
> > > > http://solidfire.com/solution/overview/?video=play>
> > > > >> *™*
> > > > >>
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > *Mike Tutkowski*
> > > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > > > e: mike.tutkowski@solidfire.com
> > > > > o: 303.746.7302
> > > > > Advancing the way the world uses the cloud<
> > > > http://solidfire.com/solution/overview/?video=play>
> > > > > *™*
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Mike Tutkowski*
> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > > e: mike.tutkowski@solidfire.com
> > > > o: 303.746.7302
> > > > Advancing the way the world uses the
> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > *™*
> > > >
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
It's the log4j properties file in /etc/cloudstack/agent; change all INFO to
DEBUG. I imagine the agent just isn't starting. You can tail the log when
you try to start the service, or maybe it will spit something out into one
of the other files in /var/log/cloudstack/agent.
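
For example, assuming the stock log4j-cloud.xml the agent ships with,
something along these lines bumps everything to DEBUG and then follows the
log while the agent starts (back the file up first):

# keep a copy of the original, then raise every INFO threshold to DEBUG
sudo cp /etc/cloudstack/agent/log4j-cloud.xml /etc/cloudstack/agent/log4j-cloud.xml.bak
sudo sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
# restart the agent and watch what it logs while it tries to connect
sudo service cloudstack-agent restart
tail -f /var/log/cloudstack/agent/agent.log
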
On Sep 21, 2013 5:19 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> This is how I've been trying to query for the status of the service (I
> assume it could be started this way, as well, by changing "status" to
> "start" or "restart"?):
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> cloudstack-agent status
>
> I get this back:
>
> Failed to execute: * could not access PID file for cloudstack-agent
>
> I've made a bunch of code changes recently, though, so I think I'm going to
> rebuild and redeploy everything.
>
> The debug info sounds helpful. Where can I set enable.debug?
>
> Thanks, Marcus!
>
>
> On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
>
> > OK, will check it out in the next few days. As mentioned, you can set up
> > your Ubuntu vm as the management server as well if all else fails.  If
> you
> > can get to the mgmt server on 8250 from the KVM host, then you need to
> > enable.debug on the agent. It won't run without complaining loudly if it
> > can't get to the mgmt server, and I didn't see that in your agent log, so
> > perhaps it's not running. I assume you know how to stop/start the agent on
> > KVM via 'service cloudstack-agent'.
> > On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <mi...@solidfire.com>
> > wrote:
> >
> > > Hey Marcus,
> > >
> > > I haven't yet been able to test my new code, but I thought you would
> be a
> > > good person to ask to review it:
> > >
> > >
> > >
> >
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> > >
> > > All it is supposed to do is attach and detach a data disk (that has
> > > guaranteed IOPS) with KVM as the hypervisor. The data disk happens to
> be
> > > from SolidFire-backed storage - where we have a 1:1 mapping between a
> > > CloudStack volume and a data disk.
> > >
> > > There is no support for hypervisor snapshots or stuff like that
> (likely a
> > > future release)...just attaching and detaching a data disk in 4.3.
> > >
> > > Thanks!
> > >
> > >
> > > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
> > > mike.tutkowski@solidfire.com> wrote:
> > >
> > > > When I re-deployed the DEBs, I didn't remove cloudstack-agent first.
> > > Would
> > > > that be a problem? I just did a sudo apt-get install
> cloudstack-agent.
> > > >
> > > >
> > > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
> > > > mike.tutkowski@solidfire.com> wrote:
> > > >
> > > >> I get the same error running the command manually:
> > > >>
> > > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> > > >> cloudstack-agent status
> > > >>  * could not access PID file for cloudstack-agent
> > > >>
> > > >>
> > > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
> > > >> mike.tutkowski@solidfire.com> wrote:
> > > >>
> > > >>> agent.log looks OK to me:
> > > >>>
> > > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell] (main:null)
> > > Agent
> > > >>> started
> > > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell] (main:null)
> > > >>> Implementation Version is 4.3.0-SNAPSHOT
> > > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell] (main:null)
> > > >>> agent.properties found at /etc/cloudstack/agent/agent.properties
> > > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell] (main:null)
> > > >>> Defaulting to using properties file for storage
> > > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell] (main:null)
> > > >>> Defaulting to the constant time backoff algorithm
> > > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils] (main:null)
> > log4j
> > > >>> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> > > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null) id
> is 3
> > > >>> 2013-09-20 19:35:39,197 INFO
> > > >>>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
> > > >>> VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm
> > > >>>
> > > >>> However, I wasn't aware that setup.log was important. This seems to
> > be
> > > a
> > > >>> problem, but I'm not sure what it might indicate:
> > > >>>
> > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
> > > >>> DEBUG:root:Failed to execute: * could not access PID file for
> > > >>> cloudstack-agent
> > > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
> > > >>>
> > > >>>
> > > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
> > shadowsor@gmail.com
> > > >wrote:
> > > >>>
> > > >>>> Sorry, I saw that in the log, I thought it was the agent log for
> > some
> > > >>>> reason. Is the agent started? That might be the place to look.
> There
> > > is
> > > >>>> an
> > > >>>> agent log for the agent and one for the setup when it adds the
> host,
> > > >>>> both
> > > >>>> in /var/log
> > > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
> > > >>>> mike.tutkowski@solidfire.com>
> > > >>>> wrote:
> > > >>>>
> > > >>>> > Is it saying that the MS is at the IP address or the KVM host?
> > > >>>> >
> > > >>>> > The KVM host is at 192.168.233.10.
> > > >>>> > The MS host is at 192.168.233.1.
> > > >>>> >
> > > >>>> > I see this for my host Global Settings parameter:
> > > >>>> > host | The ip address of management server | 192.168.233.1
> > > >>>> >
> > > >>>> >
> > > >>>> > /etc/cloudstack/agent/agent.properties has a host=192.168.233.1
> > > value.
> > > >>>> >
> > > >>>> >
> > > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
> > > >>>> shadowsor@gmail.com
> > > >>>> > >wrote:
> > > >>>> >
> > > >>>> > > The log says your mgmt server is 192.168.233.10? But you tried
> > to
> > > >>>> telnet
> > > >>>> > to
> > > >>>> > > 192.168.233.1? It might be enough to change that in
> > > >>>> > > /etc/cloudstack/agent/agent.properties, but you may want to
> edit
> > > the
> > > >>>> > config
> > > >>>> > > as well to tell it the real ms IP.
> > > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
> > > >>>> mike.tutkowski@solidfire.com
> > > >>>> > >
> > > >>>> > > wrote:
> > > >>>> > >
> > > >>>> > > > Here's what my /etc/network/interfaces file looks like, if
> > that
> > > >>>> is of
> > > >>>> > > > interest (the 192.168.233.0 network is the NAT network
> VMware
> > > >>>> Fusion
> > > >>>> > set
> > > >>>> > > > up):
> > > >>>> > > >
> > > >>>> > > > auto lo
> > > >>>> > > > iface lo inet loopback
> > > >>>> > > >
> > > >>>> > > > auto eth0
> > > >>>> > > > iface eth0 inet manual
> > > >>>> > > >
> > > >>>> > > > auto cloudbr0
> > > >>>> > > > iface cloudbr0 inet static
> > > >>>> > > >     address 192.168.233.10
> > > >>>> > > >     netmask 255.255.255.0
> > > >>>> > > >     network 192.168.233.0
> > > >>>> > > >     broadcast 192.168.233.255
> > > >>>> > > >     dns-nameservers 8.8.8.8
> > > >>>> > > >     bridge_ports eth0
> > > >>>> > > >     bridge_fd 5
> > > >>>> > > >     bridge_stp off
> > > >>>> > > >     bridge_maxwait 1
> > > >>>> > > >     post-up route add default gw 192.168.233.2 metric 1
> > > >>>> > > >     pre-down route del default gw 192.168.233.2
> > > >>>> > > >
> > > >>>> > > >
> > > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> > > >>>> > > > mike.tutkowski@solidfire.com> wrote:
> > > >>>> > > >
> > > >>>> > > > > You appear to be correct. This is from the MS log (below).
> > > >>>> Discovery
> > > >>>> > > > timed
> > > >>>> > > > > out.
> > > >>>> > > > >
> > > >>>> > > > > I'm not sure why this would be. My network settings
> > shouldn't
> > > >>>> have
> > > >>>> > > > changed
> > > >>>> > > > > since the last time I tried this.
> > > >>>> > > > >
> > > >>>> > > > > I am able to ping the KVM host from the MS host and vice
> > > versa.
> > > >>>> > > > >
> > > >>>> > > > > I'm even able to manually kick off a VM on the KVM host
> and
> > > >>>> ping from
> > > >>>> > > it
> > > >>>> > > > > to the MS host.
> > > >>>> > > > >
> > > >>>> > > > > I am using NAT from my Mac OS X host (also running the CS
> > MS)
> > > >>>> to the
> > > >>>> > VM
> > > >>>> > > > > running KVM (VMware Fusion).
> > > >>>> > > > >
> > > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> > > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to
> wait
> > > for
> > > >>>> the
> > > >>>> > > host
> > > >>>> > > > > connecting to mgt svr, assuming it is failed
> > > >>>> > > > > 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
> > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find
> > the
> > > >>>> server
> > > >>>> > > > > resources at http://192.168.233.10
> > > >>>> > > > > 2013-09-20 19:40:40,145 INFO
>  [c.c.u.e.CSExceptionErrorCode]
> > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find
> > > >>>> exception:
> > > >>>> > > > > com.cloud.exception.DiscoveryException in error code list
> > for
> > > >>>> > > exceptions
> > > >>>> > > > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
> > > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
> > > >>>> > > > > com.cloud.exception.DiscoveryException: Unable to add the
> > host
> > > >>>> > > > > at
> > > >>>> > > > >
> > > >>>> > > >
> > > >>>> > >
> > > >>>> >
> > > >>>>
> > >
> >
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> > > >>>> > > > >
> > > >>>> > > > > I do seem to be able to telnet in from my KVM host to the
> MS
> > > >>>> host's
> > > >>>> > > 8250
> > > >>>> > > > > port:
> > > >>>> > > > >
> > > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> > > >>>> > > > > Trying 192.168.233.1...
> > > >>>> > > > > Connected to 192.168.233.1.
> > > >>>> > > > > Escape character is '^]'.
> > > >>>> > > > >
> > > >>>> > > >
> > > >>>> > > >
> > > >>>> > > >
> > > >>>> > > > --
> > > >>>> > > > *Mike Tutkowski*
> > > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > >>>> > > > e: mike.tutkowski@solidfire.com
> > > >>>> > > > o: 303.746.7302
> > > >>>> > > > Advancing the way the world uses the
> > > >>>> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > >>>> > > > *™*
> > > >>>> > > >
> > > >>>> > >
> > > >>>> >
> > > >>>> >
> > > >>>> >
> > > >>>> > --
> > > >>>> > *Mike Tutkowski*
> > > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
> > > >>>> > e: mike.tutkowski@solidfire.com
> > > >>>> > o: 303.746.7302
> > > >>>> > Advancing the way the world uses the
> > > >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
> > > >>>> > *™*
> > > >>>> >
> > > >>>>
> > > >>>
> > > >>>
> > > >>>
> > > >>> --
> > > >>> *Mike Tutkowski*
> > > >>> *Senior CloudStack Developer, SolidFire Inc.*
> > > >>> e: mike.tutkowski@solidfire.com
> > > >>> o: 303.746.7302
> > > >>> Advancing the way the world uses the cloud<
> > > http://solidfire.com/solution/overview/?video=play>
> > > >>> *™*
> > > >>>
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> *Mike Tutkowski*
> > > >> *Senior CloudStack Developer, SolidFire Inc.*
> > > >> e: mike.tutkowski@solidfire.com
> > > >> o: 303.746.7302
> > > >> Advancing the way the world uses the cloud<
> > > http://solidfire.com/solution/overview/?video=play>
> > > >> *™*
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > *Mike Tutkowski*
> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > > e: mike.tutkowski@solidfire.com
> > > > o: 303.746.7302
> > > > Advancing the way the world uses the cloud<
> > > http://solidfire.com/solution/overview/?video=play>
> > > > *™*
> > > >
> > >
> > >
> > >
> > > --
> > > *Mike Tutkowski*
> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > e: mike.tutkowski@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the
> > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > *™*
> > >
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
This is how I've been trying to query for the status of the service (I
assume it could be started this way, as well, by changing "status" to
"start" or "restart"?):

mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
cloudstack-agent status

I get this back:

Failed to execute: * could not access PID file for cloudstack-agent
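
A quick sanity check that doesn't depend on the PID file (the class name and
log path here come from the agent's own startup output earlier in this
thread, so treat them as assumptions):

pgrep -fl AgentShell                    # is the agent JVM actually running?
ls /var/run | grep -i cloud             # did the init script ever write a PID file?
sudo tail -n 50 /var/log/cloudstack/agent/agent.log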

I've made a bunch of code changes recently, though, so I think I'm going to
rebuild and redeploy everything.

The debug info sounds helpful. Where can I set enable.debug?

Thanks, Marcus!


On Sat, Sep 21, 2013 at 4:11 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> OK, will check it out in the next few days. As mentioned, you can set up
> your Ubuntu vm as the management server as well if all else fails.  If you
> can get to the mgmt server on 8250 from the KVM host, then you need to
> enable.debug on the agent. It won't run without complaining loudly if it
> can't get to the mgmt server, and I didn't see that in your agent log, so
> perhaps it's not running. I assume you know how to stop/start the agent on
> KVM via 'service cloudstack-agent'.
> On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > Hey Marcus,
> >
> > I haven't yet been able to test my new code, but I thought you would be a
> > good person to ask to review it:
> >
> >
> >
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
> >
> > All it is supposed to do is attach and detach a data disk (that has
> > guaranteed IOPS) with KVM as the hypervisor. The data disk happens to be
> > from SolidFire-backed storage - where we have a 1:1 mapping between a
> > CloudStack volume and a data disk.
> >
> > There is no support for hypervisor snapshots or stuff like that (likely a
> > future release)...just attaching and detaching a data disk in 4.3.
> >
> > Thanks!
> >
> >
> > On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> > > When I re-deployed the DEBs, I didn't remove cloudstack-agent first.
> > Would
> > > that be a problem? I just did a sudo apt-get install cloudstack-agent.
> > >
> > >
> > > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
> > > mike.tutkowski@solidfire.com> wrote:
> > >
> > >> I get the same error running the command manually:
> > >>
> > >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> > >> cloudstack-agent status
> > >>  * could not access PID file for cloudstack-agent
> > >>
> > >>
> > >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
> > >> mike.tutkowski@solidfire.com> wrote:
> > >>
> > >>> agent.log looks OK to me:
> > >>>
> > >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell] (main:null)
> > Agent
> > >>> started
> > >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell] (main:null)
> > >>> Implementation Version is 4.3.0-SNAPSHOT
> > >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell] (main:null)
> > >>> agent.properties found at /etc/cloudstack/agent/agent.properties
> > >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell] (main:null)
> > >>> Defaulting to using properties file for storage
> > >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell] (main:null)
> > >>> Defaulting to the constant time backoff algorithm
> > >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils] (main:null)
> log4j
> > >>> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> > >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null) id is 3
> > >>> 2013-09-20 19:35:39,197 INFO
> > >>>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
> > >>> VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm
> > >>>
> > >>> However, I wasn't aware that setup.log was important. This seems to
> be
> > a
> > >>> problem, but I'm not sure what it might indicate:
> > >>>
> > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
> > >>> DEBUG:root:Failed to execute: * could not access PID file for
> > >>> cloudstack-agent
> > >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
> > >>>
> > >>>
> > >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <
> shadowsor@gmail.com
> > >wrote:
> > >>>
> > >>>> Sorry, I saw that in the log, I thought it was the agent log for
> some
> > >>>> reason. Is the agent started? That might be the place to look. There
> > is
> > >>>> an
> > >>>> agent log for the agent and one for the setup when it adds the host,
> > >>>> both
> > >>>> in /var/log
> > >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
> > >>>> mike.tutkowski@solidfire.com>
> > >>>> wrote:
> > >>>>
> > >>>> > Is it saying that the MS is at the IP address or the KVM host?
> > >>>> >
> > >>>> > The KVM host is at 192.168.233.10.
> > >>>> > The MS host is at 192.168.233.1.
> > >>>> >
> > >>>> > I see this for my host Global Settings parameter:
> > >>>> > host | The ip address of management server | 192.168.233.1
> > >>>> >
> > >>>> >
> > >>>> > /etc/cloudstack/agent/agent.properties has a host=192.168.233.1
> > value.
> > >>>> >
> > >>>> >
> > >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
> > >>>> shadowsor@gmail.com
> > >>>> > >wrote:
> > >>>> >
> > >>>> > > The log says your mgmt server is 192.168.233.10? But you tried
> to
> > >>>> telnet
> > >>>> > to
> > >>>> > > 192.168.233.1? It might be enough to change that in
> > >>>> > > /etc/cloudstack/agent/agent.properties, but you may want to edit
> > the
> > >>>> > config
> > >>>> > > as well to tell it the real ms IP.
> > >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
> > >>>> mike.tutkowski@solidfire.com
> > >>>> > >
> > >>>> > > wrote:
> > >>>> > >
> > >>>> > > > Here's what my /etc/network/interfaces file looks like, if
> that
> > >>>> is of
> > >>>> > > > interest (the 192.168.233.0 network is the NAT network VMware
> > >>>> Fusion
> > >>>> > set
> > >>>> > > > up):
> > >>>> > > >
> > >>>> > > > auto lo
> > >>>> > > > iface lo inet loopback
> > >>>> > > >
> > >>>> > > > auto eth0
> > >>>> > > > iface eth0 inet manual
> > >>>> > > >
> > >>>> > > > auto cloudbr0
> > >>>> > > > iface cloudbr0 inet static
> > >>>> > > >     address 192.168.233.10
> > >>>> > > >     netmask 255.255.255.0
> > >>>> > > >     network 192.168.233.0
> > >>>> > > >     broadcast 192.168.233.255
> > >>>> > > >     dns-nameservers 8.8.8.8
> > >>>> > > >     bridge_ports eth0
> > >>>> > > >     bridge_fd 5
> > >>>> > > >     bridge_stp off
> > >>>> > > >     bridge_maxwait 1
> > >>>> > > >     post-up route add default gw 192.168.233.2 metric 1
> > >>>> > > >     pre-down route del default gw 192.168.233.2
> > >>>> > > >
> > >>>> > > >
> > >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> > >>>> > > > mike.tutkowski@solidfire.com> wrote:
> > >>>> > > >
> > >>>> > > > > You appear to be correct. This is from the MS log (below).
> > >>>> Discovery
> > >>>> > > > timed
> > >>>> > > > > out.
> > >>>> > > > >
> > >>>> > > > > I'm not sure why this would be. My network settings
> shouldn't
> > >>>> have
> > >>>> > > > changed
> > >>>> > > > > since the last time I tried this.
> > >>>> > > > >
> > >>>> > > > > I am able to ping the KVM host from the MS host and vice
> > versa.
> > >>>> > > > >
> > >>>> > > > > I'm even able to manually kick off a VM on the KVM host and
> > >>>> ping from
> > >>>> > > it
> > >>>> > > > > to the MS host.
> > >>>> > > > >
> > >>>> > > > > I am using NAT from my Mac OS X host (also running the CS
> MS)
> > >>>> to the
> > >>>> > VM
> > >>>> > > > > running KVM (VMware Fusion).
> > >>>> > > > >
> > >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> > >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait
> > for
> > >>>> the
> > >>>> > > host
> > >>>> > > > > connecting to mgt svr, assuming it is failed
> > >>>> > > > > 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
> > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find
> the
> > >>>> server
> > >>>> > > > > resources at http://192.168.233.10
> > >>>> > > > > 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
> > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find
> > >>>> exception:
> > >>>> > > > > com.cloud.exception.DiscoveryException in error code list
> for
> > >>>> > > exceptions
> > >>>> > > > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
> > >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
> > >>>> > > > > com.cloud.exception.DiscoveryException: Unable to add the
> host
> > >>>> > > > > at
> > >>>> > > > >
> > >>>> > > >
> > >>>> > >
> > >>>> >
> > >>>>
> >
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> > >>>> > > > >
> > >>>> > > > > I do seem to be able to telnet in from my KVM host to the MS
> > >>>> host's
> > >>>> > > 8250
> > >>>> > > > > port:
> > >>>> > > > >
> > >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> > >>>> > > > > Trying 192.168.233.1...
> > >>>> > > > > Connected to 192.168.233.1.
> > >>>> > > > > Escape character is '^]'.
> > >>>> > > > >
> > >>>> > > >
> > >>>> > > >
> > >>>> > > >
> > >>>> > > > --
> > >>>> > > > *Mike Tutkowski*
> > >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > >>>> > > > e: mike.tutkowski@solidfire.com
> > >>>> > > > o: 303.746.7302
> > >>>> > > > Advancing the way the world uses the
> > >>>> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> > >>>> > > > *™*
> > >>>> > > >
> > >>>> > >
> > >>>> >
> > >>>> >
> > >>>> >
> > >>>> > --
> > >>>> > *Mike Tutkowski*
> > >>>> > *Senior CloudStack Developer, SolidFire Inc.*
> > >>>> > e: mike.tutkowski@solidfire.com
> > >>>> > o: 303.746.7302
> > >>>> > Advancing the way the world uses the
> > >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
> > >>>> > *™*
> > >>>> >
> > >>>>
> > >>>
> > >>>
> > >>>
> > >>> --
> > >>> *Mike Tutkowski*
> > >>> *Senior CloudStack Developer, SolidFire Inc.*
> > >>> e: mike.tutkowski@solidfire.com
> > >>> o: 303.746.7302
> > >>> Advancing the way the world uses the cloud<
> > http://solidfire.com/solution/overview/?video=play>
> > >>> *™*
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >> *Mike Tutkowski*
> > >> *Senior CloudStack Developer, SolidFire Inc.*
> > >> e: mike.tutkowski@solidfire.com
> > >> o: 303.746.7302
> > >> Advancing the way the world uses the cloud<
> > http://solidfire.com/solution/overview/?video=play>
> > >> *™*
> > >>
> > >
> > >
> > >
> > > --
> > > *Mike Tutkowski*
> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > e: mike.tutkowski@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the cloud<
> > http://solidfire.com/solution/overview/?video=play>
> > > *™*
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
OK, will check it out in the next few days. As mentioned, you can set up
your Ubuntu vm as the management server as well if all else fails.  If you
can get to the mgmt server on 8250 from the KVM host, then you need to
enable.debug on the agent. It won't run without complaining loudly if it
can't get to the mgmt server, and I didn't see that in your agent log, so
perhaps it's not running. I assume you know how to stop/start the agent on
KVM via 'service cloudstack-agent'.
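
In case it helps, the rough cycle on the KVM host would look something like
this (using the mgmt server address from earlier in the thread):

nc -zv 192.168.233.1 8250                    # can the host reach the mgmt server port at all?
sudo service cloudstack-agent restart        # then restart the agent...
tail -f /var/log/cloudstack/agent/agent.log  # ...and watch it try to connect
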
On Sep 21, 2013 3:02 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Hey Marcus,
>
> I haven't yet been able to test my new code, but I thought you would be a
> good person to ask to review it:
>
>
> https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37
>
> All it is supposed to do is attach and detach a data disk (that has
> guaranteed IOPS) with KVM as the hypervisor. The data disk happens to be
> from SolidFire-backed storage - where we have a 1:1 mapping between a
> CloudStack volume and a data disk.
>
> There is no support for hypervisor snapshots or stuff like that (likely a
> future release)...just attaching and detaching a data disk in 4.3.
>
> Thanks!
>
>
> On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> > When I re-deployed the DEBs, I didn't remove cloudstack-agent first.
> Would
> > that be a problem? I just did a sudo apt-get install cloudstack-agent.
> >
> >
> > On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> >> I get the same error running the command manually:
> >>
> >> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> >> cloudstack-agent status
> >>  * could not access PID file for cloudstack-agent
> >>
> >>
> >> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
> >> mike.tutkowski@solidfire.com> wrote:
> >>
> >>> agent.log looks OK to me:
> >>>
> >>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell] (main:null)
> Agent
> >>> started
> >>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell] (main:null)
> >>> Implementation Version is 4.3.0-SNAPSHOT
> >>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell] (main:null)
> >>> agent.properties found at /etc/cloudstack/agent/agent.properties
> >>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell] (main:null)
> >>> Defaulting to using properties file for storage
> >>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell] (main:null)
> >>> Defaulting to the constant time backoff algorithm
> >>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils] (main:null) log4j
> >>> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> >>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null) id is 3
> >>> 2013-09-20 19:35:39,197 INFO
> >>>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
> >>> VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm
> >>>
> >>> However, I wasn't aware that setup.log was important. This seems to be
> a
> >>> problem, but I'm not sure what it might indicate:
> >>>
> >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
> >>> DEBUG:root:Failed to execute: * could not access PID file for
> >>> cloudstack-agent
> >>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
> >>>
> >>>
> >>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >>>
> >>>> Sorry, I saw that in the log, I thought it was the agent log for some
> >>>> reason. Is the agent started? That might be the place to look. There
> is
> >>>> an
> >>>> agent log for the agent and one for the setup when it adds the host,
> >>>> both
> >>>> in /var/log
> >>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
> >>>> mike.tutkowski@solidfire.com>
> >>>> wrote:
> >>>>
> >>>> > Is it saying that the MS is at the IP address or the KVM host?
> >>>> >
> >>>> > The KVM host is at 192.168.233.10.
> >>>> > The MS host is at 192.168.233.1.
> >>>> >
> >>>> > I see this for my host Global Settings parameter:
> >>>> > host | The ip address of management server | 192.168.233.1
> >>>> >
> >>>> >
> >>>> > /etc/cloudstack/agent/agent.properties has a host=192.168.233.1
> value.
> >>>> >
> >>>> >
> >>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
> >>>> shadowsor@gmail.com
> >>>> > >wrote:
> >>>> >
> >>>> > > The log says your mgmt server is 192.168.233.10? But you tried to
> >>>> telnet
> >>>> > to
> >>>> > > 192.168.233.1? It might be enough to change that in
> >>>> > > /etc/cloudstack/agent/agent.properties, but you may want to edit
> the
> >>>> > config
> >>>> > > as well to tell it the real ms IP.
> >>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
> >>>> mike.tutkowski@solidfire.com
> >>>> > >
> >>>> > > wrote:
> >>>> > >
> >>>> > > > Here's what my /etc/network/interfaces file looks like, if that
> >>>> is of
> >>>> > > > interest (the 192.168.233.0 network is the NAT network VMware
> >>>> Fusion
> >>>> > set
> >>>> > > > up):
> >>>> > > >
> >>>> > > > auto lo
> >>>> > > > iface lo inet loopback
> >>>> > > >
> >>>> > > > auto eth0
> >>>> > > > iface eth0 inet manual
> >>>> > > >
> >>>> > > > auto cloudbr0
> >>>> > > > iface cloudbr0 inet static
> >>>> > > >     address 192.168.233.10
> >>>> > > >     netmask 255.255.255.0
> >>>> > > >     network 192.168.233.0
> >>>> > > >     broadcast 192.168.233.255
> >>>> > > >     dns-nameservers 8.8.8.8
> >>>> > > >     bridge_ports eth0
> >>>> > > >     bridge_fd 5
> >>>> > > >     bridge_stp off
> >>>> > > >     bridge_maxwait 1
> >>>> > > >     post-up route add default gw 192.168.233.2 metric 1
> >>>> > > >     pre-down route del default gw 192.168.233.2
> >>>> > > >
> >>>> > > >
> >>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> >>>> > > > mike.tutkowski@solidfire.com> wrote:
> >>>> > > >
> >>>> > > > > You appear to be correct. This is from the MS log (below).
> >>>> Discovery
> >>>> > > > timed
> >>>> > > > > out.
> >>>> > > > >
> >>>> > > > > I'm not sure why this would be. My network settings shouldn't
> >>>> have
> >>>> > > > changed
> >>>> > > > > since the last time I tried this.
> >>>> > > > >
> >>>> > > > > I am able to ping the KVM host from the MS host and vice
> versa.
> >>>> > > > >
> >>>> > > > > I'm even able to manually kick off a VM on the KVM host and
> >>>> ping from
> >>>> > > it
> >>>> > > > > to the MS host.
> >>>> > > > >
> >>>> > > > > I am using NAT from my Mac OS X host (also running the CS MS)
> >>>> to the
> >>>> > VM
> >>>> > > > > running KVM (VMware Fusion).
> >>>> > > > >
> >>>> > > > > 2013-09-20 19:40:40,141 DEBUG
> >>>> [c.c.h.k.d.LibvirtServerDiscoverer]
> >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait
> for
> >>>> the
> >>>> > > host
> >>>> > > > > connecting to mgt svr, assuming it is failed
> >>>> > > > > 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
> >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the
> >>>> server
> >>>> > > > > resources at http://192.168.233.10
> >>>> > > > > 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
> >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find
> >>>> exception:
> >>>> > > > > com.cloud.exception.DiscoveryException in error code list for
> >>>> > > exceptions
> >>>> > > > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
> >>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
> >>>> > > > > com.cloud.exception.DiscoveryException: Unable to add the host
> >>>> > > > > at
> >>>> > > > >
> >>>> > > >
> >>>> > >
> >>>> >
> >>>>
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> >>>> > > > >
> >>>> > > > > I do seem to be able to telnet in from my KVM host to the MS
> >>>> host's
> >>>> > > 8250
> >>>> > > > > port:
> >>>> > > > >
> >>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> >>>> > > > > Trying 192.168.233.1...
> >>>> > > > > Connected to 192.168.233.1.
> >>>> > > > > Escape character is '^]'.
> >>>> > > > >
> >>>> > > >
> >>>> > > >
> >>>> > > >
> >>>> > > > --
> >>>> > > > *Mike Tutkowski*
> >>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
> >>>> > > > e: mike.tutkowski@solidfire.com
> >>>> > > > o: 303.746.7302
> >>>> > > > Advancing the way the world uses the
> >>>> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> >>>> > > > *™*
> >>>> > > >
> >>>> > >
> >>>> >
> >>>> >
> >>>> >
> >>>> > --
> >>>> > *Mike Tutkowski*
> >>>> > *Senior CloudStack Developer, SolidFire Inc.*
> >>>> > e: mike.tutkowski@solidfire.com
> >>>> > o: 303.746.7302
> >>>> > Advancing the way the world uses the
> >>>> > cloud<http://solidfire.com/solution/overview/?video=play>
> >>>> > *™*
> >>>> >
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> *Mike Tutkowski*
> >>> *Senior CloudStack Developer, SolidFire Inc.*
> >>> e: mike.tutkowski@solidfire.com
> >>> o: 303.746.7302
> >>> Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> >>> *™*
> >>>
> >>
> >>
> >>
> >> --
> >> *Mike Tutkowski*
> >> *Senior CloudStack Developer, SolidFire Inc.*
> >> e: mike.tutkowski@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> >> *™*
> >>
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the cloud<
> http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hey Marcus,

I haven't yet been able to test my new code, but I thought you would be a
good person to ask to review it:

https://github.com/mike-tutkowski/incubator-cloudstack/commit/ea74b312a8a36801994500407fd54f0cdda55e37

All it is supposed to do is attach and detach a data disk (that has
guaranteed IOPS) with KVM as the hypervisor. The data disk happens to be
from SolidFire-backed storage - where we have a 1:1 mapping between a
CloudStack volume and a data disk.

There is no support for hypervisor snapshots or stuff like that (likely a
future release)...just attaching and detaching a data disk in 4.3.
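
At the host level the moving parts boil down to an Open iSCSI login followed
by a libvirt disk attach. Purely as a sketch - the portal, IQN, device path
and VM name below are placeholders, not what the patch actually uses:

# discover and log in to the target backing the CloudStack volume
iscsiadm -m discovery -t sendtargets -p 10.0.0.5:3260
iscsiadm -m node -T iqn.2010-01.com.solidfire:example-volume -p 10.0.0.5:3260 --login
# hand the resulting block device to the guest as a raw disk
virsh attach-disk i-2-7-VM \
  /dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.2010-01.com.solidfire:example-volume-lun-0 \
  vdb --cache none --live
# detach is the reverse: virsh detach-disk, then iscsiadm --logout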

Thanks!


On Fri, Sep 20, 2013 at 11:05 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> When I re-deployed the DEBs, I didn't remove cloudstack-agent first. Would
> that be a problem? I just did a sudo apt-get install cloudstack-agent.
>
>
> On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> I get the same error running the command manually:
>>
>> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
>> cloudstack-agent status
>>  * could not access PID file for cloudstack-agent
>>
>>
>> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> agent.log looks OK to me:
>>>
>>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell] (main:null) Agent
>>> started
>>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell] (main:null)
>>> Implementation Version is 4.3.0-SNAPSHOT
>>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell] (main:null)
>>> agent.properties found at /etc/cloudstack/agent/agent.properties
>>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell] (main:null)
>>> Defaulting to using properties file for storage
>>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell] (main:null)
>>> Defaulting to the constant time backoff algorithm
>>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils] (main:null) log4j
>>> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
>>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null) id is 3
>>> 2013-09-20 19:35:39,197 INFO
>>>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
>>> VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm
>>>
>>> However, I wasn't aware that setup.log was important. This seems to be a
>>> problem, but I'm not sure what it might indicate:
>>>
>>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
>>> DEBUG:root:Failed to execute: * could not access PID file for
>>> cloudstack-agent
>>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
>>>
>>>
>>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>
>>>> Sorry, I saw that in the log, I thought it was the agent log for some
>>>> reason. Is the agent started? That might be the place to look. There is
>>>> an
>>>> agent log for the agent and one for the setup when it adds the host,
>>>> both
>>>> in /var/log
>>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <
>>>> mike.tutkowski@solidfire.com>
>>>> wrote:
>>>>
>>>> > Is it saying that the MS is at the IP address or the KVM host?
>>>> >
>>>> > The KVM host is at 192.168.233.10.
>>>> > The MS host is at 192.168.233.1.
>>>> >
>>>> > I see this for my host Global Settings parameter:
>>>> > host | The ip address of management server | 192.168.233.1
>>>> >
>>>> >
>>>> > /etc/cloudstack/agent/agent.properties has a host=192.168.233.1 value.
>>>> >
>>>> >
>>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <
>>>> shadowsor@gmail.com
>>>> > >wrote:
>>>> >
>>>> > > The log says your mgmt server is 192.168.233.10? But you tried to
>>>> telnet
>>>> > to
>>>> > > 192.168.233.1? It might be enough to change that in
>>>> > > /etc/cloudstack/agent/agent.properties, but you may want to edit the
>>>> > config
>>>> > > as well to tell it the real ms IP.
>>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>>>> mike.tutkowski@solidfire.com
>>>> > >
>>>> > > wrote:
>>>> > >
>>>> > > > Here's what my /etc/network/interfaces file looks like, if that
>>>> is of
>>>> > > > interest (the 192.168.233.0 network is the NAT network VMware
>>>> Fusion
>>>> > set
>>>> > > > up):
>>>> > > >
>>>> > > > auto lo
>>>> > > > iface lo inet loopback
>>>> > > >
>>>> > > > auto eth0
>>>> > > > iface eth0 inet manual
>>>> > > >
>>>> > > > auto cloudbr0
>>>> > > > iface cloudbr0 inet static
>>>> > > >     address 192.168.233.10
>>>> > > >     netmask 255.255.255.0
>>>> > > >     network 192.168.233.0
>>>> > > >     broadcast 192.168.233.255
>>>> > > >     dns-nameservers 8.8.8.8
>>>> > > >     bridge_ports eth0
>>>> > > >     bridge_fd 5
>>>> > > >     bridge_stp off
>>>> > > >     bridge_maxwait 1
>>>> > > >     post-up route add default gw 192.168.233.2 metric 1
>>>> > > >     pre-down route del default gw 192.168.233.2
>>>> > > >
>>>> > > >
>>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
>>>> > > > mike.tutkowski@solidfire.com> wrote:
>>>> > > >
>>>> > > > > You appear to be correct. This is from the MS log (below).
>>>> Discovery
>>>> > > > timed
>>>> > > > > out.
>>>> > > > >
>>>> > > > > I'm not sure why this would be. My network settings shouldn't
>>>> have
>>>> > > > changed
>>>> > > > > since the last time I tried this.
>>>> > > > >
>>>> > > > > I am able to ping the KVM host from the MS host and vice versa.
>>>> > > > >
>>>> > > > > I'm even able to manually kick off a VM on the KVM host and
>>>> ping from
>>>> > > it
>>>> > > > > to the MS host.
>>>> > > > >
>>>> > > > > I am using NAT from my Mac OS X host (also running the CS MS)
>>>> to the
>>>> > VM
>>>> > > > > running KVM (VMware Fusion).
>>>> > > > >
>>>> > > > > 2013-09-20 19:40:40,141 DEBUG
>>>> [c.c.h.k.d.LibvirtServerDiscoverer]
>>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait for
>>>> the
>>>> > > host
>>>> > > > > connecting to mgt svr, assuming it is failed
>>>> > > > > 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
>>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the
>>>> server
>>>> > > > > resources at http://192.168.233.10
>>>> > > > > 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
>>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find
>>>> exception:
>>>> > > > > com.cloud.exception.DiscoveryException in error code list for
>>>> > > exceptions
>>>> > > > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
>>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
>>>> > > > > com.cloud.exception.DiscoveryException: Unable to add the host
>>>> > > > > at
>>>> > > > >
>>>> > > >
>>>> > >
>>>> >
>>>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>>>> > > > >
>>>> > > > > I do seem to be able to telnet in from my KVM host to the MS
>>>> host's
>>>> > > 8250
>>>> > > > > port:
>>>> > > > >
>>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>>>> > > > > Trying 192.168.233.1...
>>>> > > > > Connected to 192.168.233.1.
>>>> > > > > Escape character is '^]'.
>>>> > > > >
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > > > --
>>>> > > > *Mike Tutkowski*
>>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>>> > > > e: mike.tutkowski@solidfire.com
>>>> > > > o: 303.746.7302
>>>> > > > Advancing the way the world uses the
>>>> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> > > > *™*
>>>> > > >
>>>> > >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > *Mike Tutkowski*
>>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> > e: mike.tutkowski@solidfire.com
>>>> > o: 303.746.7302
>>>> > Advancing the way the world uses the
>>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> > *™*
>>>> >
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
When I re-deployed the DEBs, I didn't remove cloudstack-agent first. Would
that be a problem? I just did a sudo apt-get install cloudstack-agent.
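
If a half-replaced package turns out to be the problem, a clean cycle is
cheap (just the stock apt workflow; note that --purge also drops the config
under /etc/cloudstack/agent, so the agent would need to be set up again
afterwards):

sudo service cloudstack-agent stop
sudo apt-get remove --purge cloudstack-agent
sudo apt-get install cloudstack-agent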


On Fri, Sep 20, 2013 at 11:03 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> I get the same error running the command manually:
>
> mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
> cloudstack-agent status
>  * could not access PID file for cloudstack-agent
>
>
> On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> agent.log looks OK to me:
>>
>> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell] (main:null) Agent
>> started
>> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell] (main:null)
>> Implementation Version is 4.3.0-SNAPSHOT
>> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell] (main:null)
>> agent.properties found at /etc/cloudstack/agent/agent.properties
>> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell] (main:null)
>> Defaulting to using properties file for storage
>> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell] (main:null)
>> Defaulting to the constant time backoff algorithm
>> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils] (main:null) log4j
>> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
>> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null) id is 3
>> 2013-09-20 19:35:39,197 INFO
>>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
>> VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm
>>
>> However, I wasn't aware that setup.log was important. This seems to be a
>> problem, but I'm not sure what it might indicate:
>>
>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
>> DEBUG:root:Failed to execute: * could not access PID file for
>> cloudstack-agent
>> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
>>
>>
>> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> Sorry, I saw that in the log, I thought it was the agent log for some
>>> reason. Is the agent started? That might be the place to look. There is
>>> an
>>> agent log for the agent and one for the setup when it adds the host, both
>>> in /var/log
>>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com
>>> >
>>> wrote:
>>>
>>> > Is it saying that the MS is at the IP address or the KVM host?
>>> >
>>> > The KVM host is at 192.168.233.10.
>>> > The MS host is at 192.168.233.1.
>>> >
>>> > I see this for my host Global Settings parameter:
>>> > host | The ip address of management server | 192.168.233.1
>>> >
>>> >
>>> > /etc/cloudstack/agent/agent.properties has a host=192.168.233.1 value.
>>> >
>>> >
>>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <shadowsor@gmail.com
>>> > >wrote:
>>> >
>>> > > The log says your mgmt server is 192.168.233.10? But you tried to
>>> telnet
>>> > to
>>> > > 192.168.233.1? It might be enough to change that in
>>> > > /etc/cloudstack/agent/agent.properties, but you may want to edit the
>>> > config
>>> > > as well to tell it the real ms IP.
>>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>>> mike.tutkowski@solidfire.com
>>> > >
>>> > > wrote:
>>> > >
>>> > > > Here's what my /etc/network/interfaces file looks like, if that is
>>> of
>>> > > > interest (the 192.168.233.0 network is the NAT network VMware
>>> Fusion
>>> > set
>>> > > > up):
>>> > > >
>>> > > > auto lo
>>> > > > iface lo inet loopback
>>> > > >
>>> > > > auto eth0
>>> > > > iface eth0 inet manual
>>> > > >
>>> > > > auto cloudbr0
>>> > > > iface cloudbr0 inet static
>>> > > >     address 192.168.233.10
>>> > > >     netmask 255.255.255.0
>>> > > >     network 192.168.233.0
>>> > > >     broadcast 192.168.233.255
>>> > > >     dns-nameservers 8.8.8.8
>>> > > >     bridge_ports eth0
>>> > > >     bridge_fd 5
>>> > > >     bridge_stp off
>>> > > >     bridge_maxwait 1
>>> > > >     post-up route add default gw 192.168.233.2 metric 1
>>> > > >     pre-down route del default gw 192.168.233.2
>>> > > >
>>> > > >
>>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
>>> > > > mike.tutkowski@solidfire.com> wrote:
>>> > > >
>>> > > > > You appear to be correct. This is from the MS log (below).
>>> Discovery
>>> > > > timed
>>> > > > > out.
>>> > > > >
>>> > > > > I'm not sure why this would be. My network settings shouldn't
>>> have
>>> > > > changed
>>> > > > > since the last time I tried this.
>>> > > > >
>>> > > > > I am able to ping the KVM host from the MS host and vice versa.
>>> > > > >
>>> > > > > I'm even able to manually kick off a VM on the KVM host and ping
>>> from
>>> > > it
>>> > > > > to the MS host.
>>> > > > >
>>> > > > > I am using NAT from my Mac OS X host (also running the CS MS) to
>>> the
>>> > VM
>>> > > > > running KVM (VMware Fusion).
>>> > > > >
>>> > > > > 2013-09-20 19:40:40,141 DEBUG [c.c.h.k.d.LibvirtServerDiscoverer]
>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait for
>>> the
>>> > > host
>>> > > > > connecting to mgt svr, assuming it is failed
>>> > > > > 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the
>>> server
>>> > > > > resources at http://192.168.233.10
>>> > > > > 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find
>>> exception:
>>> > > > > com.cloud.exception.DiscoveryException in error code list for
>>> > > exceptions
>>> > > > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
>>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
>>> > > > > com.cloud.exception.DiscoveryException: Unable to add the host
>>> > > > > at
>>> > > > >
>>> > > >
>>> > >
>>> >
>>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>>> > > > >
>>> > > > > I do seem to be able to telnet in from my KVM host to the MS
>>> host's
>>> > > 8250
>>> > > > > port:
>>> > > > >
>>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>>> > > > > Trying 192.168.233.1...
>>> > > > > Connected to 192.168.233.1.
>>> > > > > Escape character is '^]'.
>>> > > > >
>>> > > >
>>> > > >
>>> > > >
>>> > > > --
>>> > > > *Mike Tutkowski*
>>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>>> > > > e: mike.tutkowski@solidfire.com
>>> > > > o: 303.746.7302
>>> > > > Advancing the way the world uses the
>>> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>>> > > > *™*
>>> > > >
>>> > >
>>> >
>>> >
>>> >
>>> > --
>>> > *Mike Tutkowski*
>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> > e: mike.tutkowski@solidfire.com
>>> > o: 303.746.7302
>>> > Advancing the way the world uses the
>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> > *™*
>>> >
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
I get the same error running the command manually:

mtutkowski@ubuntu:/etc/cloudstack/agent$ sudo /usr/sbin/service
cloudstack-agent status
 * could not access PID file for cloudstack-agent
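
For what it's worth, a couple of quick checks to see whether the agent
process is actually up despite the missing PID file (the PID file path
below is a guess and may differ by distro/version):

# is the agent JVM actually running?
ps aux | grep -i [c]loudstack-agent
# does the PID file the init script expects exist? (assumed path)
ls -l /var/run/cloudstack-agent.pid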


On Fri, Sep 20, 2013 at 11:00 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> agent.log looks OK to me:
>
> 2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell] (main:null) Agent
> started
> 2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell] (main:null)
> Implementation Version is 4.3.0-SNAPSHOT
> 2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell] (main:null)
> agent.properties found at /etc/cloudstack/agent/agent.properties
> 2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell] (main:null)
> Defaulting to using properties file for storage
> 2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell] (main:null)
> Defaulting to the constant time backoff algorithm
> 2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils] (main:null) log4j
> configuration found at /etc/cloudstack/agent/log4j-cloud.xml
> 2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null) id is 3
> 2013-09-20 19:35:39,197 INFO
>  [resource.virtualnetwork.VirtualRoutingResource] (main:null)
> VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm
>
> However, I wasn't aware that setup.log was important. This seems to be a
> problem, but I'm not sure what it might indicate:
>
> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
> DEBUG:root:Failed to execute: * could not access PID file for
> cloudstack-agent
> DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start
>
>
> On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Sorry, I saw that in the log, I thought it was the agent log for some
>> reason. Is the agent started? That might be the place to look. There is an
>> agent log for the agent and one for the setup when it adds the host, both
>> in /var/log
>> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>
>> > Is it saying that the MS is at the IP address or the KVM host?
>> >
>> > The KVM host is at 192.168.233.10.
>> > The MS host is at 192.168.233.1.
>> >
>> > I see this for my host Global Settings parameter:
>> > host | The ip address of management server | 192.168.233.1
>> >
>> >
>> > /etc/cloudstack/agent/agent.properties has a host=192.168.233.1 value.
>> >
>> >
>> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <shadowsor@gmail.com
>> > >wrote:
>> >
>> > > The log says your mgmt server is 192.168.233.10? But you tried to
>> telnet
>> > to
>> > > 192.168.233.1? It might be enough to change that in
>> > > /etc/cloudstack/agent/agent.properties, but you may want to edit the
>> > config
>> > > as well to tell it the real ms IP.
>> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
>> mike.tutkowski@solidfire.com
>> > >
>> > > wrote:
>> > >
>> > > > Here's what my /etc/network/interfaces file looks like, if that is
>> of
>> > > > interest (the 192.168.233.0 network is the NAT network VMware Fusion
>> > set
>> > > > up):
>> > > >
>> > > > auto lo
>> > > > iface lo inet loopback
>> > > >
>> > > > auto eth0
>> > > > iface eth0 inet manual
>> > > >
>> > > > auto cloudbr0
>> > > > iface cloudbr0 inet static
>> > > >     address 192.168.233.10
>> > > >     netmask 255.255.255.0
>> > > >     network 192.168.233.0
>> > > >     broadcast 192.168.233.255
>> > > >     dns-nameservers 8.8.8.8
>> > > >     bridge_ports eth0
>> > > >     bridge_fd 5
>> > > >     bridge_stp off
>> > > >     bridge_maxwait 1
>> > > >     post-up route add default gw 192.168.233.2 metric 1
>> > > >     pre-down route del default gw 192.168.233.2
>> > > >
>> > > >
>> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
>> > > > mike.tutkowski@solidfire.com> wrote:
>> > > >
>> > > > > You appear to be correct. This is from the MS log (below).
>> Discovery
>> > > > timed
>> > > > > out.
>> > > > >
>> > > > > I'm not sure why this would be. My network settings shouldn't have
>> > > > changed
>> > > > > since the last time I tried this.
>> > > > >
>> > > > > I am able to ping the KVM host from the MS host and vice versa.
>> > > > >
>> > > > > I'm even able to manually kick off a VM on the KVM host and ping
>> from
>> > > it
>> > > > > to the MS host.
>> > > > >
>> > > > > I am using NAT from my Mac OS X host (also running the CS MS) to
>> the
>> > VM
>> > > > > running KVM (VMware Fusion).
>> > > > >
>> > > > > 2013-09-20 19:40:40,141 DEBUG [c.c.h.k.d.LibvirtServerDiscoverer]
>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait for
>> the
>> > > host
>> > > > > connecting to mgt svr, assuming it is failed
>> > > > > 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the
>> server
>> > > > > resources at http://192.168.233.10
>> > > > > 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find
>> exception:
>> > > > > com.cloud.exception.DiscoveryException in error code list for
>> > > exceptions
>> > > > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
>> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
>> > > > > com.cloud.exception.DiscoveryException: Unable to add the host
>> > > > > at
>> > > > >
>> > > >
>> > >
>> >
>> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>> > > > >
>> > > > > I do seem to be able to telnet in from my KVM host to the MS
>> host's
>> > > 8250
>> > > > > port:
>> > > > >
>> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
>> > > > > Trying 192.168.233.1...
>> > > > > Connected to 192.168.233.1.
>> > > > > Escape character is '^]'.
>> > > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > *Mike Tutkowski*
>> > > > *Senior CloudStack Developer, SolidFire Inc.*
>> > > > e: mike.tutkowski@solidfire.com
>> > > > o: 303.746.7302
>> > > > Advancing the way the world uses the
>> > > > cloud<http://solidfire.com/solution/overview/?video=play>
>> > > > *™*
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>> >
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
agent.log looks OK to me:

2013-09-20 19:35:39,010 INFO  [cloud.agent.AgentShell] (main:null) Agent
started
2013-09-20 19:35:39,011 INFO  [cloud.agent.AgentShell] (main:null)
Implementation Version is 4.3.0-SNAPSHOT
2013-09-20 19:35:39,015 INFO  [cloud.agent.AgentShell] (main:null)
agent.properties found at /etc/cloudstack/agent/agent.properties
2013-09-20 19:35:39,023 INFO  [cloud.agent.AgentShell] (main:null)
Defaulting to using properties file for storage
2013-09-20 19:35:39,024 INFO  [cloud.agent.AgentShell] (main:null)
Defaulting to the constant time backoff algorithm
2013-09-20 19:35:39,029 INFO  [cloud.utils.LogUtils] (main:null) log4j
configuration found at /etc/cloudstack/agent/log4j-cloud.xml
2013-09-20 19:35:39,163 INFO  [cloud.agent.Agent] (main:null) id is 3
2013-09-20 19:35:39,197 INFO
 [resource.virtualnetwork.VirtualRoutingResource] (main:null)
VirtualRoutingResource _scriptDir to use: scripts/network/domr/kvm

However, I wasn't aware that setup.log was important. This seems to be a
problem, but I'm not sure what it might indicate:

DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent status
DEBUG:root:Failed to execute: * could not access PID file for
cloudstack-agent
DEBUG:root:execute:sudo /usr/sbin/service cloudstack-agent start


On Fri, Sep 20, 2013 at 10:53 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Sorry, I saw that in the log, I thought it was the agent log for some
> reason. Is the agent started? That might be the place to look. There is an
> agent log for the agent and one for the setup when it adds the host, both
> in /var/log
> On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > Is it saying that the MS is at the IP address or the KVM host?
> >
> > The KVM host is at 192.168.233.10.
> > The MS host is at 192.168.233.1.
> >
> > I see this for my host Global Settings parameter:
> > host | The ip address of management server | 192.168.233.1
> >
> >
> > /etc/cloudstack/agent/agent.properties has a host=192.168.233.1 value.
> >
> >
> > On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> >
> > > The log says your mgmt server is 192.168.233.10? But you tried to
> telnet
> > to
> > > 192.168.233.1? It might be enough to change that in
> > > /etc/cloudstack/agent/agent.properties, but you may want to edit the
> > config
> > > as well to tell it the real ms IP.
> > > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com
> > >
> > > wrote:
> > >
> > > > Here's what my /etc/network/interfaces file looks like, if that is of
> > > > interest (the 192.168.233.0 network is the NAT network VMware Fusion
> > set
> > > > up):
> > > >
> > > > auto lo
> > > > iface lo inet loopback
> > > >
> > > > auto eth0
> > > > iface eth0 inet manual
> > > >
> > > > auto cloudbr0
> > > > iface cloudbr0 inet static
> > > >     address 192.168.233.10
> > > >     netmask 255.255.255.0
> > > >     network 192.168.233.0
> > > >     broadcast 192.168.233.255
> > > >     dns-nameservers 8.8.8.8
> > > >     bridge_ports eth0
> > > >     bridge_fd 5
> > > >     bridge_stp off
> > > >     bridge_maxwait 1
> > > >     post-up route add default gw 192.168.233.2 metric 1
> > > >     pre-down route del default gw 192.168.233.2
> > > >
> > > >
> > > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> > > > mike.tutkowski@solidfire.com> wrote:
> > > >
> > > > > You appear to be correct. This is from the MS log (below).
> Discovery
> > > > timed
> > > > > out.
> > > > >
> > > > > I'm not sure why this would be. My network settings shouldn't have
> > > > changed
> > > > > since the last time I tried this.
> > > > >
> > > > > I am able to ping the KVM host from the MS host and vice versa.
> > > > >
> > > > > I'm even able to manually kick off a VM on the KVM host and ping
> from
> > > it
> > > > > to the MS host.
> > > > >
> > > > > I am using NAT from my Mac OS X host (also running the CS MS) to
> the
> > VM
> > > > > running KVM (VMware Fusion).
> > > > >
> > > > > 2013-09-20 19:40:40,141 DEBUG [c.c.h.k.d.LibvirtServerDiscoverer]
> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait for the
> > > host
> > > > > connecting to mgt svr, assuming it is failed
> > > > > 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the
> server
> > > > > resources at http://192.168.233.10
> > > > > 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find
> exception:
> > > > > com.cloud.exception.DiscoveryException in error code list for
> > > exceptions
> > > > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
> > > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
> > > > > com.cloud.exception.DiscoveryException: Unable to add the host
> > > > > at
> > > > >
> > > >
> > >
> >
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> > > > >
> > > > > I do seem to be able to telnet in from my KVM host to the MS host's
> > > 8250
> > > > > port:
> > > > >
> > > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> > > > > Trying 192.168.233.1...
> > > > > Connected to 192.168.233.1.
> > > > > Escape character is '^]'.
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Mike Tutkowski*
> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > > e: mike.tutkowski@solidfire.com
> > > > o: 303.746.7302
> > > > Advancing the way the world uses the
> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > *™*
> > > >
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Sorry, I saw that in the log, I thought it was the agent log for some
reason. Is the agent started? That might be the place to look. There is an
agent log for the agent and one for the setup when it adds the host, both
in /var/log
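
Something like this should show whether it's running and what both logs
say (exact paths may vary by version, but they're usually under
/var/log/cloudstack/agent):

sudo service cloudstack-agent status
sudo tail -n 50 /var/log/cloudstack/agent/agent.log
sudo tail -n 50 /var/log/cloudstack/agent/setup.log
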
On Sep 20, 2013 10:42 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Is it saying that the MS is at the IP address or the KVM host?
>
> The KVM host is at 192.168.233.10.
> The MS host is at 192.168.233.1.
>
> I see this for my host Global Settings parameter:
> host | The ip address of management server | 192.168.233.1
>
>
> /etc/cloudstack/agent/agent.properties has a host=192.168.233.1 value.
>
>
> On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
>
> > The log says your mgmt server is 192.168.233.10? But you tried to telnet
> to
> > 192.168.233.1? It might be enough to change that in
> > /etc/cloudstack/agent/agent.properties, but you may want to edit the
> config
> > as well to tell it the real ms IP.
> > On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com
> >
> > wrote:
> >
> > > Here's what my /etc/network/interfaces file looks like, if that is of
> > > interest (the 192.168.233.0 network is the NAT network VMware Fusion
> set
> > > up):
> > >
> > > auto lo
> > > iface lo inet loopback
> > >
> > > auto eth0
> > > iface eth0 inet manual
> > >
> > > auto cloudbr0
> > > iface cloudbr0 inet static
> > >     address 192.168.233.10
> > >     netmask 255.255.255.0
> > >     network 192.168.233.0
> > >     broadcast 192.168.233.255
> > >     dns-nameservers 8.8.8.8
> > >     bridge_ports eth0
> > >     bridge_fd 5
> > >     bridge_stp off
> > >     bridge_maxwait 1
> > >     post-up route add default gw 192.168.233.2 metric 1
> > >     pre-down route del default gw 192.168.233.2
> > >
> > >
> > > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> > > mike.tutkowski@solidfire.com> wrote:
> > >
> > > > You appear to be correct. This is from the MS log (below). Discovery
> > > timed
> > > > out.
> > > >
> > > > I'm not sure why this would be. My network settings shouldn't have
> > > changed
> > > > since the last time I tried this.
> > > >
> > > > I am able to ping the KVM host from the MS host and vice versa.
> > > >
> > > > I'm even able to manually kick off a VM on the KVM host and ping from
> > it
> > > > to the MS host.
> > > >
> > > > I am using NAT from my Mac OS X host (also running the CS MS) to the
> VM
> > > > running KVM (VMware Fusion).
> > > >
> > > > 2013-09-20 19:40:40,141 DEBUG [c.c.h.k.d.LibvirtServerDiscoverer]
> > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait for the
> > host
> > > > connecting to mgt svr, assuming it is failed
> > > > 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
> > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the server
> > > > resources at http://192.168.233.10
> > > > 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
> > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find exception:
> > > > com.cloud.exception.DiscoveryException in error code list for
> > exceptions
> > > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
> > > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
> > > > com.cloud.exception.DiscoveryException: Unable to add the host
> > > > at
> > > >
> > >
> >
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> > > >
> > > > I do seem to be able to telnet in from my KVM host to the MS host's
> > 8250
> > > > port:
> > > >
> > > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> > > > Trying 192.168.233.1...
> > > > Connected to 192.168.233.1.
> > > > Escape character is '^]'.
> > > >
> > >
> > >
> > >
> > > --
> > > *Mike Tutkowski*
> > > *Senior CloudStack Developer, SolidFire Inc.*
> > > e: mike.tutkowski@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the
> > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > *™*
> > >
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Is it saying that the MS is at the IP address or the KVM host?

The KVM host is at 192.168.233.10.
The MS host is at 192.168.233.1.

I see this for my host Global Settings parameter:
host | The ip address of management server | 192.168.233.1


/etc/cloudstack/agent/agent.properties has a host=192.168.233.1 value.
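
For reference, the relevant part of agent.properties looks roughly like
this (illustrative excerpt only, other keys omitted; port shown at its
default):

# /etc/cloudstack/agent/agent.properties
host=192.168.233.1
port=8250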


On Fri, Sep 20, 2013 at 10:21 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> The log says your mgmt server is 192.168.233.10? But you tried to telnet to
> 192.168.233.1? It might be enough to change that in
> /etc/cloudstack/agent/agent.properties, but you may want to edit the config
> as well to tell it the real ms IP.
> On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > Here's what my /etc/network/interfaces file looks like, if that is of
> > interest (the 192.168.233.0 network is the NAT network VMware Fusion set
> > up):
> >
> > auto lo
> > iface lo inet loopback
> >
> > auto eth0
> > iface eth0 inet manual
> >
> > auto cloudbr0
> > iface cloudbr0 inet static
> >     address 192.168.233.10
> >     netmask 255.255.255.0
> >     network 192.168.233.0
> >     broadcast 192.168.233.255
> >     dns-nameservers 8.8.8.8
> >     bridge_ports eth0
> >     bridge_fd 5
> >     bridge_stp off
> >     bridge_maxwait 1
> >     post-up route add default gw 192.168.233.2 metric 1
> >     pre-down route del default gw 192.168.233.2
> >
> >
> > On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> > > You appear to be correct. This is from the MS log (below). Discovery
> > timed
> > > out.
> > >
> > > I'm not sure why this would be. My network settings shouldn't have
> > changed
> > > since the last time I tried this.
> > >
> > > I am able to ping the KVM host from the MS host and vice versa.
> > >
> > > I'm even able to manually kick off a VM on the KVM host and ping from
> it
> > > to the MS host.
> > >
> > > I am using NAT from my Mac OS X host (also running the CS MS) to the VM
> > > running KVM (VMware Fusion).
> > >
> > > 2013-09-20 19:40:40,141 DEBUG [c.c.h.k.d.LibvirtServerDiscoverer]
> > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait for the
> host
> > > connecting to mgt svr, assuming it is failed
> > > 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
> > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the server
> > > resources at http://192.168.233.10
> > > 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
> > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find exception:
> > > com.cloud.exception.DiscoveryException in error code list for
> exceptions
> > > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
> > > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
> > > com.cloud.exception.DiscoveryException: Unable to add the host
> > > at
> > >
> >
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> > >
> > > I do seem to be able to telnet in from my KVM host to the MS host's
> 8250
> > > port:
> > >
> > > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> > > Trying 192.168.233.1...
> > > Connected to 192.168.233.1.
> > > Escape character is '^]'.
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
The log says your mgmt server is 192.168.233.10? But you tried to telnet to
192.168.233.1? It might be enough to change that in
/etc/cloudstack/agent/agent.properties, but you may want to edit the config
as well to tell it the real ms IP.
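
In other words, something along these lines (a sketch; substitute the
real management server IP):

sudo sed -i 's/^host=.*/host=<management server IP>/' /etc/cloudstack/agent/agent.properties
sudo service cloudstack-agent restart
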
On Sep 20, 2013 10:12 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Here's what my /etc/network/interfaces file looks like, if that is of
> interest (the 192.168.233.0 network is the NAT network VMware Fusion set
> up):
>
> auto lo
> iface lo inet loopback
>
> auto eth0
> iface eth0 inet manual
>
> auto cloudbr0
> iface cloudbr0 inet static
>     address 192.168.233.10
>     netmask 255.255.255.0
>     network 192.168.233.0
>     broadcast 192.168.233.255
>     dns-nameservers 8.8.8.8
>     bridge_ports eth0
>     bridge_fd 5
>     bridge_stp off
>     bridge_maxwait 1
>     post-up route add default gw 192.168.233.2 metric 1
>     pre-down route del default gw 192.168.233.2
>
>
> On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> > You appear to be correct. This is from the MS log (below). Discovery
> timed
> > out.
> >
> > I'm not sure why this would be. My network settings shouldn't have
> changed
> > since the last time I tried this.
> >
> > I am able to ping the KVM host from the MS host and vice versa.
> >
> > I'm even able to manually kick off a VM on the KVM host and ping from it
> > to the MS host.
> >
> > I am using NAT from my Mac OS X host (also running the CS MS) to the VM
> > running KVM (VMware Fusion).
> >
> > 2013-09-20 19:40:40,141 DEBUG [c.c.h.k.d.LibvirtServerDiscoverer]
> > (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait for the host
> > connecting to mgt svr, assuming it is failed
> > 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
> > (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the server
> > resources at http://192.168.233.10
> > 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
> > (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find exception:
> > com.cloud.exception.DiscoveryException in error code list for exceptions
> > 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
> > (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
> > com.cloud.exception.DiscoveryException: Unable to add the host
> > at
> >
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
> >
> > I do seem to be able to telnet in from my KVM host to the MS host's 8250
> > port:
> >
> > mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> > Trying 192.168.233.1...
> > Connected to 192.168.233.1.
> > Escape character is '^]'.
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Here's what my /etc/network/interfaces file looks like, if that is of
interest (the 192.168.233.0 network is the NAT network VMware Fusion set
up):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto cloudbr0
iface cloudbr0 inet static
    address 192.168.233.10
    netmask 255.255.255.0
    network 192.168.233.0
    broadcast 192.168.233.255
    dns-nameservers 8.8.8.8
    bridge_ports eth0
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1
    post-up route add default gw 192.168.233.2 metric 1
    pre-down route del default gw 192.168.233.2
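
(For what it's worth, the bridge and routing can be sanity-checked with
something like the following; output omitted here.)

brctl show cloudbr0      # eth0 should show up as a bridge port
ip addr show cloudbr0    # expect 192.168.233.10/24 here
ip route                 # expect a default route via 192.168.233.2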


On Fri, Sep 20, 2013 at 10:08 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> You appear to be correct. This is from the MS log (below). Discovery timed
> out.
>
> I'm not sure why this would be. My network settings shouldn't have changed
> since the last time I tried this.
>
> I am able to ping the KVM host from the MS host and vice versa.
>
> I'm even able to manually kick off a VM on the KVM host and ping from it
> to the MS host.
>
> I am using NAT from my Mac OS X host (also running the CS MS) to the VM
> running KVM (VMware Fusion).
>
> 2013-09-20 19:40:40,141 DEBUG [c.c.h.k.d.LibvirtServerDiscoverer]
> (613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait for the host
> connecting to mgt svr, assuming it is failed
> 2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
> (613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the server
> resources at http://192.168.233.10
> 2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
> (613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find exception:
> com.cloud.exception.DiscoveryException in error code list for exceptions
> 2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
> (613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
> com.cloud.exception.DiscoveryException: Unable to add the host
> at
> com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)
>
> I do seem to be able to telnet in from my KVM host to the MS host's 8250
> port:
>
> mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
> Trying 192.168.233.1...
> Connected to 192.168.233.1.
> Escape character is '^]'.
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
You appear to be correct. This is from the MS log (below). Discovery timed
out.

I'm not sure why this would be. My network settings shouldn't have changed
since the last time I tried this.

I am able to ping the KVM host from the MS host and vice versa.

I'm even able to manually kick off a VM on the KVM host and ping from it to
the MS host.

I am using NAT from my Mac OS X host (also running the CS MS) to the VM
running KVM (VMware Fusion).

2013-09-20 19:40:40,141 DEBUG [c.c.h.k.d.LibvirtServerDiscoverer]
(613487991@qtp-1659933291-3:ctx-6b28dc48) Timeout, to wait for the host
connecting to mgt svr, assuming it is failed
2013-09-20 19:40:40,144 WARN  [c.c.r.ResourceManagerImpl]
(613487991@qtp-1659933291-3:ctx-6b28dc48) Unable to find the server
resources at http://192.168.233.10
2013-09-20 19:40:40,145 INFO  [c.c.u.e.CSExceptionErrorCode]
(613487991@qtp-1659933291-3:ctx-6b28dc48) Could not find exception:
com.cloud.exception.DiscoveryException in error code list for exceptions
2013-09-20 19:40:40,147 WARN  [o.a.c.a.c.a.h.AddHostCmd]
(613487991@qtp-1659933291-3:ctx-6b28dc48) Exception:
com.cloud.exception.DiscoveryException: Unable to add the host
at
com.cloud.resource.ResourceManagerImpl.discoverHostsFull(ResourceManagerImpl.java:778)

I do seem to be able to telnet in from my KVM host to the MS host's 8250
port:

mtutkowski@ubuntu:~$ telnet 192.168.233.1 8250
Trying 192.168.233.1...
Connected to 192.168.233.1.
Escape character is '^]'.

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
You may not be able to connect from the VM to the host on the interface
you have defined. I'm not sure if you've set up your mgmt net as the
host-only or the NAT network. Try to telnet from the KVM host to your
Mac's IP for that network on port 8250.

If all else fails, just build the .deb packages and install the mgmt
server inside the VM; that's the only way I do KVM development, since I
build the packages anyhow to get the agent installed.
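
Roughly, building from the source tree looks something like this (a
sketch only; the packaging scripts under packaging/ in the source tree
are the authoritative steps, and profiles/paths may differ by branch):

# build CloudStack, then the Debian packages
mvn clean install -P developer,systemvm -DskipTests
dpkg-buildpackage -uc -us -b

# on the Ubuntu VM, install the resulting packages
sudo dpkg -i cloudstack-common_*.deb cloudstack-management_*.deb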

On Fri, Sep 20, 2013 at 4:22 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> Got most of my KVM code written to support managed storage (I believe).
>
> I'm in the process of testing it, but having a hard time adding a KVM host
> to my MS.
>
> I took a look at agent.log and it repeats the following line many times:
>
> 2013-09-20 14:00:17,264 INFO  [utils.nio.NioClient] (Agent-Selector:null)
> Connecting to 192.168.233.1:8250
> 2013-09-20 14:00:17,265 WARN  [utils.nio.NioConnection]
> (Agent-Selector:null) Unable to connect to remote: is there a server
> running on port 8250
>
> When I go over to the host running my management server (on
> 192.168.233.10), it appears it's listening on port 8250 (the CS MS is
> running on Mac OS X 10.8.3):
>
> mtutkowski-LT:~ mtutkowski$ sudo lsof -i -P | grep 8250
> java      35404     mtutkowski  303u  IPv6 0x91ca095ebc7bd3eb      0t0
>  TCP *:8250 (LISTEN)
>
> The KVM host is on Ubuntu 12.04.1. I don't think any firewalls are in play.
>
> Any thoughts on this?
>
> Thanks!
>
>
> On Wed, Sep 18, 2013 at 1:51 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Actually, I may have put that code in a place that is never called
>> nowadays.
>>
>> Do you know why we have AttachVolumeCommand in LibvirtComputingResource
>> and AttachCommand/DettachCommand in KVMStorageProcessor?
>>
>> They seem like they do the same thing.
>>
>> If I recall, this change went in with 4.2 and it's the same story for
>> XenServer and VMware.
>>
>> Maybe this location (in KVMStorageProcessor) makes more sense:
>>
>>     @Override
>>
>>     public Answer attachVolume(AttachCommand cmd) {
>>
>>         DiskTO disk = cmd.getDisk();
>>
>>         VolumeObjectTO vol = (VolumeObjectTO) disk.getData();
>>
>>         PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO)
>> vol.getDataStore();
>>
>>         String vmName = cmd.getVmName();
>>
>>         try {
>>
>>             Connect conn = LibvirtConnection.getConnectionByVmName(vmName);
>>
>> *            if (cmd.isManaged()) {*
>>
>> *                storagePoolMgr.createStoragePool(cmd.get_iScsiName(),
>> cmd.getStorageHost(), cmd.getStoragePort(),*
>>
>> *                        vol.getPath(), null, primaryStore.getPoolType());
>> *
>>
>> *            }*
>>
>>             KVMStoragePool primary = storagePoolMgr.getStoragePool(primaryStore.getPoolType(),
>> primaryStore.getUuid());
>>
>>             KVMPhysicalDisk phyDisk =
>> primary.getPhysicalDisk(vol.getPath());
>>
>>             attachOrDetachDisk(conn, true, vmName, phyDisk,
>> disk.getDiskSeq().intValue());
>>
>>             return new AttachAnswer(disk);
>>
>>         } catch (LibvirtException e) {
>>
>>             s_logger.debug("Failed to attach volume: " + vol.getPath() + ",
>> due to " + e.toString());
>>
>>             return new AttachAnswer(e.toString());
>>
>>         } catch (InternalErrorException e) {
>>
>>             s_logger.debug("Failed to attach volume: " + vol.getPath() + ",
>> due to " + e.toString());
>>
>>             return new AttachAnswer(e.toString());
>>
>>         }
>>
>>     }
>>
>>
>>     @Override
>>
>>     public Answer dettachVolume(DettachCommand cmd) {
>>
>>         DiskTO disk = cmd.getDisk();
>>
>>         VolumeObjectTO vol = (VolumeObjectTO) disk.getData();
>>
>>         PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO)
>> vol.getDataStore();
>>
>>         String vmName = cmd.getVmName();
>>
>>         try {
>>
>>             Connect conn = LibvirtConnection.getConnectionByVmName(vmName);
>>
>>             KVMStoragePool primary = storagePoolMgr.getStoragePool(primaryStore.getPoolType(),
>> primaryStore.getUuid());
>>
>>             KVMPhysicalDisk phyDisk =
>> primary.getPhysicalDisk(vol.getPath());
>>
>>             attachOrDetachDisk(conn, false, vmName, phyDisk,
>> disk.getDiskSeq().intValue());
>>
>> *            if (cmd.isManaged()) {*
>>
>> *                storagePoolMgr.deleteStoragePool(primaryStore.getPoolType(),
>> cmd.get_iScsiName());*
>>
>> *            }*
>>
>>             return new DettachAnswer(disk);
>>
>>         } catch (LibvirtException e) {
>>
>>             s_logger.debug("Failed to attach volume: " + vol.getPath() + ",
>> due to " + e.toString());
>>
>>             return new DettachAnswer(e.toString());
>>
>>         } catch (InternalErrorException e) {
>>
>>             s_logger.debug("Failed to attach volume: " + vol.getPath() + ",
>> due to " + e.toString());
>>
>>             return new DettachAnswer(e.toString());
>>
>>         }
>>
>>     }
>>
>>
>> On Wed, Sep 18, 2013 at 1:07 PM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> Hey Marcus,
>>>
>>> What do you think about something relatively simple like this (below)? It
>>> parallels the XenServer and VMware code nicely.
>>>
>>> The idea is to only deal with the AttachVolumeCommand.
>>>
>>> If we are attaching a volume AND the underlying storage is so-called
>>> managed storage, at this point I invoke createStoragePool to create my
>>> iScsiAdmStoragePool object, establish a connection with the LUN, and
>>> prepare a KVMPhysicalDisk object, which will be requested a bit later
>>> during the actual attach.
>>>
>>> If we are detaching a volume AND the underlying storage is managed, the
>>> KVMStoragePool already exists, so we don't have to do anything special
>>> until after the volume is detached. At this point, we delete the storage
>>> pool (remove the iSCSI connection to the LUN and remove the reference to
>>> the iScsiAdmStoragePool from my adaptor).
>>>
>>>     private AttachVolumeAnswer execute(AttachVolumeCommand cmd) {
>>>
>>>         try {
>>>
>>>             Connect conn =
>>> LibvirtConnection.getConnectionByVmName(cmd.getVmName());
>>>
>>> *            if (cmd.getAttach() && cmd.isManaged()) {*
>>>
>>> *                _storagePoolMgr.createStoragePool(cmd.get_iScsiName(),
>>> cmd.getStorageHost(), cmd.getStoragePort(),*
>>>
>>> *                        cmd.getVolumePath(), null, cmd.getPooltype());*
>>>
>>> *            }*
>>>
>>>             KVMStoragePool primary = _storagePoolMgr.getStoragePool(
>>>
>>>                     cmd.getPooltype(),
>>>
>>>                     cmd.getPoolUuid());
>>>
>>>             KVMPhysicalDisk disk =
>>> primary.getPhysicalDisk(cmd.getVolumePath());
>>>
>>>             attachOrDetachDisk(conn, cmd.getAttach(), cmd.getVmName(),
>>> disk,
>>>
>>>                     cmd.getDeviceId().intValue(), cmd.getBytesReadRate(),
>>> cmd.getBytesWriteRate(), cmd.getIopsReadRate(), cmd.getIopsWriteRate());
>>>
>>> *            if (!cmd.getAttach() && cmd.isManaged()) {*
>>>
>>> *                _storagePoolMgr.deleteStoragePool(cmd.getPooltype(),
>>> cmd.get_iScsiName());*
>>>
>>> *            }*
>>>
>>>         } catch (LibvirtException e) {
>>>
>>>             return new AttachVolumeAnswer(cmd, e.toString());
>>>
>>>         } catch (InternalErrorException e) {
>>>
>>>             return new AttachVolumeAnswer(cmd, e.toString());
>>>
>>>         }
>>>
>>>
>>>         return new AttachVolumeAnswer(cmd, cmd.getDeviceId(),
>>> cmd.getVolumePath());
>>>
>>>     }
>>>
>>>
>>> On Wed, Sep 18, 2013 at 11:27 AM, Mike Tutkowski <
>>> mike.tutkowski@solidfire.com> wrote:
>>>
>>>> Since a KVMStoragePool returns a capacity, used, and available number of
>>>> bytes, I will probably need to look into having this information ignored if
>>>> the storage_pool in question is "managed" as - in my case - it wouldn't
>>>> really make any sense.
>>>>
>>>>
>>>> On Wed, Sep 18, 2013 at 10:53 AM, Mike Tutkowski <
>>>> mike.tutkowski@solidfire.com> wrote:
>>>>
>>>>> Sure, sounds good.
>>>>>
>>>>> Right now there are only two storage plug-ins: Edison's default plug-in
>>>>> and the SolidFire plug-in.
>>>>>
>>>>> As an example, when createAsync is called in the plug-in, mine creates
>>>>> a new volume (LUN) on the SAN with a capacity and number of Min, Max, and
>>>>> Burst IOPS. Edison's sends a command to the hypervisor to take a chunk out
>>>>> of preallocated storage for a new volume (like create a new VDI in an
>>>>> existing SR).
>>>>>
>>>>>
>>>>> On Wed, Sep 18, 2013 at 10:49 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>>>
>>>>>> That wasn't my question, but I feel we're getting off in the weeds and
>>>>>> I can just look at the storage framework to see how it works and what
>>>>>> options it supports.
>>>>>>
>>>>>> On Wed, Sep 18, 2013 at 10:44 AM, Mike Tutkowski
>>>>>> <mi...@solidfire.com> wrote:
>>>>>> > At the time being, I am not aware of any other storage vendor with
>>>>>> truly
>>>>>> > guaranteed QoS.
>>>>>> >
>>>>>> > Most implement QoS in a relative sense (like thread priorities).
>>>>>> >
>>>>>> >
>>>>>> > On Wed, Sep 18, 2013 at 7:57 AM, Marcus Sorensen <
>>>>>> shadowsor@gmail.com>wrote:
>>>>>> >
>>>>>> >> Yeah, that's why I thought it was specific to your implementation.
>>>>>> Perhaps
>>>>>> >> that's true, then?
>>>>>> >> On Sep 18, 2013 12:04 AM, "Mike Tutkowski" <
>>>>>> mike.tutkowski@solidfire.com>
>>>>>> >> wrote:
>>>>>> >>
>>>>>> >> > I totally get where you're coming from with the tiered-pool
>>>>>> approach,
>>>>>> >> > though.
>>>>>> >> >
>>>>>> >> > Prior to SolidFire, I worked at HP and the product I worked on
>>>>>> allowed a
>>>>>> >> > single, clustered SAN to host multiple pools of storage. One pool
>>>>>> might
>>>>>> >> be
>>>>>> >> > made up of all-SSD storage nodes while another pool might be made
>>>>>> up of
>>>>>> >> > slower HDDs.
>>>>>> >> >
>>>>>> >> > That kind of tiering is not what SolidFire QoS is about, though,
>>>>>> as that
>>>>>> >> > kind of tiering does not guarantee QoS.
>>>>>> >> >
>>>>>> >> > In the SolidFire SAN, QoS was designed in from the beginning and
>>>>>> is
>>>>>> >> > extremely granular. Each volume has its own performance and
>>>>>> capacity. You
>>>>>> >> > do not have to worry about Noisy Neighbors.
>>>>>> >> >
>>>>>> >> > The idea is to encourage businesses to trust the cloud with their
>>>>>> most
>>>>>> >> > critical business applications at a price point on par with
>>>>>> traditional
>>>>>> >> > SANs.
>>>>>> >> >
>>>>>> >> >
>>>>>> >> > On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski <
>>>>>> >> > mike.tutkowski@solidfire.com> wrote:
>>>>>> >> >
>>>>>> >> > > Ah, I think I see the miscommunication.
>>>>>> >> > >
>>>>>> >> > > I should have gone into a bit more detail about the SolidFire
>>>>>> SAN.
>>>>>> >> > >
>>>>>> >> > > It is built from the ground up to support QoS on a LUN-by-LUN
>>>>>> basis.
>>>>>> >> > Every
>>>>>> >> > > LUN is assigned a Min, Max, and Burst number of IOPS.
>>>>>> >> > >
>>>>>> >> > > The Min IOPS are a guaranteed number (as long as the SAN itself
>>>>>> is not
>>>>>> >> > > over provisioned). Capacity and IOPS are provisioned
>>>>>> independently.
>>>>>> >> > > Multiple volumes and multiple tenants using the same SAN do not
>>>>>> suffer
>>>>>> >> > from
>>>>>> >> > > the Noisy Neighbor effect.
>>>>>> >> > >
>>>>>> >> > > When you create a Disk Offering in CS that is storage tagged to
>>>>>> use
>>>>>> >> > > SolidFire primary storage, you specify a Min, Max, and Burst
>>>>>> number of
>>>>>> >> > IOPS
>>>>>> >> > > to provision from the SAN for volumes created from that Disk
>>>>>> Offering.
>>>>>> >> > >
>>>>>> >> > > There is no notion of RAID groups that you see in more
>>>>>> traditional
>>>>>> >> SANs.
>>>>>> >> > > The SAN is built from clusters of storage nodes and data is
>>>>>> replicated
>>>>>> >> > > amongst all SSDs in all storage nodes (this is an SSD-only SAN)
>>>>>> in the
>>>>>> >> > > cluster to avoid hot spots and protect the data should a drives
>>>>>> and/or
>>>>>> >> > > nodes fail. You then scale the SAN by adding new storage nodes.
>>>>>> >> > >
>>>>>> >> > > Data is compressed and de-duplicated inline across the cluster
>>>>>> and all
>>>>>> >> > > volumes are thinly provisioned.
>>>>>> >> > >
>>>>>> >> > >
>>>>>> >> > > On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <
>>>>>> shadowsor@gmail.com
>>>>>> >> > >wrote:
>>>>>> >> > >
>>>>>> >> > >> I'm surprised there's no mention of pool on the SAN in your
>>>>>> >> description
>>>>>> >> > of
>>>>>> >> > >> the framework. I had assumed this was specific to your
>>>>>> implementation,
>>>>>> >> > >> because normally SANs host multiple disk pools, maybe multiple
>>>>>> RAID
>>>>>> >> 50s
>>>>>> >> > >> and
>>>>>> >> > >> 10s, or however the SAN admin wants to split it up. Maybe a
>>>>>> pool
>>>>>> >> > intended
>>>>>> >> > >> for root disks and a separate one for data disks. Or one pool
>>>>>> for
>>>>>> >> > >> cloudstack and one dedicated to some other internal db
>>>>>> application.
>>>>>> >> But
>>>>>> >> > it
>>>>>> >> > >> sounds as though there's no place to specify which disks or
>>>>>> pool on
>>>>>> >> the
>>>>>> >> > >> SAN
>>>>>> >> > >> to use.
>>>>>> >> > >>
>>>>>> >> > >> We implemented our own internal storage SAN plugin based on
>>>>>> 4.1. We
>>>>>> >> used
>>>>>> >> > >> the 'path' attribute of the primary storage pool object to
>>>>>> specify
>>>>>> >> which
>>>>>> >> > >> pool name on the back end SAN to use, so we could create
>>>>>> all-ssd pools
>>>>>> >> > and
>>>>>> >> > >> slower spindle pools, then differentiate between them based on
>>>>>> storage
>>>>>> >> > >> tags. Normally the path attribute would be the mount point for
>>>>>> NFS,
>>>>>> >> but
>>>>>> >> > >> its
>>>>>> >> > >> just a string. So when registering ours we enter San dns host
>>>>>> name,
>>>>>> >> the
>>>>>> >> > >> san's rest api port, and the pool name. Then luns created from
>>>>>> that
>>>>>> >> > >> primary
>>>>>> >> > >> storage come from the matching disk pool on the SAN. We can
>>>>>> create and
>>>>>> >> > >> register multiple pools of different types and purposes on the
>>>>>> same
>>>>>> >> SAN.
>>>>>> >> > >> We
>>>>>> >> > >> haven't yet gotten to porting it to the 4.2 frame work, so it
>>>>>> will be
>>>>>> >> > >> interesting to see what we can come up with to make it work
>>>>>> similarly.
>>>>>> >> > >>  On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <
>>>>>> >> > mike.tutkowski@solidfire.com
>>>>>> >> > >> >
>>>>>> >> > >> wrote:
>>>>>> >> > >>
>>>>>> >> > >> > What you're saying here is definitely something we should
>>>>>> talk
>>>>>> >> about.
>>>>>> >> > >> >
>>>>>> >> > >> > Hopefully my previous e-mail has clarified how this works a
>>>>>> bit.
>>>>>> >> > >> >
>>>>>> >> > >> > It mainly comes down to this:
>>>>>> >> > >> >
>>>>>> >> > >> > For the first time in CS history, primary storage is no
>>>>>> longer
>>>>>> >> > required
>>>>>> >> > >> to
>>>>>> >> > >> > be preallocated by the admin and then handed to CS. CS
>>>>>> volumes don't
>>>>>> >> > >> have
>>>>>> >> > >> > to share a preallocated volume anymore.
>>>>>> >> > >> >
>>>>>> >> > >> > As of 4.2, primary storage can be based on a SAN (or some
>>>>>> other
>>>>>> >> > storage
>>>>>> >> > >> > device). You can tell CS how many bytes and IOPS to use from
>>>>>> this
>>>>>> >> > >> storage
>>>>>> >> > >> > device and CS invokes the appropriate plug-in to carve out
>>>>>> LUNs
>>>>>> >> > >> > dynamically.
>>>>>> >> > >> >
>>>>>> >> > >> > Each LUN is home to one and only one data disk. Data disks -
>>>>>> in this
>>>>>> >> > >> model
>>>>>> >> > >> > - never share a LUN.
>>>>>> >> > >> >
>>>>>> >> > >> > The main use case for this is so a CS volume can deliver
>>>>>> guaranteed
>>>>>> >> > >> IOPS if
>>>>>> >> > >> > the storage device (ex. SolidFire SAN) delivers guaranteed
>>>>>> IOPS on a
>>>>>> >> > >> > LUN-by-LUN basis.
>>>>>> >> > >> >
>>>>>> >> > >> >
>>>>>> >> > >> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <
>>>>>> >> > shadowsor@gmail.com
>>>>>> >> > >> > >wrote:
>>>>>> >> > >> >
>>>>>> >> > >> > > I guess whether or not a solidfire device is capable of
>>>>>> hosting
>>>>>> >> > >> > > multiple disk pools is irrelevant, we'd hope that we could
>>>>>> get the
>>>>>> >> > >> > > stats (maybe 30TB availabie, and 15TB allocated in LUNs).
>>>>>> But if
>>>>>> >> > these
>>>>>> >> > >> > > stats aren't collected, I can't as an admin define
>>>>>> multiple pools
>>>>>> >> > and
>>>>>> >> > >> > > expect cloudstack to allocate evenly from them or fill one
>>>>>> up and
>>>>>> >> > move
>>>>>> >> > >> > > to the next, because it doesn't know how big it is.
>>>>>> >> > >> > >
>>>>>> >> > >> > > Ultimately this discussion has nothing to do with the KVM
>>>>>> stuff
>>>>>> >> > >> > > itself, just a tangent, but something to think about.
>>>>>> >> > >> > >
>>>>>> >> > >> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <
>>>>>> >> > >> shadowsor@gmail.com>
>>>>>> >> > >> > > wrote:
>>>>>> >> > >> > > > Ok, on most storage pools it shows how many GB free/used
>>>>>> when
>>>>>> >> > >> listing
>>>>>> >> > >> > > > the pool both via API and in the UI. I'm guessing those
>>>>>> are
>>>>>> >> empty
>>>>>> >> > >> then
>>>>>> >> > >> > > > for the solid fire storage, but it seems like the user
>>>>>> should
>>>>>> >> have
>>>>>> >> > >> to
>>>>>> >> > >> > > > define some sort of pool that the luns get carved out
>>>>>> of, and
>>>>>> >> you
>>>>>> >> > >> > > > should be able to get the stats for that, right? Or is a
>>>>>> solid
>>>>>> >> > fire
>>>>>> >> > >> > > > appliance only one pool per appliance? This isn't about
>>>>>> billing,
>>>>>> >> > but
>>>>>> >> > >> > > > just so cloudstack itself knows whether or not there is
>>>>>> space
>>>>>> >> left
>>>>>> >> > >> on
>>>>>> >> > >> > > > the storage device, so cloudstack can go on allocating
>>>>>> from a
>>>>>> >> > >> > > > different primary storage as this one fills up. There
>>>>>> are also
>>>>>> >> > >> > > > notifications and things. It seems like there should be
>>>>>> a call
>>>>>> >> you
>>>>>> >> > >> can
>>>>>> >> > >> > > > handle for this, maybe Edison knows.
>>>>>> >> > >> > > >
>>>>>> >> > >> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <
>>>>>> >> > >> shadowsor@gmail.com>
>>>>>> >> > >> > > wrote:
>>>>>> >> > >> > > >> You respond to more than attach and detach, right?
>>>>>> Don't you
>>>>>> >> > create
>>>>>> >> > >> > > luns as
>>>>>> >> > >> > > >> well? Or are you just referring to the hypervisor stuff?
>>>>>> >> > >> > > >>
>>>>>> >> > >> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
>>>>>> >> > >> > mike.tutkowski@solidfire.com
>>>>>> >> > >> > > >
>>>>>> >> > >> > > >> wrote:
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> Hi Marcus,
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> I never need to respond to a CreateStoragePool call
>>>>>> for either
>>>>>> >> > >> > > XenServer
>>>>>> >> > >> > > >>> or
>>>>>> >> > >> > > >>> VMware.
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> What happens is I respond only to the Attach- and
>>>>>> >> Detach-volume
>>>>>> >> > >> > > commands.
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> Let's say an attach comes in:
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> In this case, I check to see if the storage is
>>>>>> "managed."
>>>>>> >> > Talking
>>>>>> >> > >> > > >>> XenServer
>>>>>> >> > >> > > >>> here, if it is, I log in to the LUN that is the disk
>>>>>> we want
>>>>>> >> to
>>>>>> >> > >> > attach.
>>>>>> >> > >> > > >>> After, if this is the first time attaching this disk,
>>>>>> I create
>>>>>> >> > an
>>>>>> >> > >> SR
>>>>>> >> > >> > > and a
>>>>>> >> > >> > > >>> VDI within the SR. If it is not the first time
>>>>>> attaching this
>>>>>> >> > >> disk,
>>>>>> >> > >> > the
>>>>>> >> > >> > > >>> LUN
>>>>>> >> > >> > > >>> already has the SR and VDI on it.
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> Once this is done, I let the normal "attach" logic run
>>>>>> because
>>>>>> >> > >> this
>>>>>> >> > >> > > logic
>>>>>> >> > >> > > >>> expected an SR and a VDI and now it has it.
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> It's the same thing for VMware: Just substitute
>>>>>> datastore for
>>>>>> >> SR
>>>>>> >> > >> and
>>>>>> >> > >> > > VMDK
>>>>>> >> > >> > > >>> for VDI.
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> Does that make sense?
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> Thanks!
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
>>>>>> >> > >> > > >>> <sh...@gmail.com>wrote:
>>>>>> >> > >> > > >>>
>>>>>> >> > >> > > >>> > What do you do with Xen? I imagine the user enter
>>>>>> the SAN
>>>>>> >> > >> details
>>>>>> >> > >> > > when
>>>>>> >> > >> > > >>> > registering the pool? A the pool details are
>>>>>> basically just
>>>>>> >> > >> > > instructions
>>>>>> >> > >> > > >>> > on
>>>>>> >> > >> > > >>> > how to log into a target, correct?
>>>>>> >> > >> > > >>> >
>>>>>> >> > >> > > >>> > You can choose to log in a KVM host to the target
>>>>>> during
>>>>>> >> > >> > > >>> > createStoragePool
>>>>>> >> > >> > > >>> > and save the pool in a map, or just save the pool
>>>>>> info in a
>>>>>> >> > map
>>>>>> >> > >> for
>>>>>> >> > >> > > >>> > future
>>>>>> >> > >> > > >>> > reference by uuid, for when you do need to log in.
>>>>>> The
>>>>>> >> > >> > > createStoragePool
>>>>>> >> > >> > > >>> > then just becomes a way to save the pool info to the
>>>>>> agent.
>>>>>> >> > >> > > Personally,
>>>>>> >> > >> > > >>> > I'd
>>>>>> >> > >> > > >>> > log in on the pool create and look/scan for specific
>>>>>> luns
>>>>>> >> when
>>>>>> >> > >> > > they're
>>>>>> >> > >> > > >>> > needed, but I haven't thought it through thoroughly.
>>>>>> I just
>>>>>> >> > say
>>>>>> >> > >> > that
>>>>>> >> > >> > > >>> > mainly
>>>>>> >> > >> > > >>> > because login only happens once, the first time the
>>>>>> pool is
>>>>>> >> > >> used,
>>>>>> >> > >> > and
>>>>>> >> > >> > > >>> > every
>>>>>> >> > >> > > >>> > other storage command is about discovering new luns
>>>>>> or maybe
>>>>>> >> > >> > > >>> > deleting/disconnecting luns no longer needed. On the
>>>>>> other
>>>>>> >> > hand,
>>>>>> >> > >> > you
>>>>>> >> > >> > > >>> > could
>>>>>> >> > >> > > >>> > do all of the above: log in on pool create, then
>>>>>> also check
>>>>>> >> if
>>>>>> >> > >> > you're
>>>>>> >> > >> > > >>> > logged in on other commands and log in if you've lost
>>>>>> >> > >> connection.
>>>>>> >> > >> > > >>> >
>>>>>> >> > >> > > >>> > With Xen, what does your registered pool   show in
>>>>>> the UI
>>>>>> >> for
>>>>>> >> > >> > > avail/used
>>>>>> >> > >> > > >>> > capacity, and how does it get that info? I assume
>>>>>> there is
>>>>>> >> > some
>>>>>> >> > >> > sort
>>>>>> >> > >> > > of
>>>>>> >> > >> > > >>> > disk pool that the luns are carved from, and that
>>>>>> your
>>>>>> >> plugin
>>>>>> >> > is
>>>>>> >> > >> > > called
>>>>>> >> > >> > > >>> > to
>>>>>> >> > >> > > >>> > talk to the SAN and expose to the user how much of
>>>>>> that pool
>>>>>> >> > has
>>>>>> >> > >> > been
>>>>>> >> > >> > > >>> > allocated. Knowing how you already solves these
>>>>>> problems
>>>>>> >> with
>>>>>> >> > >> Xen
>>>>>> >> > >> > > will
>>>>>> >> > >> > > >>> > help
>>>>>> >> > >> > > >>> > figure out what to do with KVM.
>>>>>> >> > >> > > >>> >
>>>>>> >> > >> > > >>> > If this is the case, I think the plugin can continue
>>>>>> to
>>>>>> >> handle
>>>>>> >> > >> it
>>>>>> >> > >> > > rather
>>>>>> >> > >> > > >>> > than getting details from the agent. I'm not sure if
>>>>>> that
>>>>>> >> > means
>>>>>> >> > >> > nulls
>>>>>> >> > >> > > >>> > are
>>>>>> >> > >> > > >>> > OK for these on the agent side or what, I need to
>>>>>> look at
>>>>>> >> the
>>>>>> >> > >> > storage
>>>>>> >> > >> > > >>> > plugin arch more closely.
>>>>>> >> > >> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
>>>>>> >> > >> > > mike.tutkowski@solidfire.com>
>>>>>> >> > >> > > >>> > wrote:
>>>>>> >> > >> > > >>> >
>>>>>> >> > >> > > >>> > > Hey Marcus,
>>>>>> >> > >> > > >>> > >
>>>>>> >> > >> > > >>> > > I'm reviewing your e-mails as I implement the
>>>>>> necessary
>>>>>> >> > >> methods
>>>>>> >> > >> > in
>>>>>> >> > >> > > new
>>>>>> >> > >> > > >>> > > classes.
>>>>>> >> > >> > > >>> > >
>>>>>> >> > >> > > >>> > > "So, referencing StorageAdaptor.java,
>>>>>> createStoragePool
>>>>>> >> > >> accepts
>>>>>> >> > >> > > all of
>>>>>> >> > >> > > >>> > > the pool data (host, port, name, path) which would
>>>>>> be used
>>>>>> >> > to
>>>>>> >> > >> log
>>>>>> >> > >> > > the
>>>>>> >> > >> > > >>> > > host into the initiator."
>>>>>> >> > >> > > >>> > >
>>>>>> >> > >> > > >>> > > Can you tell me, in my case, since a storage pool
>>>>>> (primary
>>>>>> >> > >> > > storage) is
>>>>>> >> > >> > > >>> > > actually the SAN, I wouldn't really be logging into
>>>>>> >> anything
>>>>>> >> > >> at
>>>>>> >> > >> > > this
>>>>>> >> > >> > > >>> > point,
>>>>>> >> > >> > > >>> > > correct?
>>>>>> >> > >> > > >>> > >
>>>>>> >> > >> > > >>> > > Also, what kind of capacity, available, and used
>>>>>> bytes
>>>>>> >> make
>>>>>> >> > >> sense
>>>>>> >> > >> > > to
>>>>>> >> > >> > > >>> > report
>>>>>> >> > >> > > >>> > > for KVMStoragePool (since KVMStoragePool
>>>>>> represents the
>>>>>> >> SAN
>>>>>> >> > >> in my
>>>>>> >> > >> > > case
>>>>>> >> > >> > > >>> > and
>>>>>> >> > >> > > >>> > > not an individual LUN)?
>>>>>> >> > >> > > >>> > >
>>>>>> >> > >> > > >>> > > Thanks!
>>>>>> >> > >> > > >>> > >

On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com>
wrote:

Ok, KVM will be close to that, of course, because only the hypervisor
classes differ, the rest is all mgmt server. Creating a volume is just a
db entry until it's deployed for the first time. AttachVolumeCommand on
the agent side (LibvirtStorageAdaptor.java is analogous to
CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
StorageAdaptor) to log in the host to the target and then you have a
block device. Maybe libvirt will do that for you, but my quick read made
it sound like the iscsi libvirt pool type is actually a pool, not a lun
or volume, so you'll need to figure out if that works or if you'll have
to use iscsiadm commands.

If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
doesn't really manage your pool the way you want), you're going to have
to create a version of KVMStoragePool class and a StorageAdaptor class
(see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
implementing all of the methods, then in KVMStorageManager.java there's
a "_storageMapper" map. This is used to select the correct adaptor, you
can see in this file that every call first pulls the correct adaptor out
of this map via getStorageAdaptor. So you can see a comment in this file
that says "add other storage adaptors here", where it puts to this map,
this is where you'd register your adaptor.
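
For reference, the registration step described above could look roughly
like this. StorageAdaptor is the existing agent-side interface; the
registry class, map key, and method names below just follow the
description in this mail and may not match KVMStorageManager.java
exactly:

    import java.util.HashMap;
    import java.util.Map;

    // Rough sketch of the "_storageMapper" idea: one map from a pool-type
    // key to a StorageAdaptor, consulted at the start of every storage call.
    public class StorageAdaptorRegistrySketch {
        private final Map<String, StorageAdaptor> _storageMapper = new HashMap<String, StorageAdaptor>();

        public void registerAdaptor(String poolType, StorageAdaptor adaptor) {
            // "add other storage adaptors here"
            _storageMapper.put(poolType, adaptor);
        }

        public StorageAdaptor getStorageAdaptor(String poolType) {
            StorageAdaptor adaptor = _storageMapper.get(poolType);
            // fall back to the libvirt-backed adaptor for anything not overridden
            return adaptor != null ? adaptor : _storageMapper.get("libvirt");
        }
    }
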

So, referencing StorageAdaptor.java, createStoragePool accepts all of
the pool data (host, port, name, path) which would be used to log the
host into the initiator. I *believe* the method getPhysicalDisk will
need to do the work of attaching the lun. AttachVolumeCommand calls this
and then creates the XML diskdef and attaches it to the VM. Now, one
thing you need to know is that createStoragePool is called often,
sometimes just to make sure the pool is there. You may want to create a
map in your adaptor class and keep track of pools that have been
created; LibvirtStorageAdaptor doesn't have to do this because it asks
libvirt about which storage pools exist. There are also calls to refresh
the pool stats, and all of the other calls can be seen in the
StorageAdaptor as well. There's a createPhysicalDisk, clone, etc, but
it's probably a hold-over from 4.1, as I have the vague idea that
volumes are created on the mgmt server via the plugin now, so whatever
doesn't apply can just be stubbed out (or optionally
extended/reimplemented here, if you don't mind the hosts talking to the
san api).
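
Since createStoragePool gets called repeatedly, the adaptor can simply
remember what it has already set up. A bare-bones sketch of that point;
KVMStoragePool and StorageAdaptor are the existing agent types, and
everything else here is illustrative only:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of tracking pools in the adaptor itself, since libvirt isn't
    // doing it for us: repeated createStoragePool calls for the same pool
    // become cheap lookups instead of re-doing any work.
    public abstract class TrackedStorageAdaptorSketch {
        private final Map<String, KVMStoragePool> _pools = new HashMap<String, KVMStoragePool>();

        // However the plug-in actually builds its pool object (SAN host,
        // port, credentials, etc.) is hidden behind this method.
        protected abstract KVMStoragePool buildPool(String uuid, String host, int port, String path);

        public KVMStoragePool createStoragePool(String uuid, String host, int port, String path) {
            KVMStoragePool pool = _pools.get(uuid);
            if (pool != null) {
                return pool; // called again just to make sure the pool is there
            }
            pool = buildPool(uuid, host, port, path);
            _pools.put(uuid, pool);
            return pool;
        }

        public KVMStoragePool getStoragePool(String uuid) {
            return _pools.get(uuid);
        }
    }
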
There is a difference between attaching new volumes and launching a VM
with existing volumes. In the latter case, the VM definition that was
passed to the KVM agent includes the disks (StartCommand).

I'd be interested in how your pool is defined for Xen, I imagine it
would need to be kept the same. Is it just a definition to the SAN (ip
address or some such, port number) and perhaps a volume pool name?

> If there is a way for me to update the ACL list on the SAN to have
> only a single KVM host have access to the volume, that would be ideal.

That depends on your SAN API. I was under the impression that the
storage plugin framework allowed for acls, or for you to do whatever you
want for create/attach/delete/snapshot, etc. You'd just call your SAN
API with the host info for the ACLs prior to when the disk is attached
(or the VM is started). I'd have to look more at the framework to know
the details; in 4.1 I would do this in getPhysicalDisk just prior to
connecting up the LUN.

>>>>>> >> > >> > > >>> > > >
>>>>>> >> > >> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>>>>> >> > >> > > >>> > > > <mi...@solidfire.com> wrote:
>>>>>> >> > >> > > >>> > > > > OK, yeah, the ACL part will be interesting.
>>>>>> That is a
>>>>>> >> > bit
>>>>>> >> > >> > > >>> > > > > different
>>>>>> >> > >> > > >>> > > from
>>>>>> >> > >> > > >>> > > > how
>>>>>> >> > >> > > >>> > > > > it works with XenServer and VMware.
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > Just to give you an idea how it works in 4.2
>>>>>> with
>>>>>> >> > >> XenServer:
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > * The user creates a CS volume (this is just
>>>>>> recorded
>>>>>> >> in
>>>>>> >> > >> the
>>>>>> >> > >> > > >>> > > > cloud.volumes
>>>>>> >> > >> > > >>> > > > > table).
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > * The user attaches the volume as a disk to a
>>>>>> VM for
>>>>>> >> the
>>>>>> >> > >> > first
>>>>>> >> > >> > > >>> > > > > time
>>>>>> >> > >> > > >>> > (if
>>>>>> >> > >> > > >>> > > > the
>>>>>> >> > >> > > >>> > > > > storage allocator picks the SolidFire plug-in,
>>>>>> the
>>>>>> >> > storage
>>>>>> >> > >> > > >>> > > > > framework
>>>>>> >> > >> > > >>> > > > invokes
>>>>>> >> > >> > > >>> > > > > a method on the plug-in that creates a volume
>>>>>> on the
>>>>>> >> > >> > SAN...info
>>>>>> >> > >> > > >>> > > > > like
>>>>>> >> > >> > > >>> > > the
>>>>>> >> > >> > > >>> > > > IQN
>>>>>> >> > >> > > >>> > > > > of the SAN volume is recorded in the DB).
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > * CitrixResourceBase's
>>>>>> execute(AttachVolumeCommand) is
>>>>>> >> > >> > > executed.
>>>>>> >> > >> > > >>> > > > > It
>>>>>> >> > >> > > >>> > > > > determines based on a flag passed in that the
>>>>>> storage
>>>>>> >> in
>>>>>> >> > >> > > question
>>>>>> >> > >> > > >>> > > > > is
>>>>>> >> > >> > > >>> > > > > "CloudStack-managed" storage (as opposed to
>>>>>> >> > "traditional"
>>>>>> >> > >> > > >>> > preallocated
>>>>>> >> > >> > > >>> > > > > storage). This tells it to discover the iSCSI
>>>>>> target.
>>>>>> >> > Once
>>>>>> >> > >> > > >>> > > > > discovered
>>>>>> >> > >> > > >>> > > it
>>>>>> >> > >> > > >>> > > > > determines if the iSCSI target already
>>>>>> contains a
>>>>>> >> > storage
>>>>>> >> > >> > > >>> > > > > repository
>>>>>> >> > >> > > >>> > > (it
>>>>>> >> > >> > > >>> > > > > would if this were a re-attach situation). If
>>>>>> it does
>>>>>> >> > >> contain
>>>>>> >> > >> > > an
>>>>>> >> > >> > > >>> > > > > SR
>>>>>> >> > >> > > >>> > > > already,
>>>>>> >> > >> > > >>> > > > > then there should already be one VDI, as well.
>>>>>> If
>>>>>> >> there
>>>>>> >> > >> is no
>>>>>> >> > >> > > SR,
>>>>>> >> > >> > > >>> > > > > an
>>>>>> >> > >> > > >>> > SR
>>>>>> >> > >> > > >>> > > > is
>>>>>> >> > >> > > >>> > > > > created and a single VDI is created within it
>>>>>> (that
>>>>>> >> > takes
>>>>>> >> > >> up
>>>>>> >> > >> > > about
>>>>>> >> > >> > > >>> > > > > as
>>>>>> >> > >> > > >>> > > > much
>>>>>> >> > >> > > >>> > > > > space as was requested for the CloudStack
>>>>>> volume).
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > * The normal attach-volume logic continues (it
>>>>>> depends
>>>>>> >> > on
>>>>>> >> > >> the
>>>>>> >> > >> > > >>> > existence
>>>>>> >> > >> > > >>> > > > of
>>>>>> >> > >> > > >>> > > > > an SR and a VDI).
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > The VMware case is essentially the same
>>>>>> (mainly just
>>>>>> >> > >> > substitute
>>>>>> >> > >> > > >>> > > datastore
>>>>>> >> > >> > > >>> > > > > for SR and VMDK for VDI).
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > In both cases, all hosts in the cluster have
>>>>>> >> discovered
>>>>>> >> > >> the
>>>>>> >> > >> > > iSCSI
>>>>>> >> > >> > > >>> > > target,
>>>>>> >> > >> > > >>> > > > > but only the host that is currently running
>>>>>> the VM
>>>>>> >> that
>>>>>> >> > is
>>>>>> >> > >> > > using
>>>>>> >> > >> > > >>> > > > > the
>>>>>> >> > >> > > >>> > > VDI
>>>>>> >> > >> > > >>> > > > (or
>>>>>> >> > >> > > >>> > > > > VMKD) is actually using the disk.
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > Live Migration should be OK because the
>>>>>> hypervisors
>>>>>> >> > >> > communicate
>>>>>> >> > >> > > >>> > > > > with
>>>>>> >> > >> > > >>> > > > > whatever metadata they have on the SR (or
>>>>>> datastore).
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > I see what you're saying with KVM, though.
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > In that case, the hosts are clustered only in
>>>>>> >> > CloudStack's
>>>>>> >> > >> > > eyes.
>>>>>> >> > >> > > >>> > > > > CS
>>>>>> >> > >> > > >>> > > > controls
>>>>>> >> > >> > > >>> > > > > Live Migration. You don't really need a
>>>>>> clustered
>>>>>> >> > >> filesystem
>>>>>> >> > >> > on
>>>>>> >> > >> > > >>> > > > > the
>>>>>> >> > >> > > >>> > > LUN.
>>>>>> >> > >> > > >>> > > > The
>>>>>> >> > >> > > >>> > > > > LUN could be handed over raw to the VM using
>>>>>> it.
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > If there is a way for me to update the ACL
>>>>>> list on the
>>>>>> >> > >> SAN to
>>>>>> >> > >> > > have
>>>>>> >> > >> > > >>> > > only a
>>>>>> >> > >> > > >>> > > > > single KVM host have access to the volume,
>>>>>> that would
>>>>>> >> be
>>>>>> >> > >> > ideal.
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > Also, I agree I'll need to use iscsiadm to
>>>>>> discover
>>>>>> >> and
>>>>>> >> > >> log
>>>>>> >> > >> > in
>>>>>> >> > >> > > to
>>>>>> >> > >> > > >>> > > > > the
>>>>>> >> > >> > > >>> > > > iSCSI
>>>>>> >> > >> > > >>> > > > > target. I'll also need to take the resultant
>>>>>> new
>>>>>> >> device
>>>>>> >> > >> and
>>>>>> >> > >> > > pass
>>>>>> >> > >> > > >>> > > > > it
>>>>>> >> > >> > > >>> > > into
>>>>>> >> > >> > > >>> > > > the
>>>>>> >> > >> > > >>> > > > > VM.
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > Does this sound reasonable? Please call me out
>>>>>> on
>>>>>> >> > >> anything I
>>>>>> >> > >> > > seem
>>>>>> >> > >> > > >>> > > > incorrect
>>>>>> >> > >> > > >>> > > > > about. :)
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > > Thanks for all the thought on this, Marcus!
>>>>>> >> > >> > > >>> > > > >
>>>>>> >> > >> > > >>> > > > >
On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <shadowsor@gmail.com>
wrote:

Perfect. You'll have a domain def (the VM), a disk def, and then attach
the disk def to the vm. You may need to do your own StorageAdaptor and
run iscsiadm commands to accomplish that, depending on how the libvirt
iscsi works. My impression is that a 1:1:1 pool/lun/volume isn't how it
works on xen at the moment, nor is it ideal.

Your plugin will handle acls as far as which host can see which luns as
well, I remember discussing that months ago, so that a disk won't be
connected until the hypervisor has exclusive access, so it will be safe
and fence the disk from rogue nodes that cloudstack loses connectivity
with. It should revoke access to everything but the target host...
Except for during migration, but we can discuss that later; there's a
migration prep process where the new host can be added to the acls, and
the old host can be removed post migration.

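
Just to make that sequencing concrete, a sketch of the order of
operations; SanApiClient and its two methods are hypothetical
placeholders for whatever the SolidFire API actually exposes, the only
point here is when each call happens relative to attach and migration:

    // Hypothetical stand-in for the SAN's ACL calls.
    interface SanApiClient {
        void addHostToVolumeAcl(String volumeIqn, String hostIqn);
        void removeHostFromVolumeAcl(String volumeIqn, String hostIqn);
    }

    // Fencing order: grant before attach/start; during migration add the
    // destination host first and drop the source host only after the
    // migration has completed.
    public class VolumeAccessSequenceSketch {
        private final SanApiClient san;

        public VolumeAccessSequenceSketch(SanApiClient san) {
            this.san = san;
        }

        public void beforeAttachOrStart(String volumeIqn, String hostIqn) {
            san.addHostToVolumeAcl(volumeIqn, hostIqn);
        }

        public void prepareMigration(String volumeIqn, String destHostIqn) {
            san.addHostToVolumeAcl(volumeIqn, destHostIqn);
        }

        public void finishMigration(String volumeIqn, String srcHostIqn) {
            san.removeHostFromVolumeAcl(volumeIqn, srcHostIqn);
        }
    }
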
On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com>
wrote:

Yeah, that would be ideal.

So, I would still need to discover the iSCSI target, log in to it, then
figure out what /dev/sdX was created as a result (and leave it as is -
do not format it with any file system...clustered or not). I would pass
that device into the VM.

Kind of accurate?

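
Roughly that sequence, as a sketch using the Open-iSCSI command line
from the agent; host, port and IQN are example values and error handling
is minimal:

    import java.io.IOException;

    // Discover the target, log in, and return a stable device path for LUN 0.
    public class IscsiConnectSketch {

        private static void run(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException(cmd[0] + " returned a non-zero exit status");
            }
        }

        public static String connect(String storageHost, int port, String iqn)
                throws IOException, InterruptedException {
            String portal = storageHost + ":" + port;
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
            // udev typically creates a persistent by-path name like this, which
            // is nicer to hand to the VM definition than a raw /dev/sdX name
            return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
        }

        public static void disconnect(String storageHost, int port, String iqn)
                throws IOException, InterruptedException {
            String portal = storageHost + ":" + port;
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--logout");
        }
    }
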
>>>>>> >> > >> > > >>> > > > >>>
>>>>>> >> > >> > > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus
>>>>>> Sorensen <
>>>>>> >> > >> > > >>> > > shadowsor@gmail.com>
>>>>>> >> > >> > > >>> > > > >>> wrote:
>>>>>> >> > >> > > >>> > > > >>>>
>>>>>> >> > >> > > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the
>>>>>> disk
>>>>>> >> > >> > > definitions.
>>>>>> >> > >> > > >>> > There
>>>>>> >> > >> > > >>> > > > are
>>>>>> >> > >> > > >>> > > > >>>> ones that work for block devices rather
>>>>>> than files.
>>>>>> >> > You
>>>>>> >> > >> > can
>>>>>> >> > >> > > >>> > > > >>>> piggy
>>>>>> >> > >> > > >>> > > > back off
>>>>>> >> > >> > > >>> > > > >>>> of the existing disk definitions and attach
>>>>>> it to
>>>>>> >> the
>>>>>> >> > >> vm
>>>>>> >> > >> > as
>>>>>> >> > >> > > a
>>>>>> >> > >> > > >>> > block
>>>>>> >> > >> > > >>> > > > device.
>>>>>> >> > >> > > >>> > > > >>>> The definition is an XML string per libvirt
>>>>>> XML
>>>>>> >> > format.
>>>>>> >> > >> > You
>>>>>> >> > >> > > may
>>>>>> >> > >> > > >>> > want
>>>>>> >> > >> > > >>> > > > to use
>>>>>> >> > >> > > >>> > > > >>>> an alternate path to the disk rather than
>>>>>> just
>>>>>> >> > /dev/sdx
>>>>>> >> > >> > > like I
>>>>>> >> > >> > > >>> > > > mentioned,
>>>>>> >> > >> > > >>> > > > >>>> there are by-id paths to the block devices,
>>>>>> as well
>>>>>> >> > as
>>>>>> >> > >> > other
>>>>>> >> > >> > > >>> > > > >>>> ones
>>>>>> >> > >> > > >>> > > > that will
>>>>>> >> > >> > > >>> > > > >>>> be consistent and easier for management,
>>>>>> not sure
>>>>>> >> how
>>>>>> >> > >> > > familiar
>>>>>> >> > >> > > >>> > > > >>>> you
>>>>>> >> > >> > > >>> > > > are with
>>>>>> >> > >> > > >>> > > > >>>> device naming on Linux.
>>>>>> >> > >> > > >>> > > > >>>>
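
For illustration, the kind of diskdef being described, built as a plain
XML string; the device path and target dev are example values, and in
the agent this would come out of the DiskDef classes in LibvirtVMDef.java
rather than being assembled by hand:

    // Raw block device handed straight to the guest, per libvirt's disk XML.
    public class BlockDiskDefSketch {
        public static String diskXml(String devicePath, String targetDev) {
            return "<disk type='block' device='disk'>\n"
                 + "  <driver name='qemu' type='raw' cache='none'/>\n"
                 + "  <source dev='" + devicePath + "'/>\n"
                 + "  <target dev='" + targetDev + "' bus='virtio'/>\n"
                 + "</disk>\n";
        }
    }

For example, diskXml("/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2013-09.com.example:vol1-lun-0", "vdb")
produces a definition that attaches the LUN to the guest as a virtio
disk.
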
On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <shadowsor@gmail.com> wrote:

No, as that would rely on virtualized network/iscsi initiator inside the
vm, which also sucks. I mean attach /dev/sdx (your lun on hypervisor) as
a disk to the VM, rather than attaching some image file that resides on
a filesystem, mounted on the host, living on a target.

Actually, if you plan on the storage supporting live migration I think
this is the only way. You can't put a filesystem on it and mount it in
two places to facilitate migration unless it's a clustered filesystem,
in which case you're back to shared mount point.

As far as I'm aware, the xenserver SR style is basically LVM with a xen
specific cluster management, a custom CLVM. They don't use a filesystem
either.

On Sep 13, 2013 7:44 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com>
wrote:

When you say, "wire up the lun directly to the vm," do you mean
circumventing the hypervisor? I didn't think we could do that in CS.
OpenStack, on the other hand, always circumvents the hypervisor, as far
as I know.


On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <shadowsor@gmail.com>
wrote:

Better to wire up the lun directly to the vm unless there is a good
reason not to.

On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <shadowsor@gmail.com> wrote:

You could do that, but as mentioned I think it's a mistake to go to the
trouble of creating a 1:1 mapping of CS volumes to luns and then putting
a filesystem on it, mounting it, and then putting a QCOW2 or even RAW
disk image on that filesystem. You'll lose a lot of iops along the way,
and have more overhead with the filesystem and its journaling, etc.

On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <shadowsor@gmail.com> wrote:

Ideally volume snapshots can be handled by the SAN back end, if the SAN
supports it. The cloudstack mgmt server could call your plugin for
volume snapshot and it would be hypervisor agnostic. As far as space,
that would depend on how your SAN handles it. With ours, we carve out
luns from a pool, and the snapshot space comes from the pool and is
independent of the LUN size the host sees.

On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com>
wrote:

Hey Marcus,

I wonder if the iSCSI storage pool type for libvirt won't work when you
take into consideration hypervisor snapshots?

On XenServer, when you take a hypervisor snapshot, the VDI for the
snapshot is placed on the same storage repository as the volume is on.

Same idea for VMware, I believe.

So, what would happen in my case (let's say for XenServer and VMware for
4.3 because I don't support hypervisor snapshots in 4.2) is I'd make an
iSCSI target that is larger than what the user requested for the
CloudStack volume (which is fine because our SAN thinly provisions
volumes, so the space is not actually used unless it needs to be). The
CloudStack volume would be the only "object" on the SAN volume until a
hypervisor snapshot is taken. This snapshot would also reside on the SAN
volume.

If this is also how KVM behaves and there is no creation of LUNs within
an iSCSI target from libvirt (which, even if there were support for
this, our SAN currently only allows one LUN per iSCSI target), then I
don't see how using this model will work.

Perhaps I will have to go enhance the current way this works with DIR?

What do you think?

Thanks

On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
<mike.tutkowski@solidfire.com> wrote:

That appears to be the way it's used for iSCSI access today.

I suppose I could go that route, too, but I might as well leverage what
libvirt has for iSCSI instead.

On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <shadowsor@gmail.com>
wrote:

To your question about SharedMountPoint, I believe it just acts like a
'DIR' storage type or something similar to that. The end-user is
responsible for mounting a file system that all KVM hosts can access,
and CloudStack is oblivious to what is providing the storage. It could
be NFS, or OCFS2, or some other clustered filesystem, cloudstack just
knows that the provided directory path has VM images.

On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <shadowsor@gmail.com>
wrote:

Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
Multiples, in fact.

On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
<mike.tutkowski@solidfire.com> wrote:

Looks like you can have multiple storage pools:

mtutkowski@ubuntu:~$ virsh pool-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
iSCSI                active     no

On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
<mike.tutkowski@solidfire.com> wrote:

Reading through the docs you pointed out.

I see what you're saying now.

You can create an iSCSI (libvirt) storage pool based on an iSCSI target.

In my case, the iSCSI target would only have one LUN, so there would
only be one iSCSI (libvirt) storage volume in the (libvirt) storage
pool.

As you say, my plug-in creates and destroys iSCSI targets/LUNs on the
SolidFire SAN, so it is not a problem that libvirt does not support
creating/deleting iSCSI targets/LUNs.

It looks like I need to test this a bit to see if libvirt supports
multiple iSCSI storage pools (as you mentioned, since each one of its
storage pools would map to one of my iSCSI targets/LUNs).

On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
<mike.tutkowski@solidfire.com> wrote:

LibvirtStoragePoolDef has this type:

    public enum poolType {
        ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"), DIR("dir"),
        RBD("rbd");

        String _poolType;

        poolType(String poolType) {
            _poolType = poolType;
        }

        @Override
        public String toString() {
            return _poolType;
        }
    }

It doesn't look like the iSCSI type is currently being used, but I'm
understanding more what you were getting at.

Can you tell me for today (say, 4.2), when someone selects the
SharedMountPoint option and uses it with iSCSI, is that the "netfs"
option above or is that just for NFS?

Thanks!

On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <shadowsor@gmail.com>
wrote:

Take a look at this:

http://libvirt.org/storage.html#StorageBackendISCSI

"Volumes must be pre-allocated on the iSCSI server, and cannot be
created via the libvirt APIs.", which I believe your plugin will take
care of. Libvirt just does the work of logging in and hooking it up to
the VM (I believe the Xen api does that work in the Xen stuff).

What I'm not sure about is whether this provides a 1:1 mapping, or if it
just allows you to register 1 iscsi device as a pool. You may need to
write some test code or read up a bit more about this. Let us know. If
it doesn't, you may just have to write your own storage adaptor rather
than changing LibvirtStorageAdaptor.java. We can cross that bridge when
we get there.

As far as interfacing with libvirt, see the java bindings doc.
http://libvirt.org/sources/java/javadoc/ Normally, you'll see a
connection object be made, then calls made to that 'conn' object. You
can look at the LibvirtStorageAdaptor to see how that is done for other
pool types, and maybe write some test java code to see if you can
interface with libvirt and register iscsi storage pools before you get
started.

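
A throwaway test along those lines, using the libvirt java bindings; the
host name and IQN are example values, and the pool XML follows the iscsi
pool format from the libvirt storage page linked above:

    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;

    // Define and start one libvirt iscsi pool for a single target, then
    // list the volumes (LUNs) libvirt finds in it.
    public class LibvirtIscsiPoolTest {
        public static void main(String[] args) throws LibvirtException {
            Connect conn = new Connect("qemu:///system");

            String poolXml =
                  "<pool type='iscsi'>"
                + "<name>sf-pool-test</name>"
                + "<source>"
                + "<host name='192.168.1.10'/>"
                + "<device path='iqn.2013-09.com.example:vol1'/>"
                + "</source>"
                + "<target><path>/dev/disk/by-path</path></target>"
                + "</pool>";

            StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);
            pool.refresh(0);

            for (String volumeName : pool.listVolumes()) {
                System.out.println("found volume: " + volumeName);
            }

            pool.destroy();
            conn.close();
        }
    }

Running this once per target (with a different pool name each time)
would answer the multiple-pools question directly.
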
On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
<mike.tutkowski@solidfire.com> wrote:

So, Marcus, I need to investigate libvirt more, but you figure it
supports connecting to/disconnecting from iSCSI targets, right?

On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
<mike.tutkowski@solidfire.com> wrote:

OK, thanks, Marcus

I am currently looking through some of the classes you pointed out last
week or so.

On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen <shadowsor@gmail.com>
wrote:

Yes, my guess is that you will need the iscsi initiator utilities
installed. There should be standard packages for any distro. Then you'd
call an agent storage adaptor to do the initiator login. See the info I
sent previously about LibvirtStorageAdaptor.java and libvirt iscsi
storage type to see if that fits your need.


Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Got most of my KVM code written to support managed storage (I believe).

I'm in the process of testing it, but I'm having a hard time adding a KVM host
to my CS management server.

I took a look at agent.log and it repeats the following lines many times:

2013-09-20 14:00:17,264 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to 192.168.233.1:8250
2013-09-20 14:00:17,265 WARN  [utils.nio.NioConnection]
(Agent-Selector:null) Unable to connect to remote: is there a server
running on port 8250

When I go over to the host running my management server (on
192.168.233.10), it appears it's listening on port 8250 (the CS MS is
running on Mac OS X 10.8.3):

mtutkowski-LT:~ mtutkowski$ sudo lsof -i -P | grep 8250
java      35404     mtutkowski  303u  IPv6 0x91ca095ebc7bd3eb      0t0
 TCP *:8250 (LISTEN)

The KVM host is on Ubuntu 12.04.1. I don't think any firewalls are in play.
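
For reference, here is roughly how I'm double-checking the agent's config and
connectivity from the KVM host (just a sketch; the property names come from
the standard agent.properties):

# which management server the agent is configured to talk to
grep -E '^(host|port)=' /etc/cloudstack/agent/agent.properties

# can the KVM host actually reach the management server's port?
nc -zv 192.168.233.10 8250

If host= there still points at 192.168.233.1 rather than 192.168.233.10, that
would line up with the NioClient line above.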

Any thoughts on this?

Thanks!


On Wed, Sep 18, 2013 at 1:51 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Actually, I may have put that code in a place that is never called
> nowadays.
>
> Do you know why we have AttachVolumeCommand in LibvirtComputingResource
> and AttachCommand/DetachCommand in KVMStorageProcessor?
>
> They seem like they do the same thing.
>
> If I recall, this change went in with 4.2 and it's the same story for
> XenServer and VMware.
>
> Maybe this location (in KVMStorageProcessor) makes more sense:
>
>     @Override
>
>     public Answer attachVolume(AttachCommand cmd) {
>
>         DiskTO disk = cmd.getDisk();
>
>         VolumeObjectTO vol = (VolumeObjectTO) disk.getData();
>
>         PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO)
> vol.getDataStore();
>
>         String vmName = cmd.getVmName();
>
>         try {
>
>             Connect conn = LibvirtConnection.getConnectionByVmName(vmName);
>
> *            if (cmd.isManaged()) {*
>
> *                storagePoolMgr.createStoragePool(cmd.get_iScsiName(),
> cmd.getStorageHost(), cmd.getStoragePort(),*
>
> *                        vol.getPath(), null, primaryStore.getPoolType());
> *
>
> *            }*
>
>             KVMStoragePool primary = storagePoolMgr.getStoragePool(primaryStore.getPoolType(),
> primaryStore.getUuid());
>
>             KVMPhysicalDisk phyDisk =
> primary.getPhysicalDisk(vol.getPath());
>
>             attachOrDetachDisk(conn, true, vmName, phyDisk,
> disk.getDiskSeq().intValue());
>
>             return new AttachAnswer(disk);
>
>         } catch (LibvirtException e) {
>
>             s_logger.debug("Failed to attach volume: " + vol.getPath() + ",
> due to " + e.toString());
>
>             return new AttachAnswer(e.toString());
>
>         } catch (InternalErrorException e) {
>
>             s_logger.debug("Failed to attach volume: " + vol.getPath() + ",
> due to " + e.toString());
>
>             return new AttachAnswer(e.toString());
>
>         }
>
>     }
>
>
>     @Override
>
>     public Answer dettachVolume(DettachCommand cmd) {
>
>         DiskTO disk = cmd.getDisk();
>
>         VolumeObjectTO vol = (VolumeObjectTO) disk.getData();
>
>         PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO)
> vol.getDataStore();
>
>         String vmName = cmd.getVmName();
>
>         try {
>
>             Connect conn = LibvirtConnection.getConnectionByVmName(vmName);
>
>             KVMStoragePool primary = storagePoolMgr.getStoragePool(primaryStore.getPoolType(),
> primaryStore.getUuid());
>
>             KVMPhysicalDisk phyDisk =
> primary.getPhysicalDisk(vol.getPath());
>
>             attachOrDetachDisk(conn, false, vmName, phyDisk,
> disk.getDiskSeq().intValue());
>
> *            if (cmd.isManaged()) {*
>
> *                storagePoolMgr.deleteStoragePool(primaryStore.getPoolType(),
> cmd.get_iScsiName());*
>
> *            }*
>
>             return new DettachAnswer(disk);
>
>         } catch (LibvirtException e) {
>
>             s_logger.debug("Failed to attach volume: " + vol.getPath() + ",
> due to " + e.toString());
>
>             return new DettachAnswer(e.toString());
>
>         } catch (InternalErrorException e) {
>
>             s_logger.debug("Failed to attach volume: " + vol.getPath() + ",
> due to " + e.toString());
>
>             return new DettachAnswer(e.toString());
>
>         }
>
>     }
>
>
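> For what it's worth, behind that createStoragePool call in the managed case,
> my adaptor would essentially be running the usual Open iSCSI sequence,
> something along these lines (sketch only; the portal and IQN are made-up
> examples):
>
> iscsiadm -m discovery -t sendtargets -p 192.168.233.50:3260
> iscsiadm -m node -T iqn.2010-01.com.solidfire:example-volume -p 192.168.233.50:3260 --login
> # the LUN then shows up as a block device under /dev/disk/by-path/
>
> and the deleteStoragePool call on detach would do the corresponding --logout.
>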
> On Wed, Sep 18, 2013 at 1:07 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Hey Marcus,
>>
>> What do you think about something relatively simple like this (below)? It
>> parallels the XenServer and VMware code nicely.
>>
>> The idea is to only deal with the AttachVolumeCommand.
>>
>> If we are attaching a volume AND the underlying storage is so-called
>> managed storage, at this point I invoke createStoragePool to create my
>> iScsiAdmStoragePool object, establish a connection with the LUN, and
>> prepare a KVMPhysicalDisk object, which will be requested a bit later
>> during the actual attach.
>>
>> If we are detaching a volume AND the underlying storage is managed, the
>> KVMStoragePool already exists, so we don't have to do anything special
>> until after the volume is detached. At this point, we delete the storage
>> pool (remove the iSCSI connection to the LUN and remove the reference to
>> the iScsiAdmStoragePool from my adaptor).
>>
>>     private AttachVolumeAnswer execute(AttachVolumeCommand cmd) {
>>
>>         try {
>>
>>             Connect conn =
>> LibvirtConnection.getConnectionByVmName(cmd.getVmName());
>>
>> *            if (cmd.getAttach() && cmd.isManaged()) {*
>>
>> *                _storagePoolMgr.createStoragePool(cmd.get_iScsiName(),
>> cmd.getStorageHost(), cmd.getStoragePort(),*
>>
>> *                        cmd.getVolumePath(), null, cmd.getPooltype());*
>>
>> *            }*
>>
>>             KVMStoragePool primary = _storagePoolMgr.getStoragePool(
>>
>>                     cmd.getPooltype(),
>>
>>                     cmd.getPoolUuid());
>>
>>             KVMPhysicalDisk disk =
>> primary.getPhysicalDisk(cmd.getVolumePath());
>>
>>             attachOrDetachDisk(conn, cmd.getAttach(), cmd.getVmName(),
>> disk,
>>
>>                     cmd.getDeviceId().intValue(), cmd.getBytesReadRate(),
>> cmd.getBytesWriteRate(), cmd.getIopsReadRate(), cmd.getIopsWriteRate());
>>
>> *            if (!cmd.getAttach() && cmd.isManaged()) {*
>>
>> *                _storagePoolMgr.deleteStoragePool(cmd.getPooltype(),
>> cmd.get_iScsiName());*
>>
>> *            }*
>>
>>         } catch (LibvirtException e) {
>>
>>             return new AttachVolumeAnswer(cmd, e.toString());
>>
>>         } catch (InternalErrorException e) {
>>
>>             return new AttachVolumeAnswer(cmd, e.toString());
>>
>>         }
>>
>>
>>         return new AttachVolumeAnswer(cmd, cmd.getDeviceId(),
>> cmd.getVolumePath());
>>
>>     }
>>
>>
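>> Just to make the end state concrete: once the LUN is connected, the disk
>> definition that attachOrDetachDisk hands to libvirt would simply reference
>> the raw block device, something like this (a sketch; the by-path name and
>> target dev are examples):
>>
>> <disk type='block' device='disk'>
>>   <driver name='qemu' type='raw' cache='none'/>
>>   <source dev='/dev/disk/by-path/ip-192.168.233.50:3260-iscsi-iqn.2010-01.com.solidfire:example-volume-lun-0'/>
>>   <target dev='vdb' bus='virtio'/>
>> </disk>
>>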
>> On Wed, Sep 18, 2013 at 11:27 AM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> Since a KVMStoragePool returns a capacity, used, and available number of
>>> bytes, I will probably need to look into having this information ignored if
>>> the storage_pool in question is "managed," since - in my case - those
>>> numbers wouldn't really mean anything.
>>>
>>>
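>>> (For context, a quick way to see what CloudStack currently reports for a
>>> pool - just a sketch, using cloudmonkey - is something like:
>>>
>>> cloudmonkey list storagepools filter=name,disksizetotal,disksizeallocated
>>>
>>> For a SAN-backed "managed" pool I'd want those fields to reflect whatever
>>> the plug-in told CloudStack about the SAN rather than anything computed on
>>> the host.)
>>>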
>>> On Wed, Sep 18, 2013 at 10:53 AM, Mike Tutkowski <
>>> mike.tutkowski@solidfire.com> wrote:
>>>
>>>> Sure, sounds good.
>>>>
>>>> Right now there are only two storage plug-ins: Edison's default plug-in
>>>> and the SolidFire plug-in.
>>>>
>>>> As an example, when createAsync is called in the plug-in, mine creates
>>>> a new volume (LUN) on the SAN with a capacity and Min, Max, and Burst
>>>> IOPS values. Edison's sends a command to the hypervisor to take a chunk
>>>> out of preallocated storage for a new volume (like creating a new VDI in
>>>> an existing SR).
>>>>
>>>>
>>>> On Wed, Sep 18, 2013 at 10:49 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>>
>>>>> That wasn't my question, but I feel we're getting off in the weeds and
>>>>> I can just look at the storage framework to see how it works and what
>>>>> options it supports.
>>>>>
>>>>> On Wed, Sep 18, 2013 at 10:44 AM, Mike Tutkowski
>>>>> <mi...@solidfire.com> wrote:
>>>>> > For the time being, I am not aware of any other storage vendor with
>>>>> truly
>>>>> > guaranteed QoS.
>>>>> >
>>>>> > Most implement QoS in a relative sense (like thread priorities).
>>>>> >
>>>>> >
>>>>> > On Wed, Sep 18, 2013 at 7:57 AM, Marcus Sorensen <
>>>>> shadowsor@gmail.com>wrote:
>>>>> >
>>>>> >> Yeah, that's why I thought it was specific to your implementation.
>>>>> Perhaps
>>>>> >> that's true, then?
>>>>> >> On Sep 18, 2013 12:04 AM, "Mike Tutkowski" <
>>>>> mike.tutkowski@solidfire.com>
>>>>> >> wrote:
>>>>> >>
>>>>> >> > I totally get where you're coming from with the tiered-pool
>>>>> approach,
>>>>> >> > though.
>>>>> >> >
>>>>> >> > Prior to SolidFire, I worked at HP and the product I worked on
>>>>> allowed a
>>>>> >> > single, clustered SAN to host multiple pools of storage. One pool
>>>>> might
>>>>> >> be
>>>>> >> > made up of all-SSD storage nodes while another pool might be made
>>>>> up of
>>>>> >> > slower HDDs.
>>>>> >> >
>>>>> >> > That kind of tiering is not what SolidFire QoS is about, though,
>>>>> as that
>>>>> >> > kind of tiering does not guarantee QoS.
>>>>> >> >
>>>>> >> > In the SolidFire SAN, QoS was designed in from the beginning and
>>>>> is
>>>>> >> > extremely granular. Each volume has its own performance and
>>>>> capacity. You
>>>>> >> > do not have to worry about Noisy Neighbors.
>>>>> >> >
>>>>> >> > The idea is to encourage businesses to trust the cloud with their
>>>>> most
>>>>> >> > critical business applications at a price point on par with
>>>>> traditional
>>>>> >> > SANs.
>>>>> >> >
>>>>> >> >
>>>>> >> > On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski <
>>>>> >> > mike.tutkowski@solidfire.com> wrote:
>>>>> >> >
>>>>> >> > > Ah, I think I see the miscommunication.
>>>>> >> > >
>>>>> >> > > I should have gone into a bit more detail about the SolidFire
>>>>> SAN.
>>>>> >> > >
>>>>> >> > > It is built from the ground up to support QoS on a LUN-by-LUN
>>>>> basis.
>>>>> >> > Every
>>>>> >> > > LUN is assigned a Min, Max, and Burst number of IOPS.
>>>>> >> > >
>>>>> >> > > The Min IOPS are a guaranteed number (as long as the SAN itself
>>>>> is not
>>>>> >> > > over provisioned). Capacity and IOPS are provisioned
>>>>> independently.
>>>>> >> > > Multiple volumes and multiple tenants using the same SAN do not
>>>>> suffer
>>>>> >> > from
>>>>> >> > > the Noisy Neighbor effect.
>>>>> >> > >
>>>>> >> > > When you create a Disk Offering in CS that is storage tagged to
>>>>> use
>>>>> >> > > SolidFire primary storage, you specify a Min, Max, and Burst
>>>>> number of
>>>>> >> > IOPS
>>>>> >> > > to provision from the SAN for volumes created from that Disk
>>>>> Offering.
>>>>> >> > >
>>>>> >> > > There is no notion of RAID groups that you see in more
>>>>> traditional
>>>>> >> SANs.
>>>>> >> > > The SAN is built from clusters of storage nodes and data is
>>>>> replicated
>>>>> >> > > amongst all SSDs in all storage nodes (this is an SSD-only SAN)
>>>>> in the
>>>>> >> > > cluster to avoid hot spots and protect the data should drives
>>>>> and/or
>>>>> >> > > nodes fail. You then scale the SAN by adding new storage nodes.
>>>>> >> > >
>>>>> >> > > Data is compressed and de-duplicated inline across the cluster
>>>>> and all
>>>>> >> > > volumes are thinly provisioned.
>>>>> >> > >
>>>>> >> > >
>>>>> >> > > On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <
>>>>> shadowsor@gmail.com
>>>>> >> > >wrote:
>>>>> >> > >
>>>>> >> > >> I'm surprised there's no mention of pool on the SAN in your
>>>>> >> description
>>>>> >> > of
>>>>> >> > >> the framework. I had assumed this was specific to your
>>>>> implementation,
>>>>> >> > >> because normally SANs host multiple disk pools, maybe multiple
>>>>> RAID
>>>>> >> 50s
>>>>> >> > >> and
>>>>> >> > >> 10s, or however the SAN admin wants to split it up. Maybe a
>>>>> pool
>>>>> >> > intended
>>>>> >> > >> for root disks and a separate one for data disks. Or one pool
>>>>> for
>>>>> >> > >> cloudstack and one dedicated to some other internal db
>>>>> application.
>>>>> >> But
>>>>> >> > it
>>>>> >> > >> sounds as though there's no place to specify which disks or
>>>>> pool on
>>>>> >> the
>>>>> >> > >> SAN
>>>>> >> > >> to use.
>>>>> >> > >>
>>>>> >> > >> We implemented our own internal storage SAN plugin based on
>>>>> 4.1. We
>>>>> >> used
>>>>> >> > >> the 'path' attribute of the primary storage pool object to
>>>>> specify
>>>>> >> which
>>>>> >> > >> pool name on the back end SAN to use, so we could create
>>>>> all-ssd pools
>>>>> >> > and
>>>>> >> > >> slower spindle pools, then differentiate between them based on
>>>>> storage
>>>>> >> > >> tags. Normally the path attribute would be the mount point for
>>>>> NFS,
>>>>> >> but
>>>>> >> > >> it's
>>>>> >> > >> just a string. So when registering ours we enter SAN DNS host
>>>>> name,
>>>>> >> the
>>>>> >> > >> SAN's REST API port, and the pool name. Then luns created from
>>>>> that
>>>>> >> > >> primary
>>>>> >> > >> storage come from the matching disk pool on the SAN. We can
>>>>> create and
>>>>> >> > >> register multiple pools of different types and purposes on the
>>>>> same
>>>>> >> SAN.
>>>>> >> > >> We
>>>>> >> > >> haven't yet gotten to porting it to the 4.2 framework, so it
>>>>> will be
>>>>> >> > >> interesting to see what we can come up with to make it work
>>>>> similarly.
>>>>> >> > >>  On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <
>>>>> >> > mike.tutkowski@solidfire.com
>>>>> >> > >> >
>>>>> >> > >> wrote:
>>>>> >> > >>
>>>>> >> > >> > What you're saying here is definitely something we should
>>>>> talk
>>>>> >> about.
>>>>> >> > >> >
>>>>> >> > >> > Hopefully my previous e-mail has clarified how this works a
>>>>> bit.
>>>>> >> > >> >
>>>>> >> > >> > It mainly comes down to this:
>>>>> >> > >> >
>>>>> >> > >> > For the first time in CS history, primary storage is no
>>>>> longer
>>>>> >> > required
>>>>> >> > >> to
>>>>> >> > >> > be preallocated by the admin and then handed to CS. CS
>>>>> volumes don't
>>>>> >> > >> have
>>>>> >> > >> > to share a preallocated volume anymore.
>>>>> >> > >> >
>>>>> >> > >> > As of 4.2, primary storage can be based on a SAN (or some
>>>>> other
>>>>> >> > storage
>>>>> >> > >> > device). You can tell CS how many bytes and IOPS to use from
>>>>> this
>>>>> >> > >> storage
>>>>> >> > >> > device and CS invokes the appropriate plug-in to carve out
>>>>> LUNs
>>>>> >> > >> > dynamically.
>>>>> >> > >> >
>>>>> >> > >> > Each LUN is home to one and only one data disk. Data disks -
>>>>> in this
>>>>> >> > >> model
>>>>> >> > >> > - never share a LUN.
>>>>> >> > >> >
>>>>> >> > >> > The main use case for this is so a CS volume can deliver
>>>>> guaranteed
>>>>> >> > >> IOPS if
>>>>> >> > >> > the storage device (ex. SolidFire SAN) delivers guaranteed
>>>>> IOPS on a
>>>>> >> > >> > LUN-by-LUN basis.
>>>>> >> > >> >
>>>>> >> > >> >
>>>>> >> > >> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <
>>>>> >> > shadowsor@gmail.com
>>>>> >> > >> > >wrote:
>>>>> >> > >> >
>>>>> >> > >> > > I guess whether or not a solidfire device is capable of
>>>>> hosting
>>>>> >> > >> > > multiple disk pools is irrelevant, we'd hope that we could
>>>>> get the
>>>>> >> > >> > > stats (maybe 30TB availabie, and 15TB allocated in LUNs).
>>>>> But if
>>>>> >> > these
>>>>> >> > >> > > stats aren't collected, I can't as an admin define
>>>>> multiple pools
>>>>> >> > and
>>>>> >> > >> > > expect cloudstack to allocate evenly from them or fill one
>>>>> up and
>>>>> >> > move
>>>>> >> > >> > > to the next, because it doesn't know how big it is.
>>>>> >> > >> > >
>>>>> >> > >> > > Ultimately this discussion has nothing to do with the KVM
>>>>> stuff
>>>>> >> > >> > > itself, just a tangent, but something to think about.
>>>>> >> > >> > >
>>>>> >> > >> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <
>>>>> >> > >> shadowsor@gmail.com>
>>>>> >> > >> > > wrote:
>>>>> >> > >> > > > Ok, on most storage pools it shows how many GB free/used
>>>>> when
>>>>> >> > >> listing
>>>>> >> > >> > > > the pool both via API and in the UI. I'm guessing those
>>>>> are
>>>>> >> empty
>>>>> >> > >> then
>>>>> >> > >> > > > for the solid fire storage, but it seems like the user
>>>>> should
>>>>> >> have
>>>>> >> > >> to
>>>>> >> > >> > > > define some sort of pool that the luns get carved out
>>>>> of, and
>>>>> >> you
>>>>> >> > >> > > > should be able to get the stats for that, right? Or is a
>>>>> solid
>>>>> >> > fire
>>>>> >> > >> > > > appliance only one pool per appliance? This isn't about
>>>>> billing,
>>>>> >> > but
>>>>> >> > >> > > > just so cloudstack itself knows whether or not there is
>>>>> space
>>>>> >> left
>>>>> >> > >> on
>>>>> >> > >> > > > the storage device, so cloudstack can go on allocating
>>>>> from a
>>>>> >> > >> > > > different primary storage as this one fills up. There
>>>>> are also
>>>>> >> > >> > > > notifications and things. It seems like there should be
>>>>> a call
>>>>> >> you
>>>>> >> > >> can
>>>>> >> > >> > > > handle for this, maybe Edison knows.
>>>>> >> > >> > > >
>>>>> >> > >> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <
>>>>> >> > >> shadowsor@gmail.com>
>>>>> >> > >> > > wrote:
>>>>> >> > >> > > >> You respond to more than attach and detach, right?
>>>>> Don't you
>>>>> >> > create
>>>>> >> > >> > > luns as
>>>>> >> > >> > > >> well? Or are you just referring to the hypervisor stuff?
>>>>> >> > >> > > >>
>>>>> >> > >> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
>>>>> >> > >> > mike.tutkowski@solidfire.com
>>>>> >> > >> > > >
>>>>> >> > >> > > >> wrote:
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> Hi Marcus,
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> I never need to respond to a CreateStoragePool call
>>>>> for either
>>>>> >> > >> > > XenServer
>>>>> >> > >> > > >>> or
>>>>> >> > >> > > >>> VMware.
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> What happens is I respond only to the Attach- and
>>>>> >> Detach-volume
>>>>> >> > >> > > commands.
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> Let's say an attach comes in:
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> In this case, I check to see if the storage is
>>>>> "managed."
>>>>> >> > Talking
>>>>> >> > >> > > >>> XenServer
>>>>> >> > >> > > >>> here, if it is, I log in to the LUN that is the disk
>>>>> we want
>>>>> >> to
>>>>> >> > >> > attach.
>>>>> >> > >> > > >>> After, if this is the first time attaching this disk,
>>>>> I create
>>>>> >> > an
>>>>> >> > >> SR
>>>>> >> > >> > > and a
>>>>> >> > >> > > >>> VDI within the SR. If it is not the first time
>>>>> attaching this
>>>>> >> > >> disk,
>>>>> >> > >> > the
>>>>> >> > >> > > >>> LUN
>>>>> >> > >> > > >>> already has the SR and VDI on it.
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> Once this is done, I let the normal "attach" logic run
>>>>> because
>>>>> >> > >> this
>>>>> >> > >> > > logic
>>>>> >> > >> > > >>> expected an SR and a VDI and now it has it.
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> It's the same thing for VMware: Just substitute
>>>>> datastore for
>>>>> >> SR
>>>>> >> > >> and
>>>>> >> > >> > > VMDK
>>>>> >> > >> > > >>> for VDI.
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> Does that make sense?
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> Thanks!
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
>>>>> >> > >> > > >>> <sh...@gmail.com>wrote:
>>>>> >> > >> > > >>>
>>>>> >> > >> > > >>> > What do you do with Xen? I imagine the user enter
>>>>> the SAN
>>>>> >> > >> details
>>>>> >> > >> > > when
>>>>> >> > >> > > >>> > registering the pool? A the pool details are
>>>>> basically just
>>>>> >> > >> > > instructions
>>>>> >> > >> > > >>> > on
>>>>> >> > >> > > >>> > how to log into a target, correct?
>>>>> >> > >> > > >>> >
>>>>> >> > >> > > >>> > You can choose to log in a KVM host to the target
>>>>> during
>>>>> >> > >> > > >>> > createStoragePool
>>>>> >> > >> > > >>> > and save the pool in a map, or just save the pool
>>>>> info in a
>>>>> >> > map
>>>>> >> > >> for
>>>>> >> > >> > > >>> > future
>>>>> >> > >> > > >>> > reference by uuid, for when you do need to log in.
>>>>> The
>>>>> >> > >> > > createStoragePool
>>>>> >> > >> > > >>> > then just becomes a way to save the pool info to the
>>>>> agent.
>>>>> >> > >> > > Personally,
>>>>> >> > >> > > >>> > I'd
>>>>> >> > >> > > >>> > log in on the pool create and look/scan for specific
>>>>> luns
>>>>> >> when
>>>>> >> > >> > > they're
>>>>> >> > >> > > >>> > needed, but I haven't thought it through thoroughly.
>>>>> I just
>>>>> >> > say
>>>>> >> > >> > that
>>>>> >> > >> > > >>> > mainly
>>>>> >> > >> > > >>> > because login only happens once, the first time the
>>>>> pool is
>>>>> >> > >> used,
>>>>> >> > >> > and
>>>>> >> > >> > > >>> > every
>>>>> >> > >> > > >>> > other storage command is about discovering new luns
>>>>> or maybe
>>>>> >> > >> > > >>> > deleting/disconnecting luns no longer needed. On the
>>>>> other
>>>>> >> > hand,
>>>>> >> > >> > you
>>>>> >> > >> > > >>> > could
>>>>> >> > >> > > >>> > do all of the above: log in on pool create, then
>>>>> also check
>>>>> >> if
>>>>> >> > >> > you're
>>>>> >> > >> > > >>> > logged in on other commands and log in if you've lost
>>>>> >> > >> connection.
>>>>> >> > >> > > >>> >
>>>>> >> > >> > > >>> > With Xen, what does your registered pool   show in
>>>>> the UI
>>>>> >> for
>>>>> >> > >> > > avail/used
>>>>> >> > >> > > >>> > capacity, and how does it get that info? I assume
>>>>> there is
>>>>> >> > some
>>>>> >> > >> > sort
>>>>> >> > >> > > of
>>>>> >> > >> > > >>> > disk pool that the luns are carved from, and that
>>>>> your
>>>>> >> plugin
>>>>> >> > is
>>>>> >> > >> > > called
>>>>> >> > >> > > >>> > to
>>>>> >> > >> > > >>> > talk to the SAN and expose to the user how much of
>>>>> that pool
>>>>> >> > has
>>>>> >> > >> > been
>>>>> >> > >> > > >>> > allocated. Knowing how you already solves these
>>>>> problems
>>>>> >> with
>>>>> >> > >> Xen
>>>>> >> > >> > > will
>>>>> >> > >> > > >>> > help
>>>>> >> > >> > > >>> > figure out what to do with KVM.
>>>>> >> > >> > > >>> >
>>>>> >> > >> > > >>> > If this is the case, I think the plugin can continue
>>>>> to
>>>>> >> handle
>>>>> >> > >> it
>>>>> >> > >> > > rather
>>>>> >> > >> > > >>> > than getting details from the agent. I'm not sure if
>>>>> that
>>>>> >> > means
>>>>> >> > >> > nulls
>>>>> >> > >> > > >>> > are
>>>>> >> > >> > > >>> > OK for these on the agent side or what, I need to
>>>>> look at
>>>>> >> the
>>>>> >> > >> > storage
>>>>> >> > >> > > >>> > plugin arch more closely.
>>>>> >> > >> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
>>>>> >> > >> > > mike.tutkowski@solidfire.com>
>>>>> >> > >> > > >>> > wrote:
>>>>> >> > >> > > >>> >
>>>>> >> > >> > > >>> > > Hey Marcus,
>>>>> >> > >> > > >>> > >
>>>>> >> > >> > > >>> > > I'm reviewing your e-mails as I implement the
>>>>> necessary
>>>>> >> > >> methods
>>>>> >> > >> > in
>>>>> >> > >> > > new
>>>>> >> > >> > > >>> > > classes.
>>>>> >> > >> > > >>> > >
>>>>> >> > >> > > >>> > > "So, referencing StorageAdaptor.java,
>>>>> createStoragePool
>>>>> >> > >> accepts
>>>>> >> > >> > > all of
>>>>> >> > >> > > >>> > > the pool data (host, port, name, path) which would
>>>>> be used
>>>>> >> > to
>>>>> >> > >> log
>>>>> >> > >> > > the
>>>>> >> > >> > > >>> > > host into the initiator."
>>>>> >> > >> > > >>> > >
>>>>> >> > >> > > >>> > > Can you tell me, in my case, since a storage pool
>>>>> (primary
>>>>> >> > >> > > storage) is
>>>>> >> > >> > > >>> > > actually the SAN, I wouldn't really be logging into
>>>>> >> anything
>>>>> >> > >> at
>>>>> >> > >> > > this
>>>>> >> > >> > > >>> > point,
>>>>> >> > >> > > >>> > > correct?
>>>>> >> > >> > > >>> > >
>>>>> >> > >> > > >>> > > Also, what kind of capacity, available, and used
>>>>> bytes
>>>>> >> make
>>>>> >> > >> sense
>>>>> >> > >> > > to
>>>>> >> > >> > > >>> > report
>>>>> >> > >> > > >>> > > for KVMStoragePool (since KVMStoragePool
>>>>> represents the
>>>>> >> SAN
>>>>> >> > >> in my
>>>>> >> > >> > > case
>>>>> >> > >> > > >>> > and
>>>>> >> > >> > > >>> > > not an individual LUN)?
>>>>> >> > >> > > >>> > >
>>>>> >> > >> > > >>> > > Thanks!
>>>>> >> > >> > > >>> > >
>>>>> >> > >> > > >>> > >
>>>>> >> > >> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
>>>>> >> > >> > > shadowsor@gmail.com
>>>>> >> > >> > > >>> > > >wrote:
>>>>> >> > >> > > >>> > >
>>>>> >> > >> > > >>> > > > Ok, KVM will be close to that, of course,
>>>>> because only
>>>>> >> the
>>>>> >> > >> > > >>> > > > hypervisor
>>>>> >> > >> > > >>> > > > classes differ, the rest is all mgmt server.
>>>>> Creating a
>>>>> >> > >> volume
>>>>> >> > >> > is
>>>>> >> > >> > > >>> > > > just
>>>>> >> > >> > > >>> > > > a db entry until it's deployed for the first
>>>>> time.
>>>>> >> > >> > > >>> > > > AttachVolumeCommand
>>>>> >> > >> > > >>> > > > on the agent side (LibvirtStorageAdaptor.java is
>>>>> >> analogous
>>>>> >> > >> to
>>>>> >> > >> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm
>>>>> commands
>>>>> >> > (via
>>>>> >> > >> a
>>>>> >> > >> > KVM
>>>>> >> > >> > > >>> > > > StorageAdaptor) to log in the host to the target
>>>>> and
>>>>> >> then
>>>>> >> > >> you
>>>>> >> > >> > > have a
>>>>> >> > >> > > >>> > > > block device.  Maybe libvirt will do that for
>>>>> you, but
>>>>> >> my
>>>>> >> > >> quick
>>>>> >> > >> > > read
>>>>> >> > >> > > >>> > > > made it sound like the iscsi libvirt pool type is
>>>>> >> > actually a
>>>>> >> > >> > > pool,
>>>>> >> > >> > > >>> > > > not
>>>>> >> > >> > > >>> > > > a lun or volume, so you'll need to figure out if
>>>>> that
>>>>> >> > works
>>>>> >> > >> or
>>>>> >> > >> > if
>>>>> >> > >> > > >>> > > > you'll have to use iscsiadm commands.
>>>>> >> > >> > > >>> > > >
>>>>> >> > >> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor
>>>>> >> (because
>>>>> >> > >> > Libvirt
>>>>> >> > >> > > >>> > > > doesn't really manage your pool the way you
>>>>> want),
>>>>> >> you're
>>>>> >> > >> going
>>>>> >> > >> > > to
>>>>> >> > >> > > >>> > > > have to create a version of KVMStoragePool class
>>>>> and a
>>>>> >> > >> > > >>> > > > StorageAdaptor
>>>>> >> > >> > > >>> > > > class (see LibvirtStoragePool.java and
>>>>> >> > >> > > LibvirtStorageAdaptor.java),
>>>>> >> > >> > > >>> > > > implementing all of the methods, then in
>>>>> >> > >> KVMStorageManager.java
>>>>> >> > >> > > >>> > > > there's a "_storageMapper" map. This is used to
>>>>> select
>>>>> >> the
>>>>> >> > >> > > correct
>>>>> >> > >> > > >>> > > > adaptor, you can see in this file that every
>>>>> call first
>>>>> >> > >> pulls
>>>>> >> > >> > the
>>>>> >> > >> > > >>> > > > correct adaptor out of this map via
>>>>> getStorageAdaptor.
>>>>> >> So
>>>>> >> > >> you
>>>>> >> > >> > can
>>>>> >> > >> > > >>> > > > see
>>>>> >> > >> > > >>> > > > a comment in this file that says "add other
>>>>> storage
>>>>> >> > adaptors
>>>>> >> > >> > > here",
>>>>> >> > >> > > >>> > > > where it puts to this map, this is where you'd
>>>>> register
>>>>> >> > your
>>>>> >> > >> > > >>> > > > adaptor.
>>>>> >> > >> > > >>> > > >
>>>>> >> > >> > > >>> > > > So, referencing StorageAdaptor.java,
>>>>> createStoragePool
>>>>> >> > >> accepts
>>>>> >> > >> > > all
>>>>> >> > >> > > >>> > > > of
>>>>> >> > >> > > >>> > > > the pool data (host, port, name, path) which
>>>>> would be
>>>>> >> used
>>>>> >> > >> to
>>>>> >> > >> > log
>>>>> >> > >> > > >>> > > > the
>>>>> >> > >> > > >>> > > > host into the initiator. I *believe* the method
>>>>> >> > >> getPhysicalDisk
>>>>> >> > >> > > will
>>>>> >> > >> > > >>> > > > need to do the work of attaching the lun.
>>>>> >> > >>  AttachVolumeCommand
>>>>> >> > >> > > calls
>>>>> >> > >> > > >>> > > > this and then creates the XML diskdef and
>>>>> attaches it to
>>>>> >> > the
>>>>> >> > >> > VM.
>>>>> >> > >> > > >>> > > > Now,
>>>>> >> > >> > > >>> > > > one thing you need to know is that
>>>>> createStoragePool is
>>>>> >> > >> called
>>>>> >> > >> > > >>> > > > often,
>>>>> >> > >> > > >>> > > > sometimes just to make sure the pool is there.
>>>>> You may
>>>>> >> > want
>>>>> >> > >> to
>>>>> >> > >> > > >>> > > > create
>>>>> >> > >> > > >>> > > > a map in your adaptor class and keep track of
>>>>> pools that
>>>>> >> > >> have
>>>>> >> > >> > > been
>>>>> >> > >> > > >>> > > > created, LibvirtStorageAdaptor doesn't have to
>>>>> do this
>>>>> >> > >> because
>>>>> >> > >> > it
>>>>> >> > >> > > >>> > > > asks
>>>>> >> > >> > > >>> > > > libvirt about which storage pools exist. There
>>>>> are also
>>>>> >> > >> calls
>>>>> >> > >> > to
>>>>> >> > >> > > >>> > > > refresh the pool stats, and all of the other
>>>>> calls can
>>>>> >> be
>>>>> >> > >> seen
>>>>> >> > >> > in
>>>>> >> > >> > > >>> > > > the
>>>>> >> > >> > > >>> > > > StorageAdaptor as well. There's a createPhysical
>>>>> disk,
>>>>> >> > >> clone,
>>>>> >> > >> > > etc,
>>>>> >> > >> > > >>> > > > but
>>>>> >> > >> > > >>> > > > it's probably a hold-over from 4.1, as I have
>>>>> the vague
>>>>> >> > idea
>>>>> >> > >> > that
>>>>> >> > >> > > >>> > > > volumes are created on the mgmt server via the
>>>>> plugin
>>>>> >> now,
>>>>> >> > >> so
>>>>> >> > >> > > >>> > > > whatever
>>>>> >> > >> > > >>> > > > doesn't apply can just be stubbed out (or
>>>>> optionally
>>>>> >> > >> > > >>> > > > extended/reimplemented here, if you don't mind
>>>>> the hosts
>>>>> >> > >> > talking
>>>>> >> > >> > > to
>>>>> >> > >> > > >>> > > > the san api).
>>>>> >> > >> > > >>> > > >
>>>>> >> > >> > > >>> > > > There is a difference between attaching new
>>>>> volumes and
>>>>> >> > >> > > launching a
>>>>> >> > >> > > >>> > > > VM
>>>>> >> > >> > > >>> > > > with existing volumes.  In the latter case, the
>>>>> VM
>>>>> >> > >> definition
>>>>> >> > >> > > that
>>>>> >> > >> > > >>> > > > was
>>>>> >> > >> > > >>> > > > passed to the KVM agent includes the disks,
>>>>> >> > (StartCommand).
>>>>> >> > >> > > >>> > > >
>>>>> >> > >> > > >>> > > > I'd be interested in how your pool is defined
>>>>> for Xen, I
>>>>> >> > >> > imagine
>>>>> >> > >> > > it
>>>>> >> > >> > > >>> > > > would need to be kept the same. Is it just a
>>>>> definition
>>>>> >> to
>>>>> >> > >> the
>>>>> >> > >> > > SAN
>>>>> >> > >> > > >>> > > > (ip address or some such, port number) and
>>>>> perhaps a
>>>>> >> > volume
>>>>> >> > >> > pool
>>>>> >> > >> > > >>> > > > name?
>>>>> >> > >> > > >>> > > >
>>>>> >> > >> > > >>> > > > > If there is a way for me to update the ACL
>>>>> list on the
>>>>> >> > >> SAN to
>>>>> >> > >> > > have
>>>>> >> > >> > > >>> > > only a
>>>>> >> > >> > > >>> > > > > single KVM host have access to the volume,
>>>>> that would
>>>>> >> be
>>>>> >> > >> > ideal.
>>>>> >> > >> > > >>> > > >
>>>>> >> > >> > > >>> > > > That depends on your SAN API.  I was under the
>>>>> >> impression
>>>>> >> > >> that
>>>>> >> > >> > > the
>>>>> >> > >> > > >>> > > > storage plugin framework allowed for acls, or
>>>>> for you to
>>>>> >> > do
>>>>> >> > >> > > whatever
>>>>> >> > >> > > >>> > > > you want for create/attach/delete/snapshot, etc.
>>>>> You'd
>>>>> >> > just
>>>>> >> > >> > call
>>>>> >> > >> > > >>> > > > your
>>>>> >> > >> > > >>> > > > SAN API with the host info for the ACLs prior to
>>>>> when
>>>>> >> the
>>>>> >> > >> disk
>>>>> >> > >> > is
>>>>> >> > >> > > >>> > > > attached (or the VM is started).  I'd have to
>>>>> look more
>>>>> >> at
>>>>> >> > >> the
>>>>> >> > >> > > >>> > > > framework to know the details, in 4.1 I would do
>>>>> this in
>>>>> >> > >> > > >>> > > > getPhysicalDisk just prior to connecting up the
>>>>> LUN.
>>>>> >> > >> > > >>> > > >
>>>>> >> > >> > > >>> > > >
>>>>> >> > >> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>>>> >> > >> > > >>> > > > <mi...@solidfire.com> wrote:
>>>>> >> > >> > > >>> > > > > OK, yeah, the ACL part will be interesting.
>>>>> That is a
>>>>> >> > bit
>>>>> >> > >> > > >>> > > > > different
>>>>> >> > >> > > >>> > > from
>>>>> >> > >> > > >>> > > > how
>>>>> >> > >> > > >>> > > > > it works with XenServer and VMware.
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > Just to give you an idea how it works in 4.2
>>>>> with
>>>>> >> > >> XenServer:
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > * The user creates a CS volume (this is just
>>>>> recorded
>>>>> >> in
>>>>> >> > >> the
>>>>> >> > >> > > >>> > > > cloud.volumes
>>>>> >> > >> > > >>> > > > > table).
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > * The user attaches the volume as a disk to a
>>>>> VM for
>>>>> >> the
>>>>> >> > >> > first
>>>>> >> > >> > > >>> > > > > time
>>>>> >> > >> > > >>> > (if
>>>>> >> > >> > > >>> > > > the
>>>>> >> > >> > > >>> > > > > storage allocator picks the SolidFire plug-in,
>>>>> the
>>>>> >> > storage
>>>>> >> > >> > > >>> > > > > framework
>>>>> >> > >> > > >>> > > > invokes
>>>>> >> > >> > > >>> > > > > a method on the plug-in that creates a volume
>>>>> on the
>>>>> >> > >> > SAN...info
>>>>> >> > >> > > >>> > > > > like
>>>>> >> > >> > > >>> > > the
>>>>> >> > >> > > >>> > > > IQN
>>>>> >> > >> > > >>> > > > > of the SAN volume is recorded in the DB).
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > * CitrixResourceBase's
>>>>> execute(AttachVolumeCommand) is
>>>>> >> > >> > > executed.
>>>>> >> > >> > > >>> > > > > It
>>>>> >> > >> > > >>> > > > > determines based on a flag passed in that the
>>>>> storage
>>>>> >> in
>>>>> >> > >> > > question
>>>>> >> > >> > > >>> > > > > is
>>>>> >> > >> > > >>> > > > > "CloudStack-managed" storage (as opposed to
>>>>> >> > "traditional"
>>>>> >> > >> > > >>> > preallocated
>>>>> >> > >> > > >>> > > > > storage). This tells it to discover the iSCSI
>>>>> target.
>>>>> >> > Once
>>>>> >> > >> > > >>> > > > > discovered
>>>>> >> > >> > > >>> > > it
>>>>> >> > >> > > >>> > > > > determines if the iSCSI target already
>>>>> contains a
>>>>> >> > storage
>>>>> >> > >> > > >>> > > > > repository
>>>>> >> > >> > > >>> > > (it
>>>>> >> > >> > > >>> > > > > would if this were a re-attach situation). If
>>>>> it does
>>>>> >> > >> contain
>>>>> >> > >> > > an
>>>>> >> > >> > > >>> > > > > SR
>>>>> >> > >> > > >>> > > > already,
>>>>> >> > >> > > >>> > > > > then there should already be one VDI, as well.
>>>>> If
>>>>> >> there
>>>>> >> > >> is no
>>>>> >> > >> > > SR,
>>>>> >> > >> > > >>> > > > > an
>>>>> >> > >> > > >>> > SR
>>>>> >> > >> > > >>> > > > is
>>>>> >> > >> > > >>> > > > > created and a single VDI is created within it
>>>>> (that
>>>>> >> > takes
>>>>> >> > >> up
>>>>> >> > >> > > about
>>>>> >> > >> > > >>> > > > > as
>>>>> >> > >> > > >>> > > > much
>>>>> >> > >> > > >>> > > > > space as was requested for the CloudStack
>>>>> volume).
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > * The normal attach-volume logic continues (it
>>>>> depends
>>>>> >> > on
>>>>> >> > >> the
>>>>> >> > >> > > >>> > existence
>>>>> >> > >> > > >>> > > > of
>>>>> >> > >> > > >>> > > > > an SR and a VDI).
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > The VMware case is essentially the same
>>>>> (mainly just
>>>>> >> > >> > substitute
>>>>> >> > >> > > >>> > > datastore
>>>>> >> > >> > > >>> > > > > for SR and VMDK for VDI).
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > In both cases, all hosts in the cluster have
>>>>> >> discovered
>>>>> >> > >> the
>>>>> >> > >> > > iSCSI
>>>>> >> > >> > > >>> > > target,
>>>>> >> > >> > > >>> > > > > but only the host that is currently running
>>>>> the VM
>>>>> >> that
>>>>> >> > is
>>>>> >> > >> > > using
>>>>> >> > >> > > >>> > > > > the
>>>>> >> > >> > > >>> > > VDI
>>>>> >> > >> > > >>> > > > (or
>>>>> >> > >> > > >>> > > > > VMDK) is actually using the disk.
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > Live Migration should be OK because the
>>>>> hypervisors
>>>>> >> > >> > communicate
>>>>> >> > >> > > >>> > > > > with
>>>>> >> > >> > > >>> > > > > whatever metadata they have on the SR (or
>>>>> datastore).
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > I see what you're saying with KVM, though.
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > In that case, the hosts are clustered only in
>>>>> >> > CloudStack's
>>>>> >> > >> > > eyes.
>>>>> >> > >> > > >>> > > > > CS
>>>>> >> > >> > > >>> > > > controls
>>>>> >> > >> > > >>> > > > > Live Migration. You don't really need a
>>>>> clustered
>>>>> >> > >> filesystem
>>>>> >> > >> > on
>>>>> >> > >> > > >>> > > > > the
>>>>> >> > >> > > >>> > > LUN.
>>>>> >> > >> > > >>> > > > The
>>>>> >> > >> > > >>> > > > > LUN could be handed over raw to the VM using
>>>>> it.
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > If there is a way for me to update the ACL
>>>>> list on the
>>>>> >> > >> SAN to
>>>>> >> > >> > > have
>>>>> >> > >> > > >>> > > only a
>>>>> >> > >> > > >>> > > > > single KVM host have access to the volume,
>>>>> that would
>>>>> >> be
>>>>> >> > >> > ideal.
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > Also, I agree I'll need to use iscsiadm to
>>>>> discover
>>>>> >> and
>>>>> >> > >> log
>>>>> >> > >> > in
>>>>> >> > >> > > to
>>>>> >> > >> > > >>> > > > > the
>>>>> >> > >> > > >>> > > > iSCSI
>>>>> >> > >> > > >>> > > > > target. I'll also need to take the resultant
>>>>> new
>>>>> >> device
>>>>> >> > >> and
>>>>> >> > >> > > pass
>>>>> >> > >> > > >>> > > > > it
>>>>> >> > >> > > >>> > > into
>>>>> >> > >> > > >>> > > > the
>>>>> >> > >> > > >>> > > > > VM.
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > Does this sound reasonable? Please call me out
>>>>> on
>>>>> >> > >> anything I
>>>>> >> > >> > > seem
>>>>> >> > >> > > >>> > > > incorrect
>>>>> >> > >> > > >>> > > > > about. :)
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > Thanks for all the thought on this, Marcus!
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > >
>>>>> >> > >> > > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus
>>>>> Sorensen <
>>>>> >> > >> > > >>> > shadowsor@gmail.com>
>>>>> >> > >> > > >>> > > > > wrote:
>>>>> >> > >> > > >>> > > > >>
>>>>> >> > >> > > >>> > > > >> Perfect. You'll have a domain def ( the VM),
>>>>> a disk
>>>>> >> > def,
>>>>> >> > >> and
>>>>> >> > >> > > the
>>>>> >> > >> > > >>> > > attach
>>>>> >> > >> > > >>> > > > >> the disk def to the vm. You may need to do
>>>>> your own
>>>>> >> > >> > > >>> > > > >> StorageAdaptor
>>>>> >> > >> > > >>> > and
>>>>> >> > >> > > >>> > > > run
>>>>> >> > >> > > >>> > > > >> iscsiadm commands to accomplish that,
>>>>> depending on
>>>>> >> how
>>>>> >> > >> the
>>>>> >> > >> > > >>> > > > >> libvirt
>>>>> >> > >> > > >>> > > iscsi
>>>>> >> > >> > > >>> > > > >> works. My impression is that a 1:1:1
>>>>> pool/lun/volume
>>>>> >> > >> isn't
>>>>> >> > >> > > how it
>>>>> >> > >> > > >>> > > works
>>>>> >> > >> > > >>> > > > on
>>>>> >> > >> > > >>> > > > >> xen at the momen., nor is it ideal.
>>>>> >> > >> > > >>> > > > >>
>>>>> >> > >> > > >>> > > > >> Your plugin will handle acls as far as which
>>>>> host can
>>>>> >> > see
>>>>> >> > >> > > which
>>>>> >> > >> > > >>> > > > >> luns
>>>>> >> > >> > > >>> > > as
>>>>> >> > >> > > >>> > > > >> well, I remember discussing that months ago,
>>>>> so that
>>>>> >> a
>>>>> >> > >> disk
>>>>> >> > >> > > won't
>>>>> >> > >> > > >>> > > > >> be
>>>>> >> > >> > > >>> > > > >> connected until the hypervisor has exclusive
>>>>> access,
>>>>> >> so
>>>>> >> > >> it
>>>>> >> > >> > > will
>>>>> >> > >> > > >>> > > > >> be
>>>>> >> > >> > > >>> > > safe
>>>>> >> > >> > > >>> > > > and
>>>>> >> > >> > > >>> > > > >> fence the disk from rogue nodes that
>>>>> cloudstack loses
>>>>> >> > >> > > >>> > > > >> connectivity
>>>>> >> > >> > > >>> > > > with. It
>>>>> >> > >> > > >>> > > > >> should revoke access to everything but the
>>>>> target
>>>>> >> > host...
>>>>> >> > >> > > Except
>>>>> >> > >> > > >>> > > > >> for
>>>>> >> > >> > > >>> > > > during
>>>>> >> > >> > > >>> > > > >> migration but we can discuss that later,
>>>>> there's a
>>>>> >> > >> migration
>>>>> >> > >> > > prep
>>>>> >> > >> > > >>> > > > process



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Actually, I may have put that code in a place that is no longer called.

Do you know why we have AttachVolumeCommand in LibvirtComputingResource and
AttachCommand/DettachCommand in KVMStorageProcessor?

They seem like they do the same thing.

If I recall, this change went in with 4.2 and it's the same story for
XenServer and VMware.

Maybe this location (in KVMStorageProcessor) makes more sense:

    @Override
    public Answer attachVolume(AttachCommand cmd) {
        DiskTO disk = cmd.getDisk();
        VolumeObjectTO vol = (VolumeObjectTO) disk.getData();
        PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO) vol.getDataStore();
        String vmName = cmd.getVmName();

        try {
            Connect conn = LibvirtConnection.getConnectionByVmName(vmName);

            // Managed storage: log in to the volume's iSCSI target and register it
            // as a storage pool before performing the attach.
            if (cmd.isManaged()) {
                storagePoolMgr.createStoragePool(cmd.get_iScsiName(), cmd.getStorageHost(), cmd.getStoragePort(),
                        vol.getPath(), null, primaryStore.getPoolType());
            }

            KVMStoragePool primary = storagePoolMgr.getStoragePool(primaryStore.getPoolType(), primaryStore.getUuid());
            KVMPhysicalDisk phyDisk = primary.getPhysicalDisk(vol.getPath());

            attachOrDetachDisk(conn, true, vmName, phyDisk, disk.getDiskSeq().intValue());

            return new AttachAnswer(disk);
        } catch (LibvirtException e) {
            s_logger.debug("Failed to attach volume: " + vol.getPath() + ", due to " + e.toString());

            return new AttachAnswer(e.toString());
        } catch (InternalErrorException e) {
            s_logger.debug("Failed to attach volume: " + vol.getPath() + ", due to " + e.toString());

            return new AttachAnswer(e.toString());
        }
    }

    @Override
    public Answer dettachVolume(DettachCommand cmd) {
        DiskTO disk = cmd.getDisk();
        VolumeObjectTO vol = (VolumeObjectTO) disk.getData();
        PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO) vol.getDataStore();
        String vmName = cmd.getVmName();

        try {
            Connect conn = LibvirtConnection.getConnectionByVmName(vmName);

            KVMStoragePool primary = storagePoolMgr.getStoragePool(primaryStore.getPoolType(), primaryStore.getUuid());
            KVMPhysicalDisk phyDisk = primary.getPhysicalDisk(vol.getPath());

            attachOrDetachDisk(conn, false, vmName, phyDisk, disk.getDiskSeq().intValue());

            // Managed storage: once the disk is detached, delete the storage pool,
            // which logs out of the iSCSI target.
            if (cmd.isManaged()) {
                storagePoolMgr.deleteStoragePool(primaryStore.getPoolType(), cmd.get_iScsiName());
            }

            return new DettachAnswer(disk);
        } catch (LibvirtException e) {
            s_logger.debug("Failed to detach volume: " + vol.getPath() + ", due to " + e.toString());

            return new DettachAnswer(e.toString());
        } catch (InternalErrorException e) {
            s_logger.debug("Failed to detach volume: " + vol.getPath() + ", due to " + e.toString());

            return new DettachAnswer(e.toString());
        }
    }
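
For illustration, here is a rough, self-contained sketch of what those
createStoragePool()/deleteStoragePool() calls would boil down to on the KVM
host for managed storage: an iscsiadm login that surfaces the LUN as a local
block device, and a matching logout on detach. The class and helper names
below are made up for this example, and it assumes a single-LUN target exposed
as LUN 0 under /dev/disk/by-path; it is not the actual storage adaptor code.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper, for illustration only -- not part of CloudStack.
public class IscsiAdmSketch {

    // e.g. host = "192.168.1.10", port = 3260, iqn = "iqn.2010-01.com.solidfire:abc.vol-7.27"
    public static String login(String host, int port, String iqn)
            throws IOException, InterruptedException {
        String portal = host + ":" + port;

        // Discover the target and log in to it with Open-iSCSI.
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

        // Open-iSCSI exposes the LUN under /dev/disk/by-path (LUN 0 assumed here).
        Path device = Paths.get("/dev/disk/by-path",
                "ip-" + portal + "-iscsi-" + iqn + "-lun-0");

        // The device node can take a moment to appear after login.
        for (int i = 0; i < 10 && !Files.exists(device); i++) {
            Thread.sleep(1000);
        }

        return device.toString();
    }

    public static void logout(String host, int port, String iqn)
            throws IOException, InterruptedException {
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", host + ":" + port, "--logout");
    }

    private static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();

        if (p.waitFor() != 0) {
            throw new IOException("Command failed: " + String.join(" ", cmd));
        }
    }
}

The block-device path returned by login() is what would end up in the libvirt
disk definition as a block-device source (the kind of definition LibvirtVMDef
already supports), rather than a file-backed QCOW2/RAW image on a mounted
filesystem.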


On Wed, Sep 18, 2013 at 1:07 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Hey Marcus,
>
> What do you think about something relatively simple like this (below)? It
> parallels the XenServer and VMware code nicely.
>
> The idea is to only deal with the AttachVolumeCommand.
>
> If we are attaching a volume AND the underlying storage is so-called
> managed storage, at this point I invoke createStoragePool to create my
> iScsiAdmStoragePool object, establish a connection with the LUN, and
> prepare a KVMPhysicalDisk object, which will be requested a bit later
> during the actual attach.
>
> If we are detaching a volume AND the underlying storage is managed, the
> KVMStoragePool already exists, so we don't have to do anything special
> until after the volume is detached. At this point, we delete the storage
> pool (remove the iSCSI connection to the LUN and remove the reference to
> the iScsiAdmStoragePool from my adaptor).
>
>     private AttachVolumeAnswer execute(AttachVolumeCommand cmd) {
>         try {
>             Connect conn = LibvirtConnection.getConnectionByVmName(cmd.getVmName());
>
>             // Managed storage: connect to the LUN and register it as a pool before attaching.
>             if (cmd.getAttach() && cmd.isManaged()) {
>                 _storagePoolMgr.createStoragePool(cmd.get_iScsiName(), cmd.getStorageHost(), cmd.getStoragePort(),
>                         cmd.getVolumePath(), null, cmd.getPooltype());
>             }
>
>             KVMStoragePool primary = _storagePoolMgr.getStoragePool(cmd.getPooltype(), cmd.getPoolUuid());
>             KVMPhysicalDisk disk = primary.getPhysicalDisk(cmd.getVolumePath());
>
>             attachOrDetachDisk(conn, cmd.getAttach(), cmd.getVmName(), disk,
>                     cmd.getDeviceId().intValue(), cmd.getBytesReadRate(), cmd.getBytesWriteRate(), cmd.getIopsReadRate(), cmd.getIopsWriteRate());
>
>             // Managed storage: after a detach, delete the pool, which disconnects the LUN.
>             if (!cmd.getAttach() && cmd.isManaged()) {
>                 _storagePoolMgr.deleteStoragePool(cmd.getPooltype(), cmd.get_iScsiName());
>             }
>         } catch (LibvirtException e) {
>             return new AttachVolumeAnswer(cmd, e.toString());
>         } catch (InternalErrorException e) {
>             return new AttachVolumeAnswer(cmd, e.toString());
>         }
>
>         return new AttachVolumeAnswer(cmd, cmd.getDeviceId(), cmd.getVolumePath());
>     }
>
>
> On Wed, Sep 18, 2013 at 11:27 AM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Since a KVMStoragePool returns a capacity, used, and available number of
>> bytes, I will probably need to look into having this information ignored if
>> the storage_pool in question is "managed" as - in my case - it wouldn't
>> really make any sense.
>>
>>
>> On Wed, Sep 18, 2013 at 10:53 AM, Mike Tutkowski <
>> mike.tutkowski@solidfire.com> wrote:
>>
>>> Sure, sounds good.
>>>
>>> Right now there are only two storage plug-ins: Edison's default plug-in
>>> and the SolidFire plug-in.
>>>
>>> As an example, when createAsync is called in the plug-in, mine creates a
>>> new volume (LUN) on the SAN with a capacity and number of Min, Max, and
>>> Burst IOPS. Edison's sends a command to the hypervisor to take a chunk out
>>> of preallocated storage for a new volume (like create a new VDI in an
>>> existing SR).
>>>
>>>
>>> On Wed, Sep 18, 2013 at 10:49 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>
>>>> That wasn't my question, but I feel we're getting off in the weeds and
>>>> I can just look at the storage framework to see how it works and what
>>>> options it supports.
>>>>
>>>> On Wed, Sep 18, 2013 at 10:44 AM, Mike Tutkowski
>>>> <mi...@solidfire.com> wrote:
>>>> > At the time being, I am not aware of any other storage vendor with
>>>> truly
>>>> > guaranteed QoS.
>>>> >
>>>> > Most implement QoS in a relative sense (like thread priorities).
>>>> >
>>>> >
>>>> > On Wed, Sep 18, 2013 at 7:57 AM, Marcus Sorensen <shadowsor@gmail.com
>>>> >wrote:
>>>> >
>>>> >> Yeah, that's why I thought it was specific to your implementation.
>>>> Perhaps
>>>> >> that's true, then?
>>>> >> On Sep 18, 2013 12:04 AM, "Mike Tutkowski" <
>>>> mike.tutkowski@solidfire.com>
>>>> >> wrote:
>>>> >>
>>>> >> > I totally get where you're coming from with the tiered-pool
>>>> approach,
>>>> >> > though.
>>>> >> >
>>>> >> > Prior to SolidFire, I worked at HP and the product I worked on
>>>> allowed a
>>>> >> > single, clustered SAN to host multiple pools of storage. One pool
>>>> might
>>>> >> be
>>>> >> > made up of all-SSD storage nodes while another pool might be made
>>>> up of
>>>> >> > slower HDDs.
>>>> >> >
>>>> >> > That kind of tiering is not what SolidFire QoS is about, though,
>>>> as that
>>>> >> > kind of tiering does not guarantee QoS.
>>>> >> >
>>>> >> > In the SolidFire SAN, QoS was designed in from the beginning and is
>>>> >> > extremely granular. Each volume has its own performance and
>>>> capacity. You
>>>> >> > do not have to worry about Noisy Neighbors.
>>>> >> >
>>>> >> > The idea is to encourage businesses to trust the cloud with their
>>>> most
>>>> >> > critical business applications at a price point on par with
>>>> traditional
>>>> >> > SANs.
>>>> >> >
>>>> >> >
>>>> >> > On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski <
>>>> >> > mike.tutkowski@solidfire.com> wrote:
>>>> >> >
>>>> >> > > Ah, I think I see the miscommunication.
>>>> >> > >
>>>> >> > > I should have gone into a bit more detail about the SolidFire
>>>> SAN.
>>>> >> > >
>>>> >> > > It is built from the ground up to support QoS on a LUN-by-LUN
>>>> basis.
>>>> >> > Every
>>>> >> > > LUN is assigned a Min, Max, and Burst number of IOPS.
>>>> >> > >
>>>> >> > > The Min IOPS are a guaranteed number (as long as the SAN itself
>>>> is not
>>>> >> > > over provisioned). Capacity and IOPS are provisioned
>>>> independently.
>>>> >> > > Multiple volumes and multiple tenants using the same SAN do not
>>>> suffer
>>>> >> > from
>>>> >> > > the Noisy Neighbor effect.
>>>> >> > >
>>>> >> > > When you create a Disk Offering in CS that is storage tagged to
>>>> use
>>>> >> > > SolidFire primary storage, you specify a Min, Max, and Burst
>>>> number of
>>>> >> > IOPS
>>>> >> > > to provision from the SAN for volumes created from that Disk
>>>> Offering.
>>>> >> > >
>>>> >> > > There is no notion of RAID groups that you see in more
>>>> traditional
>>>> >> SANs.
>>>> >> > > The SAN is built from clusters of storage nodes and data is
>>>> replicated
>>>> >> > > amongst all SSDs in all storage nodes (this is an SSD-only SAN)
>>>> in the
>>>> >> > > cluster to avoid hot spots and protect the data should drives
>>>> and/or
>>>> >> > > nodes fail. You then scale the SAN by adding new storage nodes.
>>>> >> > >
>>>> >> > > Data is compressed and de-duplicated inline across the cluster
>>>> and all
>>>> >> > > volumes are thinly provisioned.
>>>> >> > >
>>>> >> > >
>>>> >> > > On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <
>>>> shadowsor@gmail.com
>>>> >> > >wrote:
>>>> >> > >
>>>> >> > >> I'm surprised there's no mention of pool on the SAN in your
>>>> >> description
>>>> >> > of
>>>> >> > >> the framework. I had assumed this was specific to your
>>>> implementation,
>>>> >> > >> because normally SANs host multiple disk pools, maybe multiple
>>>> RAID
>>>> >> 50s
>>>> >> > >> and
>>>> >> > >> 10s, or however the SAN admin wants to split it up. Maybe a pool
>>>> >> > intended
>>>> >> > >> for root disks and a separate one for data disks. Or one pool
>>>> for
>>>> >> > >> cloudstack and one dedicated to some other internal db
>>>> application.
>>>> >> But
>>>> >> > it
>>>> >> > >> sounds as though there's no place to specify which disks or
>>>> pool on
>>>> >> the
>>>> >> > >> SAN
>>>> >> > >> to use.
>>>> >> > >>
>>>> >> > >> We implemented our own internal storage SAN plugin based on
>>>> 4.1. We
>>>> >> used
>>>> >> > >> the 'path' attribute of the primary storage pool object to
>>>> specify
>>>> >> which
>>>> >> > >> pool name on the back end SAN to use, so we could create
>>>> all-ssd pools
>>>> >> > and
>>>> >> > >> slower spindle pools, then differentiate between them based on
>>>> storage
>>>> >> > >> tags. Normally the path attribute would be the mount point for
>>>> NFS,
>>>> >> but
>>>> >> > >> its
>>>> >> > >> just a string. So when registering ours we enter San dns host
>>>> name,
>>>> >> the
>>>> >> > >> san's rest api port, and the pool name. Then luns created from
>>>> that
>>>> >> > >> primary
>>>> >> > >> storage come from the matching disk pool on the SAN. We can
>>>> create and
>>>> >> > >> register multiple pools of different types and purposes on the
>>>> same
>>>> >> SAN.
>>>> >> > >> We
>>>> >> > >> haven't yet gotten to porting it to the 4.2 frame work, so it
>>>> will be
>>>> >> > >> interesting to see what we can come up with to make it work
>>>> similarly.
>>>> >> > >>  On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <
>>>> >> > mike.tutkowski@solidfire.com
>>>> >> > >> >
>>>> >> > >> wrote:
>>>> >> > >>
>>>> >> > >> > What you're saying here is definitely something we should talk
>>>> >> about.
>>>> >> > >> >
>>>> >> > >> > Hopefully my previous e-mail has clarified how this works a
>>>> bit.
>>>> >> > >> >
>>>> >> > >> > It mainly comes down to this:
>>>> >> > >> >
>>>> >> > >> > For the first time in CS history, primary storage is no longer
>>>> >> > required
>>>> >> > >> to
>>>> >> > >> > be preallocated by the admin and then handed to CS. CS
>>>> volumes don't
>>>> >> > >> have
>>>> >> > >> > to share a preallocated volume anymore.
>>>> >> > >> >
>>>> >> > >> > As of 4.2, primary storage can be based on a SAN (or some
>>>> other
>>>> >> > storage
>>>> >> > >> > device). You can tell CS how many bytes and IOPS to use from
>>>> this
>>>> >> > >> storage
>>>> >> > >> > device and CS invokes the appropriate plug-in to carve out
>>>> LUNs
>>>> >> > >> > dynamically.
>>>> >> > >> >
>>>> >> > >> > Each LUN is home to one and only one data disk. Data disks -
>>>> in this
>>>> >> > >> model
>>>> >> > >> > - never share a LUN.
>>>> >> > >> >
>>>> >> > >> > The main use case for this is so a CS volume can deliver
>>>> guaranteed
>>>> >> > >> IOPS if
>>>> >> > >> > the storage device (ex. SolidFire SAN) delivers guaranteed
>>>> IOPS on a
>>>> >> > >> > LUN-by-LUN basis.
>>>> >> > >> >
>>>> >> > >> >
>>>> >> > >> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <
>>>> >> > shadowsor@gmail.com
>>>> >> > >> > >wrote:
>>>> >> > >> >
>>>> >> > >> > > I guess whether or not a solidfire device is capable of
>>>> hosting
>>>> >> > >> > > multiple disk pools is irrelevant, we'd hope that we could
>>>> get the
>>>> >> > >> > > stats (maybe 30TB availabie, and 15TB allocated in LUNs).
>>>> But if
>>>> >> > these
>>>> >> > >> > > stats aren't collected, I can't as an admin define multiple
>>>> pools
>>>> >> > and
>>>> >> > >> > > expect cloudstack to allocate evenly from them or fill one
>>>> up and
>>>> >> > move
>>>> >> > >> > > to the next, because it doesn't know how big it is.
>>>> >> > >> > >
>>>> >> > >> > > Ultimately this discussion has nothing to do with the KVM
>>>> stuff
>>>> >> > >> > > itself, just a tangent, but something to think about.
>>>> >> > >> > >
>>>> >> > >> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <
>>>> >> > >> shadowsor@gmail.com>
>>>> >> > >> > > wrote:
>>>> >> > >> > > > Ok, on most storage pools it shows how many GB free/used
>>>> when
>>>> >> > >> listing
>>>> >> > >> > > > the pool both via API and in the UI. I'm guessing those
>>>> are
>>>> >> empty
>>>> >> > >> then
>>>> >> > >> > > > for the solid fire storage, but it seems like the user
>>>> should
>>>> >> have
>>>> >> > >> to
>>>> >> > >> > > > define some sort of pool that the luns get carved out of,
>>>> and
>>>> >> you
>>>> >> > >> > > > should be able to get the stats for that, right? Or is a
>>>> solid
>>>> >> > fire
>>>> >> > >> > > > appliance only one pool per appliance? This isn't about
>>>> billing,
>>>> >> > but
>>>> >> > >> > > > just so cloudstack itself knows whether or not there is
>>>> space
>>>> >> left
>>>> >> > >> on
>>>> >> > >> > > > the storage device, so cloudstack can go on allocating
>>>> from a
>>>> >> > >> > > > different primary storage as this one fills up. There are
>>>> also
>>>> >> > >> > > > notifications and things. It seems like there should be a
>>>> call
>>>> >> you
>>>> >> > >> can
>>>> >> > >> > > > handle for this, maybe Edison knows.
>>>> >> > >> > > >
>>>> >> > >> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <
>>>> >> > >> shadowsor@gmail.com>
>>>> >> > >> > > wrote:
>>>> >> > >> > > >> You respond to more than attach and detach, right? Don't
>>>> you
>>>> >> > create
>>>> >> > >> > > luns as
>>>> >> > >> > > >> well? Or are you just referring to the hypervisor stuff?
>>>> >> > >> > > >>
>>>> >> > >> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
>>>> >> > >> > mike.tutkowski@solidfire.com
>>>> >> > >> > > >
>>>> >> > >> > > >> wrote:
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> Hi Marcus,
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> I never need to respond to a CreateStoragePool call for
>>>> either
>>>> >> > >> > > XenServer
>>>> >> > >> > > >>> or
>>>> >> > >> > > >>> VMware.
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> What happens is I respond only to the Attach- and
>>>> >> Detach-volume
>>>> >> > >> > > commands.
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> Let's say an attach comes in:
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> In this case, I check to see if the storage is
>>>> "managed."
>>>> >> > Talking
>>>> >> > >> > > >>> XenServer
>>>> >> > >> > > >>> here, if it is, I log in to the LUN that is the disk we
>>>> want
>>>> >> to
>>>> >> > >> > attach.
>>>> >> > >> > > >>> After, if this is the first time attaching this disk, I
>>>> create
>>>> >> > an
>>>> >> > >> SR
>>>> >> > >> > > and a
>>>> >> > >> > > >>> VDI within the SR. If it is not the first time
>>>> attaching this
>>>> >> > >> disk,
>>>> >> > >> > the
>>>> >> > >> > > >>> LUN
>>>> >> > >> > > >>> already has the SR and VDI on it.
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> Once this is done, I let the normal "attach" logic run
>>>> because
>>>> >> > >> this
>>>> >> > >> > > logic
>>>> >> > >> > > >>> expected an SR and a VDI and now it has it.
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> It's the same thing for VMware: Just substitute
>>>> datastore for
>>>> >> SR
>>>> >> > >> and
>>>> >> > >> > > VMDK
>>>> >> > >> > > >>> for VDI.
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> Does that make sense?
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> Thanks!
>>>> >> > >> > > >>>
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
>>>> >> > >> > > >>> <sh...@gmail.com>wrote:
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> > What do you do with Xen? I imagine the user enters the
>>>> SAN
>>>> >> > >> details
>>>> >> > >> > > when
>>>> >> > >> > > >>> > registering the pool? And the pool details are
>>>> basically just
>>>> >> > >> > > instructions
>>>> >> > >> > > >>> > on
>>>> >> > >> > > >>> > how to log into a target, correct?
>>>> >> > >> > > >>> >
>>>> >> > >> > > >>> > You can choose to log in a KVM host to the target
>>>> during
>>>> >> > >> > > >>> > createStoragePool
>>>> >> > >> > > >>> > and save the pool in a map, or just save the pool
>>>> info in a
>>>> >> > map
>>>> >> > >> for
>>>> >> > >> > > >>> > future
>>>> >> > >> > > >>> > reference by uuid, for when you do need to log in. The
>>>> >> > >> > > createStoragePool
>>>> >> > >> > > >>> > then just becomes a way to save the pool info to the
>>>> agent.
>>>> >> > >> > > Personally,
>>>> >> > >> > > >>> > I'd
>>>> >> > >> > > >>> > log in on the pool create and look/scan for specific
>>>> luns
>>>> >> when
>>>> >> > >> > > they're
>>>> >> > >> > > >>> > needed, but I haven't thought it through thoroughly.
>>>> I just
>>>> >> > say
>>>> >> > >> > that
>>>> >> > >> > > >>> > mainly
>>>> >> > >> > > >>> > because login only happens once, the first time the
>>>> pool is
>>>> >> > >> used,
>>>> >> > >> > and
>>>> >> > >> > > >>> > every
>>>> >> > >> > > >>> > other storage command is about discovering new luns
>>>> or maybe
>>>> >> > >> > > >>> > deleting/disconnecting luns no longer needed. On the
>>>> other
>>>> >> > hand,
>>>> >> > >> > you
>>>> >> > >> > > >>> > could
>>>> >> > >> > > >>> > do all of the above: log in on pool create, then also
>>>> check
>>>> >> if
>>>> >> > >> > you're
>>>> >> > >> > > >>> > logged in on other commands and log in if you've lost
>>>> >> > >> connection.
>>>> >> > >> > > >>> >
>>>> >> > >> > > >>> > With Xen, what does your registered pool show in
>>>> the UI
>>>> >> for
>>>> >> > >> > > avail/used
>>>> >> > >> > > >>> > capacity, and how does it get that info? I assume
>>>> there is
>>>> >> > some
>>>> >> > >> > sort
>>>> >> > >> > > of
>>>> >> > >> > > >>> > disk pool that the luns are carved from, and that your
>>>> >> plugin
>>>> >> > is
>>>> >> > >> > > called
>>>> >> > >> > > >>> > to
>>>> >> > >> > > >>> > talk to the SAN and expose to the user how much of
>>>> that pool
>>>> >> > has
>>>> >> > >> > been
>>>> >> > >> > > >>> > allocated. Knowing how you already solve these
>>>> problems
>>>> >> with
>>>> >> > >> Xen
>>>> >> > >> > > will
>>>> >> > >> > > >>> > help
>>>> >> > >> > > >>> > figure out what to do with KVM.
>>>> >> > >> > > >>> >
>>>> >> > >> > > >>> > If this is the case, I think the plugin can continue
>>>> to
>>>> >> handle
>>>> >> > >> it
>>>> >> > >> > > rather
>>>> >> > >> > > >>> > than getting details from the agent. I'm not sure if
>>>> that
>>>> >> > means
>>>> >> > >> > nulls
>>>> >> > >> > > >>> > are
>>>> >> > >> > > >>> > OK for these on the agent side or what, I need to
>>>> look at
>>>> >> the
>>>> >> > >> > storage
>>>> >> > >> > > >>> > plugin arch more closely.
>>>> >> > >> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
>>>> >> > >> > > mike.tutkowski@solidfire.com>
>>>> >> > >> > > >>> > wrote:
>>>> >> > >> > > >>> >
>>>> >> > >> > > >>> > > Hey Marcus,
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > > I'm reviewing your e-mails as I implement the
>>>> necessary
>>>> >> > >> methods
>>>> >> > >> > in
>>>> >> > >> > > new
>>>> >> > >> > > >>> > > classes.
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > > "So, referencing StorageAdaptor.java,
>>>> createStoragePool
>>>> >> > >> accepts
>>>> >> > >> > > all of
>>>> >> > >> > > >>> > > the pool data (host, port, name, path) which would
>>>> be used
>>>> >> > to
>>>> >> > >> log
>>>> >> > >> > > the
>>>> >> > >> > > >>> > > host into the initiator."
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > > Can you tell me, in my case, since a storage pool
>>>> (primary
>>>> >> > >> > > storage) is
>>>> >> > >> > > >>> > > actually the SAN, I wouldn't really be logging into
>>>> >> anything
>>>> >> > >> at
>>>> >> > >> > > this
>>>> >> > >> > > >>> > point,
>>>> >> > >> > > >>> > > correct?
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > > Also, what kind of capacity, available, and used
>>>> bytes
>>>> >> make
>>>> >> > >> sense
>>>> >> > >> > > to
>>>> >> > >> > > >>> > report
>>>> >> > >> > > >>> > > for KVMStoragePool (since KVMStoragePool represents
>>>> the
>>>> >> SAN
>>>> >> > >> in my
>>>> >> > >> > > case
>>>> >> > >> > > >>> > and
>>>> >> > >> > > >>> > > not an individual LUN)?
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > > Thanks!
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
>>>> >> > >> > > shadowsor@gmail.com
>>>> >> > >> > > >>> > > >wrote:
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > > > Ok, KVM will be close to that, of course, because
>>>> only
>>>> >> the
>>>> >> > >> > > >>> > > > hypervisor
>>>> >> > >> > > >>> > > > classes differ, the rest is all mgmt server.
>>>> Creating a
>>>> >> > >> volume
>>>> >> > >> > is
>>>> >> > >> > > >>> > > > just
>>>> >> > >> > > >>> > > > a db entry until it's deployed for the first time.
>>>> >> > >> > > >>> > > > AttachVolumeCommand
>>>> >> > >> > > >>> > > > on the agent side (LibvirtStorageAdaptor.java is
>>>> >> analogous
>>>> >> > >> to
>>>> >> > >> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm
>>>> commands
>>>> >> > (via
>>>> >> > >> a
>>>> >> > >> > KVM
>>>> >> > >> > > >>> > > > StorageAdaptor) to log in the host to the target
>>>> and
>>>> >> then
>>>> >> > >> you
>>>> >> > >> > > have a
>>>> >> > >> > > >>> > > > block device.  Maybe libvirt will do that for
>>>> you, but
>>>> >> my
>>>> >> > >> quick
>>>> >> > >> > > read
>>>> >> > >> > > >>> > > > made it sound like the iscsi libvirt pool type is
>>>> >> > actually a
>>>> >> > >> > > pool,
>>>> >> > >> > > >>> > > > not
>>>> >> > >> > > >>> > > > a lun or volume, so you'll need to figure out if
>>>> that
>>>> >> > works
>>>> >> > >> or
>>>> >> > >> > if
>>>> >> > >> > > >>> > > > you'll have to use iscsiadm commands.
>>>> >> > >> > > >>> > > >
>>>> >> > >> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor
>>>> >> (because
>>>> >> > >> > Libvirt
>>>> >> > >> > > >>> > > > doesn't really manage your pool the way you want),
>>>> >> you're
>>>> >> > >> going
>>>> >> > >> > > to
>>>> >> > >> > > >>> > > > have to create a version of KVMStoragePool class
>>>> and a
>>>> >> > >> > > >>> > > > StorageAdaptor
>>>> >> > >> > > >>> > > > class (see LibvirtStoragePool.java and
>>>> >> > >> > > LibvirtStorageAdaptor.java),
>>>> >> > >> > > >>> > > > implementing all of the methods, then in
>>>> >> > >> KVMStorageManager.java
>>>> >> > >> > > >>> > > > there's a "_storageMapper" map. This is used to
>>>> select
>>>> >> the
>>>> >> > >> > > correct
>>>> >> > >> > > >>> > > > adaptor, you can see in this file that every call
>>>> first
>>>> >> > >> pulls
>>>> >> > >> > the
>>>> >> > >> > > >>> > > > correct adaptor out of this map via
>>>> getStorageAdaptor.
>>>> >> So
>>>> >> > >> you
>>>> >> > >> > can
>>>> >> > >> > > >>> > > > see
>>>> >> > >> > > >>> > > > a comment in this file that says "add other
>>>> storage
>>>> >> > adaptors
>>>> >> > >> > > here",
>>>> >> > >> > > >>> > > > where it puts to this map, this is where you'd
>>>> register
>>>> >> > your
>>>> >> > >> > > >>> > > > adaptor.
>>>> >> > >> > > >>> > > >
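>>>> >> > >> > > >>> > > > The registration itself would just be another put into
>>>> >> > >> > > >>> > > > that map, something along the lines of the following
>>>> >> > >> > > >>> > > > (the map key and the adaptor class name here are
>>>> >> > >> > > >>> > > > guesses, so check the actual file):
>>>> >> > >> > > >>> > > >
>>>> >> > >> > > >>> > > >     _storageMapper.put(StoragePoolType.Iscsi.toString(), new IscsiAdmStorageAdaptor());
>>>> >> > >> > > >>> > > >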
>>>> >> > >> > > >>> > > > So, referencing StorageAdaptor.java,
>>>> createStoragePool
>>>> >> > >> accepts
>>>> >> > >> > > all
>>>> >> > >> > > >>> > > > of
>>>> >> > >> > > >>> > > > the pool data (host, port, name, path) which
>>>> would be
>>>> >> used
>>>> >> > >> to
>>>> >> > >> > log
>>>> >> > >> > > >>> > > > the
>>>> >> > >> > > >>> > > > host into the initiator. I *believe* the method
>>>> >> > >> getPhysicalDisk
>>>> >> > >> > > will
>>>> >> > >> > > >>> > > > need to do the work of attaching the lun.
>>>> >> > >>  AttachVolumeCommand
>>>> >> > >> > > calls
>>>> >> > >> > > >>> > > > this and then creates the XML diskdef and
>>>> attaches it to
>>>> >> > the
>>>> >> > >> > VM.
>>>> >> > >> > > >>> > > > Now,
>>>> >> > >> > > >>> > > > one thing you need to know is that
>>>> createStoragePool is
>>>> >> > >> called
>>>> >> > >> > > >>> > > > often,
>>>> >> > >> > > >>> > > > sometimes just to make sure the pool is there.
>>>> You may
>>>> >> > want
>>>> >> > >> to
>>>> >> > >> > > >>> > > > create
>>>> >> > >> > > >>> > > > a map in your adaptor class and keep track of
>>>> pools that
>>>> >> > >> have
>>>> >> > >> > > been
>>>> >> > >> > > >>> > > > created, LibvirtStorageAdaptor doesn't have to do
>>>> this
>>>> >> > >> because
>>>> >> > >> > it
>>>> >> > >> > > >>> > > > asks
>>>> >> > >> > > >>> > > > libvirt about which storage pools exist. There
>>>> are also
>>>> >> > >> calls
>>>> >> > >> > to
>>>> >> > >> > > >>> > > > refresh the pool stats, and all of the other
>>>> calls can
>>>> >> be
>>>> >> > >> seen
>>>> >> > >> > in
>>>> >> > >> > > >>> > > > the
>>>> >> > >> > > >>> > > > StorageAdaptor as well. There's a createPhysical
>>>> disk,
>>>> >> > >> clone,
>>>> >> > >> > > etc,
>>>> >> > >> > > >>> > > > but
>>>> >> > >> > > >>> > > > it's probably a hold-over from 4.1, as I have the
>>>> vague
>>>> >> > idea
>>>> >> > >> > that
>>>> >> > >> > > >>> > > > volumes are created on the mgmt server via the
>>>> plugin
>>>> >> now,
>>>> >> > >> so
>>>> >> > >> > > >>> > > > whatever
>>>> >> > >> > > >>> > > > doesn't apply can just be stubbed out (or
>>>> optionally
>>>> >> > >> > > >>> > > > extended/reimplemented here, if you don't mind
>>>> the hosts
>>>> >> > >> > talking
>>>> >> > >> > > to
>>>> >> > >> > > >>> > > > the san api).
>>>> >> > >> > > >>> > > >
>>>> >> > >> > > >>> > > > There is a difference between attaching new
>>>> volumes and
>>>> >> > >> > > launching a
>>>> >> > >> > > >>> > > > VM
>>>> >> > >> > > >>> > > > with existing volumes.  In the latter case, the VM
>>>> >> > >> definition
>>>> >> > >> > > that
>>>> >> > >> > > >>> > > > was
>>>> >> > >> > > >>> > > > passed to the KVM agent includes the disks,
>>>> >> > (StartCommand).
>>>> >> > >> > > >>> > > >
>>>> >> > >> > > >>> > > > I'd be interested in how your pool is defined for
>>>> Xen, I
>>>> >> > >> > imagine
>>>> >> > >> > > it
>>>> >> > >> > > >>> > > > would need to be kept the same. Is it just a
>>>> definition
>>>> >> to
>>>> >> > >> the
>>>> >> > >> > > SAN
>>>> >> > >> > > >>> > > > (ip address or some such, port number) and
>>>> perhaps a
>>>> >> > volume
>>>> >> > >> > pool
>>>> >> > >> > > >>> > > > name?
>>>> >> > >> > > >>> > > >
>>>> >> > >> > > >>> > > > > If there is a way for me to update the ACL list
>>>> on the
>>>> >> > >> SAN to
>>>> >> > >> > > have
>>>> >> > >> > > >>> > > only a
>>>> >> > >> > > >>> > > > > single KVM host have access to the volume, that
>>>> would
>>>> >> be
>>>> >> > >> > ideal.
>>>> >> > >> > > >>> > > >
>>>> >> > >> > > >>> > > > That depends on your SAN API.  I was under the
>>>> >> impression
>>>> >> > >> that
>>>> >> > >> > > the
>>>> >> > >> > > >>> > > > storage plugin framework allowed for acls, or for
>>>> you to
>>>> >> > do
>>>> >> > >> > > whatever
>>>> >> > >> > > >>> > > > you want for create/attach/delete/snapshot, etc.
>>>> You'd
>>>> >> > just
>>>> >> > >> > call
>>>> >> > >> > > >>> > > > your
>>>> >> > >> > > >>> > > > SAN API with the host info for the ACLs prior to
>>>> when
>>>> >> the
>>>> >> > >> disk
>>>> >> > >> > is
>>>> >> > >> > > >>> > > > attached (or the VM is started).  I'd have to
>>>> look more
>>>> >> at
>>>> >> > >> the
>>>> >> > >> > > >>> > > > framework to know the details, in 4.1 I would do
>>>> this in
>>>> >> > >> > > >>> > > > getPhysicalDisk just prior to connecting up the
>>>> LUN.
>>>> >> > >> > > >>> > > >
>>>> >> > >> > > >>> > > >
>>>> >> > >> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>>> >> > >> > > >>> > > > <mi...@solidfire.com> wrote:
>>>> >> > >> > > >>> > > > > OK, yeah, the ACL part will be interesting.
>>>> That is a
>>>> >> > bit
>>>> >> > >> > > >>> > > > > different
>>>> >> > >> > > >>> > > from
>>>> >> > >> > > >>> > > > how
>>>> >> > >> > > >>> > > > > it works with XenServer and VMware.
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > Just to give you an idea how it works in 4.2
>>>> with
>>>> >> > >> XenServer:
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > * The user creates a CS volume (this is just
>>>> recorded
>>>> >> in
>>>> >> > >> the
>>>> >> > >> > > >>> > > > cloud.volumes
>>>> >> > >> > > >>> > > > > table).
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > * The user attaches the volume as a disk to a
>>>> VM for
>>>> >> the
>>>> >> > >> > first
>>>> >> > >> > > >>> > > > > time
>>>> >> > >> > > >>> > (if
>>>> >> > >> > > >>> > > > the
>>>> >> > >> > > >>> > > > > storage allocator picks the SolidFire plug-in,
>>>> the
>>>> >> > storage
>>>> >> > >> > > >>> > > > > framework
>>>> >> > >> > > >>> > > > invokes
>>>> >> > >> > > >>> > > > > a method on the plug-in that creates a volume
>>>> on the
>>>> >> > >> > SAN...info
>>>> >> > >> > > >>> > > > > like
>>>> >> > >> > > >>> > > the
>>>> >> > >> > > >>> > > > IQN
>>>> >> > >> > > >>> > > > > of the SAN volume is recorded in the DB).
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > * CitrixResourceBase's
>>>> execute(AttachVolumeCommand) is
>>>> >> > >> > > executed.
>>>> >> > >> > > >>> > > > > It
>>>> >> > >> > > >>> > > > > determines based on a flag passed in that the
>>>> storage
>>>> >> in
>>>> >> > >> > > question
>>>> >> > >> > > >>> > > > > is
>>>> >> > >> > > >>> > > > > "CloudStack-managed" storage (as opposed to
>>>> >> > "traditional"
>>>> >> > >> > > >>> > preallocated
>>>> >> > >> > > >>> > > > > storage). This tells it to discover the iSCSI
>>>> target.
>>>> >> > Once
>>>> >> > >> > > >>> > > > > discovered
>>>> >> > >> > > >>> > > it
>>>> >> > >> > > >>> > > > > determines if the iSCSI target already contains
>>>> a
>>>> >> > storage
>>>> >> > >> > > >>> > > > > repository
>>>> >> > >> > > >>> > > (it
>>>> >> > >> > > >>> > > > > would if this were a re-attach situation). If
>>>> it does
>>>> >> > >> contain
>>>> >> > >> > > an
>>>> >> > >> > > >>> > > > > SR
>>>> >> > >> > > >>> > > > already,
>>>> >> > >> > > >>> > > > > then there should already be one VDI, as well.
>>>> If
>>>> >> there
>>>> >> > >> is no
>>>> >> > >> > > SR,
>>>> >> > >> > > >>> > > > > an
>>>> >> > >> > > >>> > SR
>>>> >> > >> > > >>> > > > is
>>>> >> > >> > > >>> > > > > created and a single VDI is created within it
>>>> (that
>>>> >> > takes
>>>> >> > >> up
>>>> >> > >> > > about
>>>> >> > >> > > >>> > > > > as
>>>> >> > >> > > >>> > > > much
>>>> >> > >> > > >>> > > > > space as was requested for the CloudStack
>>>> volume).
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > * The normal attach-volume logic continues (it
>>>> depends
>>>> >> > on
>>>> >> > >> the
>>>> >> > >> > > >>> > existence
>>>> >> > >> > > >>> > > > of
>>>> >> > >> > > >>> > > > > an SR and a VDI).
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > The VMware case is essentially the same (mainly
>>>> just
>>>> >> > >> > substitute
>>>> >> > >> > > >>> > > datastore
>>>> >> > >> > > >>> > > > > for SR and VMDK for VDI).
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > In both cases, all hosts in the cluster have
>>>> >> discovered
>>>> >> > >> the
>>>> >> > >> > > iSCSI
>>>> >> > >> > > >>> > > target,
>>>> >> > >> > > >>> > > > > but only the host that is currently running the
>>>> VM
>>>> >> that
>>>> >> > is
>>>> >> > >> > > using
>>>> >> > >> > > >>> > > > > the
>>>> >> > >> > > >>> > > VDI
>>>> >> > >> > > >>> > > > (or
>>>> >> > >> > > >>> > > > > VMKD) is actually using the disk.
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > Live Migration should be OK because the
>>>> hypervisors
>>>> >> > >> > communicate
>>>> >> > >> > > >>> > > > > with
>>>> >> > >> > > >>> > > > > whatever metadata they have on the SR (or
>>>> datastore).
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > I see what you're saying with KVM, though.
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > In that case, the hosts are clustered only in
>>>> >> > CloudStack's
>>>> >> > >> > > eyes.
>>>> >> > >> > > >>> > > > > CS
>>>> >> > >> > > >>> > > > controls
>>>> >> > >> > > >>> > > > > Live Migration. You don't really need a
>>>> clustered
>>>> >> > >> filesystem
>>>> >> > >> > on
>>>> >> > >> > > >>> > > > > the
>>>> >> > >> > > >>> > > LUN.
>>>> >> > >> > > >>> > > > The
>>>> >> > >> > > >>> > > > > LUN could be handed over raw to the VM using it.
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > If there is a way for me to update the ACL list
>>>> on the
>>>> >> > >> SAN to
>>>> >> > >> > > have
>>>> >> > >> > > >>> > > only a
>>>> >> > >> > > >>> > > > > single KVM host have access to the volume, that
>>>> would
>>>> >> be
>>>> >> > >> > ideal.
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > Also, I agree I'll need to use iscsiadm to
>>>> discover
>>>> >> and
>>>> >> > >> log
>>>> >> > >> > in
>>>> >> > >> > > to
>>>> >> > >> > > >>> > > > > the
>>>> >> > >> > > >>> > > > iSCSI
>>>> >> > >> > > >>> > > > > target. I'll also need to take the resultant new
>>>> >> device
>>>> >> > >> and
>>>> >> > >> > > pass
>>>> >> > >> > > >>> > > > > it
>>>> >> > >> > > >>> > > into
>>>> >> > >> > > >>> > > > the
>>>> >> > >> > > >>> > > > > VM.
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > Does this sound reasonable? Please call me out
>>>> on
>>>> >> > >> anything I
>>>> >> > >> > > seem
>>>> >> > >> > > >>> > > > incorrect
>>>> >> > >> > > >>> > > > > about. :)
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > Thanks for all the thought on this, Marcus!
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus
>>>> Sorensen <
>>>> >> > >> > > >>> > shadowsor@gmail.com>
>>>> >> > >> > > >>> > > > > wrote:
>>>> >> > >> > > >>> > > > >>
>>>> >> > >> > > >>> > > > >> Perfect. You'll have a domain def ( the VM), a
>>>> disk
>>>> >> > def,
>>>> >> > >> and
>>>> >> > >> > > the
>>>> >> > >> > > >>> > > attach
>>>> >> > >> > > >>> > > > >> the disk def to the vm. You may need to do
>>>> your own
>>>> >> > >> > > >>> > > > >> StorageAdaptor
>>>> >> > >> > > >>> > and
>>>> >> > >> > > >>> > > > run
>>>> >> > >> > > >>> > > > >> iscsiadm commands to accomplish that,
>>>> depending on
>>>> >> how
>>>> >> > >> the
>>>> >> > >> > > >>> > > > >> libvirt
>>>> >> > >> > > >>> > > iscsi
>>>> >> > >> > > >>> > > > >> works. My impression is that a 1:1:1
>>>> pool/lun/volume
>>>> >> > >> isn't
>>>> >> > >> > > how it
>>>> >> > >> > > >>> > > works
>>>> >> > >> > > >>> > > > on
>>>> >> > >> > > >>> > > > >> xen at the moment, nor is it ideal.
>>>> >> > >> > > >>> > > > >>
>>>> >> > >> > > >>> > > > >> Your plugin will handle acls as far as which
>>>> host can
>>>> >> > see
>>>> >> > >> > > which
>>>> >> > >> > > >>> > > > >> luns
>>>> >> > >> > > >>> > > as
>>>> >> > >> > > >>> > > > >> well, I remember discussing that months ago,
>>>> so that
>>>> >> a
>>>> >> > >> disk
>>>> >> > >> > > won't
>>>> >> > >> > > >>> > > > >> be
>>>> >> > >> > > >>> > > > >> connected until the hypervisor has exclusive
>>>> access,
>>>> >> so
>>>> >> > >> it
>>>> >> > >> > > will
>>>> >> > >> > > >>> > > > >> be
>>>> >> > >> > > >>> > > safe
>>>> >> > >> > > >>> > > > and
>>>> >> > >> > > >>> > > > >> fence the disk from rogue nodes that
>>>> cloudstack loses
>>>> >> > >> > > >>> > > > >> connectivity
>>>> >> > >> > > >>> > > > with. It
>>>> >> > >> > > >>> > > > >> should revoke access to everything but the
>>>> target
>>>> >> > host...
>>>> >> > >> > > Except
>>>> >> > >> > > >>> > > > >> for
>>>> >> > >> > > >>> > > > during
>>>> >> > >> > > >>> > > > >> migration but we can discuss that later,
>>>> there's a
>>>> >> > >> migration
>>>> >> > >> > > prep
>>>> >> > >> > > >>> > > > process
>>>> >> > >> > > >>> > > > >> where the new host can be added to the acls,
>>>> and the
>>>> >> > old
>>>> >> > >> > host
>>>> >> > >> > > can
>>>> >> > >> > > >>> > > > >> be
>>>> >> > >> > > >>> > > > removed
>>>> >> > >> > > >>> > > > >> post migration.
>>>> >> > >> > > >>> > > > >>
>>>> >> > >> > > >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>>>> >> > >> > > >>> > > mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > >> wrote:
>>>> >> > >> > > >>> > > > >>>
>>>> >> > >> > > >>> > > > >>> Yeah, that would be ideal.
>>>> >> > >> > > >>> > > > >>>
>>>> >> > >> > > >>> > > > >>> So, I would still need to discover the iSCSI
>>>> target,
>>>> >> > >> log in
>>>> >> > >> > > to
>>>> >> > >> > > >>> > > > >>> it,
>>>> >> > >> > > >>> > > then
>>>> >> > >> > > >>> > > > >>> figure out what /dev/sdX was created as a
>>>> result
>>>> >> (and
>>>> >> > >> leave
>>>> >> > >> > > it
>>>> >> > >> > > >>> > > > >>> as
>>>> >> > >> > > >>> > is
>>>> >> > >> > > >>> > > -
>>>> >> > >> > > >>> > > > do
>>>> >> > >> > > >>> > > > >>> not format it with any file
>>>> system...clustered or
>>>> >> > not).
>>>> >> > >> I
>>>> >> > >> > > would
>>>> >> > >> > > >>> > pass
>>>> >> > >> > > >>> > > > that
>>>> >> > >> > > >>> > > > >>> device into the VM.
>>>> >> > >> > > >>> > > > >>>
>>>> >> > >> > > >>> > > > >>> Kind of accurate?
>>>> >> > >> > > >>> > > > >>>
>>>> >> > >> > > >>> > > > >>>
>>>> >> > >> > > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus
>>>> Sorensen <
>>>> >> > >> > > >>> > > shadowsor@gmail.com>
>>>> >> > >> > > >>> > > > >>> wrote:
>>>> >> > >> > > >>> > > > >>>>
>>>> >> > >> > > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the
>>>> disk
>>>> >> > >> > > definitions.
>>>> >> > >> > > >>> > There
>>>> >> > >> > > >>> > > > are
>>>> >> > >> > > >>> > > > >>>> ones that work for block devices rather than
>>>> files.
>>>> >> > You
>>>> >> > >> > can
>>>> >> > >> > > >>> > > > >>>> piggy
>>>> >> > >> > > >>> > > > back off
>>>> >> > >> > > >>> > > > >>>> of the existing disk definitions and attach
>>>> it to
>>>> >> the
>>>> >> > >> vm
>>>> >> > >> > as
>>>> >> > >> > > a
>>>> >> > >> > > >>> > block
>>>> >> > >> > > >>> > > > device.
>>>> >> > >> > > >>> > > > >>>> The definition is an XML string per libvirt
>>>> XML
>>>> >> > format.
>>>> >> > >> > You
>>>> >> > >> > > may
>>>> >> > >> > > >>> > want
>>>> >> > >> > > >>> > > > to use
>>>> >> > >> > > >>> > > > >>>> an alternate path to the disk rather than
>>>> just
>>>> >> > /dev/sdx
>>>> >> > >> > > like I
>>>> >> > >> > > >>> > > > mentioned,
>>>> >> > >> > > >>> > > > >>>> there are by-id paths to the block devices,
>>>> as well
>>>> >> > as
>>>> >> > >> > other
>>>> >> > >> > > >>> > > > >>>> ones
>>>> >> > >> > > >>> > > > that will
>>>> >> > >> > > >>> > > > >>>> be consistent and easier for management, not
>>>> sure
>>>> >> how
>>>> >> > >> > > familiar
>>>> >> > >> > > >>> > > > >>>> you
>>>> >> > >> > > >>> > > > are with
>>>> >> > >> > > >>> > > > >>>> device naming on Linux.
>>>> >> > >> > > >>> > > > >>>>
>>>> >> > >> > > >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>>>> >> > >> > > >>> > > > >>>> <sh...@gmail.com>
>>>> >> > >> > > >>> > > > wrote:
>>>> >> > >> > > >>> > > > >>>>>
>>>> >> > >> > > >>> > > > >>>>> No, as that would rely on virtualized
>>>> >> network/iscsi
>>>> >> > >> > > initiator
>>>> >> > >> > > >>> > > inside
>>>> >> > >> > > >>> > > > >>>>> the vm, which also sucks. I mean attach
>>>> /dev/sdx
>>>> >> > (your
>>>> >> > >> > lun
>>>> >> > >> > > on
>>>> >> > >> > > >>> > > > hypervisor) as
>>>> >> > >> > > >>> > > > >>>>> a disk to the VM, rather than attaching
>>>> some image
>>>> >> > >> file
>>>> >> > >> > > that
>>>> >> > >> > > >>> > > resides
>>>> >> > >> > > >>> > > > on a
>>>> >> > >> > > >>> > > > >>>>> filesystem, mounted on the host, living on a
>>>> >> target.
>>>> >> > >> > > >>> > > > >>>>>
>>>> >> > >> > > >>> > > > >>>>> Actually, if you plan on the storage
>>>> supporting
>>>> >> live
>>>> >> > >> > > migration
>>>> >> > >> > > >>> > > > >>>>> I
>>>> >> > >> > > >>> > > > think
>>>> >> > >> > > >>> > > > >>>>> this is the only way. You can't put a
>>>> filesystem
>>>> >> on
>>>> >> > it
>>>> >> > >> > and
>>>> >> > >> > > >>> > > > >>>>> mount
>>>> >> > >> > > >>> > it
>>>> >> > >> > > >>> > > > in two
>>>> >> > >> > > >>> > > > >>>>> places to facilitate migration unless its a
>>>> >> > clustered
>>>> >> > >> > > >>> > > > >>>>> filesystem,
>>>> >> > >> > > >>> > > in
>>>> >> > >> > > >>> > > > which
>>>> >> > >> > > >>> > > > >>>>> case you're back to shared mount point.
>>>> >> > >> > > >>> > > > >>>>>
>>>> >> > >> > > >>> > > > >>>>> As far as I'm aware, the xenserver SR style
>>>> is
>>>> >> > >> basically
>>>> >> > >> > > LVM
>>>> >> > >> > > >>> > with a
>>>> >> > >> > > >>> > > > xen
>>>> >> > >> > > >>> > > > >>>>> specific cluster management, a custom CLVM.
>>>> They
>>>> >> > don't
>>>> >> > >> > use
>>>> >> > >> > > a
>>>> >> > >> > > >>> > > > filesystem
>>>> >> > >> > > >>> > > > >>>>> either.
>>>> >> > >> > > >>> > > > >>>>>
>>>> >> > >> > > >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>>> >> > >> > > >>> > > > >>>>> <mi...@solidfire.com> wrote:
>>>> >> > >> > > >>> > > > >>>>>>
>>>> >> > >> > > >>> > > > >>>>>> When you say, "wire up the lun directly to
>>>> the
>>>> >> vm,"
>>>> >> > >> do
>>>> >> > >> > you
>>>> >> > >> > > >>> > > > >>>>>> mean
>>>> >> > >> > > >>> > > > >>>>>> circumventing the hypervisor? I didn't
>>>> think we
>>>> >> > >> could do
>>>> >> > >> > > that
>>>> >> > >> > > >>> > > > >>>>>> in
>>>> >> > >> > > >>> > > CS.
>>>> >> > >> > > >>> > > > >>>>>> OpenStack, on the other hand, always
>>>> circumvents
>>>> >> > the
>>>> >> > >> > > >>> > > > >>>>>> hypervisor,
>>>> >> > >> > > >>> > > as
>>>> >> > >> > > >>> > > > far as I
>>>> >> > >> > > >>> > > > >>>>>> know.
>>>> >> > >> > > >>> > > > >>>>>>
>>>> >> > >> > > >>> > > > >>>>>>
>>>> >> > >> > > >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus
>>>> Sorensen
>>>> >> <
>>>> >> > >> > > >>> > > > shadowsor@gmail.com>
>>>> >> > >> > > >>> > > > >>>>>> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>> Better to wire up the lun directly to the
>>>> vm
>>>> >> > unless
>>>> >> > >> > > there is
>>>> >> > >> > > >>> > > > >>>>>>> a
>>>> >> > >> > > >>> > > good
>>>> >> > >> > > >>> > > > >>>>>>> reason not to.
>>>> >> > >> > > >>> > > > >>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus
>>>> Sorensen" <
>>>> >> > >> > > >>> > shadowsor@gmail.com>
>>>> >> > >> > > >>> > > > >>>>>>> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>> You could do that, but as mentioned I
>>>> think
>>>> >> its a
>>>> >> > >> > > mistake
>>>> >> > >> > > >>> > > > >>>>>>>> to
>>>> >> > >> > > >>> > go
>>>> >> > >> > > >>> > > to
>>>> >> > >> > > >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of
>>>> CS
>>>> >> > >> volumes to
>>>> >> > >> > > luns
>>>> >> > >> > > >>> > and
>>>> >> > >> > > >>> > > > then putting
>>>> >> > >> > > >>> > > > >>>>>>>> a filesystem on it, mounting it, and then
>>>> >> > putting a
>>>> >> > >> > > QCOW2
>>>> >> > >> > > >>> > > > >>>>>>>> or
>>>> >> > >> > > >>> > > even
>>>> >> > >> > > >>> > > > RAW disk
>>>> >> > >> > > >>> > > > >>>>>>>> image on that filesystem. You'll lose a
>>>> lot of
>>>> >> > iops
>>>> >> > >> > > along
>>>> >> > >> > > >>> > > > >>>>>>>> the
>>>> >> > >> > > >>> > > > way, and have
>>>> >> > >> > > >>> > > > >>>>>>>> more overhead with the filesystem and its
>>>> >> > >> journaling,
>>>> >> > >> > > etc.
>>>> >> > >> > > >>> > > > >>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>>> >> > >> > > >>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new
>>>> ground
>>>> >> > in
>>>> >> > >> KVM
>>>> >> > >> > > with
>>>> >> > >> > > >>> > CS.
>>>> >> > >> > > >>> > > > >>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>> So, the way people use our SAN with KVM
>>>> and CS
>>>> >> > >> today
>>>> >> > >> > > is by
>>>> >> > >> > > >>> > > > >>>>>>>>> selecting SharedMountPoint and
>>>> specifying the
>>>> >> > >> > location
>>>> >> > >> > > of
>>>> >> > >> > > >>> > > > >>>>>>>>> the
>>>> >> > >> > > >>> > > > share.
>>>> >> > >> > > >>> > > > >>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>> They can set up their share using Open
>>>> iSCSI
>>>> >> by
>>>> >> > >> > > >>> > > > >>>>>>>>> discovering
>>>> >> > >> > > >>> > > their
>>>> >> > >> > > >>> > > > >>>>>>>>> iSCSI target, logging in to it, then
>>>> mounting
>>>> >> it
>>>> >> > >> > > somewhere
>>>> >> > >> > > >>> > > > >>>>>>>>> on
>>>> >> > >> > > >>> > > > their file
>>>> >> > >> > > >>> > > > >>>>>>>>> system.
>>>> >> > >> > > >>> > > > >>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>> Would it make sense for me to just do
>>>> that
>>>> >> > >> discovery,
>>>> >> > >> > > >>> > > > >>>>>>>>> logging
>>>> >> > >> > > >>> > > in,
>>>> >> > >> > > >>> > > > >>>>>>>>> and mounting behind the scenes for them
>>>> and
>>>> >> > >> letting
>>>> >> > >> > the
>>>> >> > >> > > >>> > current
>>>> >> > >> > > >>> > > > code manage
>>>> >> > >> > > >>> > > > >>>>>>>>> the rest as it currently does?
>>>> >> > >> > > >>> > > > >>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus
>>>> >> Sorensen
>>>> >> > >> > > >>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit
>>>> >> different. I
>>>> >> > >> need
>>>> >> > >> > > to
>>>> >> > >> > > >>> > catch
>>>> >> > >> > > >>> > > up
>>>> >> > >> > > >>> > > > >>>>>>>>>> on the work done in KVM, but this is
>>>> >> basically
>>>> >> > >> just
>>>> >> > >> > > disk
>>>> >> > >> > > >>> > > > snapshots + memory
>>>> >> > >> > > >>> > > > >>>>>>>>>> dump. I still think disk snapshots
>>>> would
>>>> >> > >> preferably
>>>> >> > >> > be
>>>> >> > >> > > >>> > handled
>>>> >> > >> > > >>> > > > by the SAN,
>>>> >> > >> > > >>> > > > >>>>>>>>>> and then memory dumps can go to
>>>> secondary
>>>> >> > >> storage or
>>>> >> > >> > > >>> > something
>>>> >> > >> > > >>> > > > else. This is
>>>> >> > >> > > >>> > > > >>>>>>>>>> relatively new ground with CS and KVM,
>>>> so we
>>>> >> > will
>>>> >> > >> > > want to
>>>> >> > >> > > >>> > see
>>>> >> > >> > > >>> > > > how others are
>>>> >> > >> > > >>> > > > >>>>>>>>>> planning theirs.
>>>> >> > >> > > >>> > > > >>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus
>>>> Sorensen" <
>>>> >> > >> > > >>> > > shadowsor@gmail.com
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > >>>>>>>>>> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>> Let me back up and say I don't think
>>>> you'd
>>>> >> > use a
>>>> >> > >> > vdi
>>>> >> > >> > > >>> > > > >>>>>>>>>>> style
>>>> >> > >> > > >>> > on
>>>> >> > >> > > >>> > > > an
>>>> >> > >> > > >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to
>>>> treat it
>>>> >> as a
>>>> >> > >> RAW
>>>> >> > >> > > >>> > > > >>>>>>>>>>> format.
>>>> >> > >> > > >>> > > > Otherwise you're
>>>> >> > >> > > >>> > > > >>>>>>>>>>> putting a filesystem on your lun,
>>>> mounting
>>>> >> it,
>>>> >> > >> > > creating
>>>> >> > >> > > >>> > > > >>>>>>>>>>> a
>>>> >> > >> > > >>> > > > QCOW2 disk image,
>>>> >> > >> > > >>> > > > >>>>>>>>>>> and that seems unnecessary and a
>>>> performance
>>>> >> > >> > killer.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi
>>>> lun as a
>>>> >> > >> disk
>>>> >> > >> > to
>>>> >> > >> > > the
>>>> >> > >> > > >>> > VM,
>>>> >> > >> > > >>> > > > and
>>>> >> > >> > > >>> > > > >>>>>>>>>>> handling snapshots on the San side
>>>> via the
>>>> >> > >> storage
>>>> >> > >> > > >>> > > > >>>>>>>>>>> plugin
>>>> >> > >> > > >>> > is
>>>> >> > >> > > >>> > > > best. My
>>>> >> > >> > > >>> > > > >>>>>>>>>>> impression from the storage plugin
>>>> refactor
>>>> >> > was
>>>> >> > >> > that
>>>> >> > >> > > >>> > > > >>>>>>>>>>> there
>>>> >> > >> > > >>> > > was
>>>> >> > >> > > >>> > > > a snapshot
>>>> >> > >> > > >>> > > > >>>>>>>>>>> service that would allow the San to
>>>> handle
>>>> >> > >> > snapshots.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus
>>>> Sorensen" <
>>>> >> > >> > > >>> > > > shadowsor@gmail.com>
>>>> >> > >> > > >>> > > > >>>>>>>>>>> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be
>>>> handled by
>>>> >> > the
>>>> >> > >> SAN
>>>> >> > >> > > back
>>>> >> > >> > > >>> > end,
>>>> >> > >> > > >>> > > > if
>>>> >> > >> > > >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack
>>>> mgmt
>>>> >> > server
>>>> >> > >> > > could
>>>> >> > >> > > >>> > > > >>>>>>>>>>>> call
>>>> >> > >> > > >>> > > > your plugin for
>>>> >> > >> > > >>> > > > >>>>>>>>>>>> volume snapshot and it would be
>>>> hypervisor
>>>> >> > >> > > agnostic. As
>>>> >> > >> > > >>> > far
>>>> >> > >> > > >>> > > > as space, that
>>>> >> > >> > > >>> > > > >>>>>>>>>>>> would depend on how your SAN handles
>>>> it.
>>>> >> With
>>>> >> > >> > ours,
>>>> >> > >> > > we
>>>> >> > >> > > >>> > carve
>>>> >> > >> > > >>> > > > out luns from a
>>>> >> > >> > > >>> > > > >>>>>>>>>>>> pool, and the snapshot space comes
>>>> from the
>>>> >> > >> pool
>>>> >> > >> > > and is
>>>> >> > >> > > >>> > > > independent of the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>> LUN size the host sees.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike
>>>> Tutkowski"
>>>> >> > >> > > >>> > > > >>>>>>>>>>>> <mi...@solidfire.com>
>>>> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> Hey Marcus,
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool
>>>> type
>>>> >> for
>>>> >> > >> > libvirt
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> won't
>>>> >> > >> > > >>> > > > work
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> when you take into consideration
>>>> >> hypervisor
>>>> >> > >> > > snapshots?
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> On XenServer, when you take a
>>>> hypervisor
>>>> >> > >> > snapshot,
>>>> >> > >> > > the
>>>> >> > >> > > >>> > VDI
>>>> >> > >> > > >>> > > > for
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same
>>>> storage
>>>> >> > >> > > repository
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> as
>>>> >> > >> > > >>> > > the
>>>> >> > >> > > >>> > > > volume is on.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> So, what would happen in my case
>>>> (let's
>>>> >> say
>>>> >> > >> for
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> XenServer
>>>> >> > >> > > >>> > > and
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't
>>>> support
>>>> >> > >> hypervisor
>>>> >> > >> > > >>> > snapshots
>>>> >> > >> > > >>> > > > in 4.2) is I'd
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger
>>>> than
>>>> >> > what
>>>> >> > >> the
>>>> >> > >> > > user
>>>> >> > >> > > >>> > > > requested for the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine
>>>> because
>>>> >> our
>>>> >> > >> SAN
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> thinly
>>>> >> > >> > > >>> > > > provisions volumes,
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> so the space is not actually used
>>>> unless
>>>> >> it
>>>> >> > >> needs
>>>> >> > >> > > to
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> be).
>>>> >> > >> > > >>> > > > The CloudStack
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> volume would be the only "object"
>>>> on the
>>>> >> SAN
>>>> >> > >> > volume
>>>> >> > >> > > >>> > until a
>>>> >> > >> > > >>> > > > hypervisor
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot
>>>> would
>>>> >> also
>>>> >> > >> > reside
>>>> >> > >> > > on
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> the
>>>> >> > >> > > >>> > > > SAN volume.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and
>>>> there
>>>> >> is
>>>> >> > >> no
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> creation
>>>> >> > >> > > >>> > of
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from
>>>> libvirt
>>>> >> > >> (which,
>>>> >> > >> > > even
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> if
>>>> >> > >> > > >>> > > > there were support
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> for this, our SAN currently only
>>>> allows
>>>> >> one
>>>> >> > >> LUN
>>>> >> > >> > per
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> iSCSI
>>>> >> > >> > > >>> > > > target), then I
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> don't see how using this model will
>>>> work.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance
>>>> the
>>>> >> > current
>>>> >> > >> way
>>>> >> > >> > > this
>>>> >> > >> > > >>> > > works
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> with DIR?
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> What do you think?
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> Thanks
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM,
>>>> Mike
>>>> >> > >> Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> <mi...@solidfire.com>
>>>> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> That appears to be the way it's
>>>> used for
>>>> >> > >> iSCSI
>>>> >> > >> > > access
>>>> >> > >> > > >>> > > today.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> I suppose I could go that route,
>>>> too,
>>>> >> but I
>>>> >> > >> > might
>>>> >> > >> > > as
>>>> >> > >> > > >>> > well
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI
>>>> >> > instead.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM,
>>>> Marcus
>>>> >> > >> Sorensen
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> To your question about
>>>> >> SharedMountPoint, I
>>>> >> > >> > > believe
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> it
>>>> >> > >> > > >>> > > just
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> acts like a
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something
>>>> similar
>>>> >> to
>>>> >> > >> > that.
>>>> >> > >> > > The
>>>> >> > >> > > >>> > > > end-user
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> is
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> responsible for mounting a file
>>>> system
>>>> >> > that
>>>> >> > >> all
>>>> >> > >> > > KVM
>>>> >> > >> > > >>> > hosts
>>>> >> > >> > > >>> > > > can
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> access,
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to
>>>> what is
>>>> >> > >> > providing
>>>> >> > >> > > the
>>>> >> > >> > > >>> > > > storage.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> It could
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other
>>>> >> clustered
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> filesystem,
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> cloudstack just
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> knows that the provided directory
>>>> path
>>>> >> has
>>>> >> > >> VM
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> images.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM,
>>>> Marcus
>>>> >> > >> > Sorensen
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM,
>>>> and
>>>> >> iSCSI
>>>> >> > >> all
>>>> >> > >> > at
>>>> >> > >> > > the
>>>> >> > >> > > >>> > same
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> > time.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19
>>>> PM, Mike
>>>> >> > >> > Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com>
>>>> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Looks like you can have
>>>> multiple
>>>> >> > storage
>>>> >> > >> > > pools:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh
>>>> pool-list
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Name                 State
>>>> >> >  Autostart
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>>>> >> > -----------------------------------------
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> default              active
>>>>   yes
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> iSCSI                active
>>>>   no
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12
>>>> PM, Mike
>>>> >> > >> > > Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com>
>>>> >> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you
>>>> pointed
>>>> >> > >> out.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI
>>>> (libvirt)
>>>> >> > >> storage
>>>> >> > >> > > pool
>>>> >> > >> > > >>> > based
>>>> >> > >> > > >>> > > on
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target
>>>> would
>>>> >> > only
>>>> >> > >> > have
>>>> >> > >> > > one
>>>> >> > >> > > >>> > LUN,
>>>> >> > >> > > >>> > > > so
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> there would only
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage
>>>> >> volume
>>>> >> > in
>>>> >> > >> > the
>>>> >> > >> > > >>> > > (libvirt)
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pool.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in
>>>> creates and
>>>> >> > >> destroys
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> iSCSI
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a
>>>> >> problem
>>>> >> > >> that
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> libvirt
>>>> >> > >> > > >>> > > does
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> not support
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI
>>>> >> targets/LUNs.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test
>>>> this a
>>>> >> > bit
>>>> >> > >> to
>>>> >> > >> > > see
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> if
>>>> >> > >> > > >>> > > > libvirt
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> supports
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools
>>>> (as you
>>>> >> > >> > > mentioned,
>>>> >> > >> > > >>> > since
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> each one of its
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pools would map to
>>>> one of my
>>>> >> > >> iSCSI
>>>> >> > >> > > >>> > > > targets/LUNs).
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58
>>>> PM,
>>>> >> Mike
>>>> >> > >> > > Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> <mike.tutkowski@solidfire.com
>>>> >
>>>> >> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has
>>>> this
>>>> >> type:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"), DIR("dir"), RBD("rbd");
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         }
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         @Override
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         }
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>     }
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the
>>>> iSCSI type
>>>> >> > is
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> currently
>>>> >> > >> > > >>> > > being
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> understanding more what you
>>>> were
>>>> >> > >> getting
>>>> >> > >> > at.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today
>>>> (say,
>>>> >> 4.2),
>>>> >> > >> when
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> someone
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> selects the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and
>>>> uses it
>>>> >> > >> with
>>>> >> > >> > > iSCSI,
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> is
>>>> >> > >> > > >>> > > > that
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> above or is that just for
>>>> NFS?
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Thanks!
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50
>>>> PM,
>>>> >> > Marcus
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Sorensen
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>>>> >> > >> > > >>> > > http://libvirt.org/storage.html#StorageBackendISCSI
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be
>>>> pre-allocated on
>>>> >> > the
>>>> >> > >> > iSCSI
>>>> >> > >> > > >>> > server,
>>>> >> > >> > > >>> > > > and
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> cannot be
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt
>>>> APIs.",
>>>> >> > which
>>>> >> > >> I
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> believe
>>>> >> > >> > > >>> > > your
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does
>>>> the
>>>> >> work
>>>> >> > of
>>>> >> > >> > > logging
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in
>>>> >> > >> > > >>> > > and
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen
>>>> api does
>>>> >> > >> that
>>>> >> > >> > > work
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in
>>>> >> > >> > > >>> > the
>>>> >> > >> > > >>> > > > Xen
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> stuff).
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is
>>>> whether
>>>> >> > >> this
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> provides
>>>> >> > >> > > >>> > a
>>>> >> > >> > > >>> > > > 1:1
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to
>>>> register 1
>>>> >> > iscsi
>>>> >> > >> > > device
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> as
>>>> >> > >> > > >>> > a
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or
>>>> read
>>>> >> up a
>>>> >> > >> bit
>>>> >> > >> > > more
>>>> >> > >> > > >>> > about
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just
>>>> have
>>>> >> to
>>>> >> > >> write
>>>> >> > >> > > your
>>>> >> > >> > > >>> > own
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>>>> >> > >> > > >>> > >  We
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can cross that
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with
>>>> >> libvirt,
>>>> >> > >> see
>>>> >> > >> > the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>>>> >> > >> > > http://libvirt.org/sources/java/javadoc/Normally,
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> connection object be made,
>>>> then
>>>> >> > calls
>>>> >> > >> > made
>>>> >> > >> > > to
>>>> >> > >> > > >>> > that
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can look at the
>>>> >> > LibvirtStorageAdaptor
>>>> >> > >> to
>>>> >> > >> > > see
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> how
>>>> >> > >> > > >>> > > that
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> is done for
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe
>>>> write
>>>> >> > some
>>>> >> > >> > test
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java
>>>> >> > >> > > >>> > > code
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to see if you
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt
>>>> and
>>>> >> > >> register
>>>> >> > >> > > iscsi
>>>> >> > >> > > >>> > > storage
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pools before you
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> get started.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at
>>>> 5:31 PM,
>>>> >> > Mike
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> <
>>>> mike.tutkowski@solidfire.com>
>>>> >> > wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to
>>>> >> investigate
>>>> >> > >> > libvirt
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > more,
>>>> >> > >> > > >>> > > but
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > supports
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > connecting
>>>> to/disconnecting from
>>>> >> > >> iSCSI
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > targets,
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > right?
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at
>>>> 5:29 PM,
>>>> >> > >> Mike
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > <
>>>> mike.tutkowski@solidfire.com>
>>>> >> > >> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking
>>>> through
>>>> >> > >> some of
>>>> >> > >> > > the
>>>> >> > >> > > >>> > > classes
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> last
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at
>>>> 5:26
>>>> >> PM,
>>>> >> > >> > Marcus
>>>> >> > >> > > >>> > Sorensen
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that
>>>> you will
>>>> >> > >> need
>>>> >> > >> > the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should
>>>> be
>>>> >> > >> standard
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
>>>> >> > >> > > >>> > > for
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage
>>>> adaptor to do
>>>> >> > the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
>>>> >> > >> > > >>> > > > login.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>>>> LibvirtStorageAdaptor.java
>>>> >> > >> > > >>> > and
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits
>>>> your need.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55
>>>> PM, "Mike
>>>> >> > >> > > Tutkowski"
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> <
>>>> mike.tutkowski@solidfire.com
>>>> >> >
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember,
>>>> during
>>>> >> the
>>>> >> > >> 4.2
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
>>>> >> > >> > > >>> > I
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for
>>>> >> > CloudStack.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was
>>>> invoked by
>>>> >> the
>>>> >> > >> > > storage
>>>> >> > >> > > >>> > > > framework
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could
>>>> dynamically
>>>> >> > >> create
>>>> >> > >> > and
>>>> >> > >> > > >>> > delete
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other
>>>> activities).
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I
>>>> can
>>>> >> > >> > establish a
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
>>>> >> > >> > > >>> > > > mapping
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire
>>>> volume
>>>> >> > for
>>>> >> > >> > QoS.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack
>>>> >> always
>>>> >> > >> > > expected
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >> > >> > > >>> > > > admin
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time
>>>> and
>>>> >> those
>>>> >> > >> > > volumes
>>>> >> > >> > > >>> > would
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is
>>>> not QoS
>>>> >> > >> > > friendly).
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1
>>>> mapping
>>>> >> scheme
>>>> >> > >> > work,
>>>> >> > >> > > I
>>>> >> > >> > > >>> > needed
>>>> >> > >> > > >>> > > > to
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware
>>>> plug-ins
>>>> >> > so
>>>> >> > >> > they
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> repositories/datastores as
>>>> >> > >> needed.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make
>>>> this
>>>> >> > >> happen
>>>> >> > >> > > with
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed
>>>> with
>>>> >> how
>>>> >> > >> this
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
>>>> >> > >> > > >>> > > work
>>>> >> > >> > > >>> > > > on
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar
>>>> with KVM
>>>> >> > >> know
>>>> >> > >> > > how I
>>>> >> > >> > > >>> > will
>>>> >> > >> > > >>> > > > need
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For
>>>> example,
>>>> >> > will I
>>>> >> > >> > > have to
>>>> >> > >> > > >>> > > expect
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM
>>>> host and
>>>> >> > >> use it
>>>> >> > >> > > for
>>>> >> > >> > > >>> > this
>>>> >> > >> > > >>> > > to
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any
>>>> suggestions,
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack
>>>> Developer,
>>>> >> > >> > SolidFire
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> e:
>>>> >> > mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the
>>>> world
>>>> >> > uses
>>>> >> > >> the
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> --
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack
>>>> Developer,
>>>> >> > >> SolidFire
>>>> >> > >> > > Inc.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> e:
>>>> >> mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the
>>>> world
>>>> >> uses
>>>> >> > >> the
>>>> >> > >> > > cloud™
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > --
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack
>>>> Developer,
>>>> >> > >> SolidFire
>>>> >> > >> > > Inc.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > e:
>>>> mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the
>>>> world uses
>>>> >> > the
>>>> >> > >> > > cloud™
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> --
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer,
>>>> >> > SolidFire
>>>> >> > >> > Inc.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> e:
>>>> mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world
>>>> uses
>>>> >> the
>>>> >> > >> > cloud™
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> --
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer,
>>>> >> SolidFire
>>>> >> > >> Inc.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> e:
>>>> mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world
>>>> uses the
>>>> >> > >> cloud™
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> --
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer,
>>>> >> SolidFire
>>>> >> > >> Inc.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> e:
>>>> mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Advancing the way the world
>>>> uses the
>>>> >> > >> cloud™
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> --
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> Mike Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> Senior CloudStack Developer,
>>>> SolidFire
>>>> >> Inc.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> o: 303.746.7302
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>> Advancing the way the world uses
>>>> the
>>>> >> cloud™
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> --
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> Mike Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> Senior CloudStack Developer,
>>>> SolidFire
>>>> >> Inc.
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> o: 303.746.7302
>>>> >> > >> > > >>> > > > >>>>>>>>>>>>> Advancing the way the world uses the
>>>> >> cloud™
>>>> >> > >> > > >>> > > > >>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>>
>>>> >> > >> > > >>> > > > >>>>>>>>> --
>>>> >> > >> > > >>> > > > >>>>>>>>> Mike Tutkowski
>>>> >> > >> > > >>> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire
>>>> Inc.
>>>> >> > >> > > >>> > > > >>>>>>>>> e: mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>>>>>>>> o: 303.746.7302
>>>> >> > >> > > >>> > > > >>>>>>>>> Advancing the way the world uses the
>>>> cloud™
>>>> >> > >> > > >>> > > > >>>>>>
>>>> >> > >> > > >>> > > > >>>>>>
>>>> >> > >> > > >>> > > > >>>>>>
>>>> >> > >> > > >>> > > > >>>>>>
>>>> >> > >> > > >>> > > > >>>>>> --
>>>> >> > >> > > >>> > > > >>>>>> Mike Tutkowski
>>>> >> > >> > > >>> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>> >> > >> > > >>> > > > >>>>>> e: mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>>>>> o: 303.746.7302
>>>> >> > >> > > >>> > > > >>>>>> Advancing the way the world uses the cloud™
>>>> >> > >> > > >>> > > > >>>
>>>> >> > >> > > >>> > > > >>>
>>>> >> > >> > > >>> > > > >>>
>>>> >> > >> > > >>> > > > >>>
>>>> >> > >> > > >>> > > > >>> --
>>>> >> > >> > > >>> > > > >>> Mike Tutkowski
>>>> >> > >> > > >>> > > > >>> Senior CloudStack Developer, SolidFire Inc.
>>>> >> > >> > > >>> > > > >>> e: mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > >>> o: 303.746.7302
>>>> >> > >> > > >>> > > > >>> Advancing the way the world uses the cloud™
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > >
>>>> >> > >> > > >>> > > > > --
>>>> >> > >> > > >>> > > > > Mike Tutkowski
>>>> >> > >> > > >>> > > > > Senior CloudStack Developer, SolidFire Inc.
>>>> >> > >> > > >>> > > > > e: mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > > > o: 303.746.7302
>>>> >> > >> > > >>> > > > > Advancing the way the world uses the cloud™
>>>> >> > >> > > >>> > > >
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> > > --
>>>> >> > >> > > >>> > > *Mike Tutkowski*
>>>> >> > >> > > >>> > > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> > >> > > >>> > > e: mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> > > o: 303.746.7302
>>>> >> > >> > > >>> > > Advancing the way the world uses the
>>>> >> > >> > > >>> > > cloud<
>>>> http://solidfire.com/solution/overview/?video=play>
>>>> >> > >> > > >>> > > *™*
>>>> >> > >> > > >>> > >
>>>> >> > >> > > >>> >
>>>> >> > >> > > >>>
>>>> >> > >> > > >>>
>>>> >> > >> > > >>>
>>>> >> > >> > > >>> --
>>>> >> > >> > > >>> *Mike Tutkowski*
>>>> >> > >> > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> > >> > > >>> e: mike.tutkowski@solidfire.com
>>>> >> > >> > > >>> o: 303.746.7302
>>>> >> > >> > > >>> Advancing the way the world uses the
>>>> >> > >> > > >>> cloud<
>>>> http://solidfire.com/solution/overview/?video=play>
>>>> >> > >> > > >>> *™*
>>>> >> > >> > >
>>>> >> > >> >
>>>> >> > >> >
>>>> >> > >> >
>>>> >> > >> > --
>>>> >> > >> > *Mike Tutkowski*
>>>> >> > >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> > >> > e: mike.tutkowski@solidfire.com
>>>> >> > >> > o: 303.746.7302
>>>> >> > >> > Advancing the way the world uses the
>>>> >> > >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> >> > >> > *™*
>>>> >> > >> >
>>>> >> > >>
>>>> >> > >
>>>> >> > >
>>>> >> > >
>>>> >> > > --
>>>> >> > > *Mike Tutkowski*
>>>> >> > > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> > > e: mike.tutkowski@solidfire.com
>>>> >> > > o: 303.746.7302
>>>> >> > > Advancing the way the world uses the cloud<
>>>> >> > http://solidfire.com/solution/overview/?video=play>
>>>> >> > > *™*
>>>> >> > >
>>>> >> >
>>>> >> >
>>>> >> >
>>>> >> > --
>>>> >> > *Mike Tutkowski*
>>>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> >> > e: mike.tutkowski@solidfire.com
>>>> >> > o: 303.746.7302
>>>> >> > Advancing the way the world uses the
>>>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> >> > *™*
>>>> >> >
>>>> >>
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > *Mike Tutkowski*
>>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>>> > e: mike.tutkowski@solidfire.com
>>>> > o: 303.746.7302
>>>> > Advancing the way the world uses the
>>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>>> > *™*
>>>>
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>>  *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hey Marcus,

What do you think about something relatively simple like this (below)? It
parallels the XenServer and VMware code nicely.

The idea is to only deal with the AttachVolumeCommand.

If we are attaching a volume AND the underlying storage is so-called
managed storage, at this point I invoke createStoragePool to create my
iScsiAdmStoragePool object, establish a connection with the LUN, and
prepare a KVMPhysicalDisk object, which will be requested a bit later
during the actual attach.

If we are detaching a volume AND the underlying storage is managed, the
KVMStoragePool already exists, so we don't have to do anything special
until after the volume is detached. At this point, we delete the storage
pool (remove the iSCSI connection to the LUN and remove the reference to
the iScsiAdmStoragePool from my adaptor).

    private AttachVolumeAnswer execute(AttachVolumeCommand cmd) {
        try {
            Connect conn = LibvirtConnection.getConnectionByVmName(cmd.getVmName());

            // For managed storage, establish the iSCSI connection and register
            // the storage pool just before the disk is attached.
            if (cmd.getAttach() && cmd.isManaged()) {
                _storagePoolMgr.createStoragePool(cmd.get_iScsiName(), cmd.getStorageHost(),
                        cmd.getStoragePort(), cmd.getVolumePath(), null, cmd.getPooltype());
            }

            KVMStoragePool primary = _storagePoolMgr.getStoragePool(
                    cmd.getPooltype(),
                    cmd.getPoolUuid());

            KVMPhysicalDisk disk = primary.getPhysicalDisk(cmd.getVolumePath());

            attachOrDetachDisk(conn, cmd.getAttach(), cmd.getVmName(), disk,
                    cmd.getDeviceId().intValue(), cmd.getBytesReadRate(), cmd.getBytesWriteRate(),
                    cmd.getIopsReadRate(), cmd.getIopsWriteRate());

            // For managed storage, tear down the iSCSI connection once the
            // volume has been detached from the VM.
            if (!cmd.getAttach() && cmd.isManaged()) {
                _storagePoolMgr.deleteStoragePool(cmd.getPooltype(), cmd.get_iScsiName());
            }
        } catch (LibvirtException e) {
            return new AttachVolumeAnswer(cmd, e.toString());
        } catch (InternalErrorException e) {
            return new AttachVolumeAnswer(cmd, e.toString());
        }

        return new AttachVolumeAnswer(cmd, cmd.getDeviceId(), cmd.getVolumePath());
    }
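
For reference, a rough sketch of what such an iScsiAdmStoragePool object might hold follows. The names and the nominal capacity values are assumptions for illustration only; the real KVMStoragePool interface has additional methods, and the capacity/used question is discussed further below.

    // Sketch only: a storage pool object representing a single iSCSI LUN.
    // Class and field names are placeholders, not the actual implementation.
    public class iScsiAdmStoragePool {
        private final String _uuid;       // here, the IQN of the target
        private final String _sourceHost; // SAN storage VIP
        private final int _sourcePort;    // usually 3260 for iSCSI

        public iScsiAdmStoragePool(String uuid, String host, int port) {
            _uuid = uuid;
            _sourceHost = host;
            _sourcePort = port;
        }

        public String getUuid() {
            return _uuid;
        }

        public String getSourceHost() {
            return _sourceHost;
        }

        public int getSourcePort() {
            return _sourcePort;
        }

        // This "pool" is really a single LUN, so capacity/used are nominal;
        // the storage plug-in on the management server owns the real numbers.
        public long getCapacity() {
            return 0;
        }

        public long getUsed() {
            return 0;
        }
    }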


On Wed, Sep 18, 2013 at 11:27 AM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Since a KVMStoragePool returns a capacity, used, and available number of
> bytes, I will probably need to look into having this information ignored if
> the storage_pool in question is "managed" as - in my case - it wouldn't
> really make any sense.
>
>
> On Wed, Sep 18, 2013 at 10:53 AM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Sure, sounds good.
>>
>> Right now there are only two storage plug-ins: Edison's default plug-in
>> and the SolidFire plug-in.
>>
>> As an example, when createAsync is called in the plug-in, mine creates a
>> new volume (LUN) on the SAN with a capacity and number of Min, Max, and
>> Burst IOPS. Edison's sends a command to the hypervisor to take a chunk out
>> of preallocated storage for a new volume (like create a new VDI in an
>> existing SR).
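
For illustration, the shape of that SAN-side work might look something like the sketch below. The SanApiClient and SanVolume types are invented stand-ins for a vendor SDK (not actual CloudStack or SolidFire classes); the point is just that one CloudStack volume maps to one LUN carrying its own Min, Max, and Burst IOPS.

    // Placeholder sketch: create one LUN per CloudStack volume and return its
    // IQN so the hypervisor side can later log in to it. All types below are
    // invented for illustration.
    public class ManagedVolumeCreator {

        interface SanApiClient {
            SanVolume createVolume(String name, long sizeInBytes,
                    long minIops, long maxIops, long burstIops);
        }

        interface SanVolume {
            String getIqn();
        }

        public String createManagedVolume(SanApiClient sanClient, String csVolumeName,
                long sizeInBytes, long minIops, long maxIops, long burstIops) {
            SanVolume sanVolume = sanClient.createVolume(csVolumeName, sizeInBytes,
                    minIops, maxIops, burstIops);

            // The IQN typically gets stored as the CloudStack volume's path so
            // that attach-time code knows which target to log in to.
            return sanVolume.getIqn();
        }
    }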
>>
>>
>> On Wed, Sep 18, 2013 at 10:49 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> That wasn't my question, but I feel we're getting off in the weeds and
>>> I can just look at the storage framework to see how it works and what
>>> options it supports.
>>>
>>> On Wed, Sep 18, 2013 at 10:44 AM, Mike Tutkowski
>>> <mi...@solidfire.com> wrote:
>>> > At the time being, I am not aware of any other storage vendor with
>>> truly
>>> > guaranteed QoS.
>>> >
>>> > Most implement QoS in a relative sense (like thread priorities).
>>> >
>>> >
>>> > On Wed, Sep 18, 2013 at 7:57 AM, Marcus Sorensen <shadowsor@gmail.com
>>> >wrote:
>>> >
>>> >> Yeah, that's why I thought it was specific to your implementation.
>>> Perhaps
>>> >> that's true, then?
>>> >> On Sep 18, 2013 12:04 AM, "Mike Tutkowski" <
>>> mike.tutkowski@solidfire.com>
>>> >> wrote:
>>> >>
>>> >> > I totally get where you're coming from with the tiered-pool
>>> >> > approach, though.
>>> >> >
>>> >> > Prior to SolidFire, I worked at HP and the product I worked on
>>> >> > allowed a single, clustered SAN to host multiple pools of storage.
>>> >> > One pool might be made up of all-SSD storage nodes while another
>>> >> > pool might be made up of slower HDDs.
>>> >> >
>>> >> > That kind of tiering is not what SolidFire QoS is about, though, as
>>> >> > that kind of tiering does not guarantee QoS.
>>> >> >
>>> >> > In the SolidFire SAN, QoS was designed in from the beginning and is
>>> >> > extremely granular. Each volume has its own performance and
>>> >> > capacity. You do not have to worry about Noisy Neighbors.
>>> >> >
>>> >> > The idea is to encourage businesses to trust the cloud with their
>>> >> > most critical business applications at a price point on par with
>>> >> > traditional SANs.
>>> >> >
>>> >> >
>>> >> > On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski <
>>> >> > mike.tutkowski@solidfire.com> wrote:
>>> >> >
>>> >> > > Ah, I think I see the miscommunication.
>>> >> > >
>>> >> > > I should have gone into a bit more detail about the SolidFire SAN.
>>> >> > >
>>> >> > > It is built from the ground up to support QoS on a LUN-by-LUN
>>> >> > > basis. Every LUN is assigned a Min, Max, and Burst number of IOPS.
>>> >> > >
>>> >> > > The Min IOPS are a guaranteed number (as long as the SAN itself is
>>> >> > > not over provisioned). Capacity and IOPS are provisioned
>>> >> > > independently. Multiple volumes and multiple tenants using the
>>> >> > > same SAN do not suffer from the Noisy Neighbor effect.
>>> >> > >
>>> >> > > When you create a Disk Offering in CS that is storage tagged to
>>> >> > > use SolidFire primary storage, you specify a Min, Max, and Burst
>>> >> > > number of IOPS to provision from the SAN for volumes created from
>>> >> > > that Disk Offering.
>>> >> > >
>>> >> > > There is no notion of RAID groups that you see in more traditional
>>> >> > > SANs. The SAN is built from clusters of storage nodes and data is
>>> >> > > replicated amongst all SSDs in all storage nodes (this is an
>>> >> > > SSD-only SAN) in the cluster to avoid hot spots and protect the
>>> >> > > data should drives and/or nodes fail. You then scale the SAN by
>>> >> > > adding new storage nodes.
>>> >> > >
>>> >> > > Data is compressed and de-duplicated inline across the cluster and
>>> >> > > all volumes are thinly provisioned.
>>> >> > >
>>> >> > >
>>> >> > > On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <
>>> shadowsor@gmail.com
>>> >> > >wrote:
>>> >> > >
>>> >> > >> I'm surprised there's no mention of pool on the SAN in your
>>> >> > >> description of the framework. I had assumed this was specific to
>>> >> > >> your implementation, because normally SANs host multiple disk
>>> >> > >> pools, maybe multiple RAID 50s and 10s, or however the SAN admin
>>> >> > >> wants to split it up. Maybe a pool intended for root disks and a
>>> >> > >> separate one for data disks. Or one pool for cloudstack and one
>>> >> > >> dedicated to some other internal db application. But it sounds as
>>> >> > >> though there's no place to specify which disks or pool on the SAN
>>> >> > >> to use.
>>> >> > >>
>>> >> > >> We implemented our own internal storage SAN plugin based on 4.1.
>>> >> > >> We used the 'path' attribute of the primary storage pool object
>>> >> > >> to specify which pool name on the back-end SAN to use, so we
>>> >> > >> could create all-SSD pools and slower spindle pools, then
>>> >> > >> differentiate between them based on storage tags. Normally the
>>> >> > >> path attribute would be the mount point for NFS, but it's just a
>>> >> > >> string. So when registering ours we enter the SAN DNS host name,
>>> >> > >> the SAN's REST API port, and the pool name. Then LUNs created
>>> >> > >> from that primary storage come from the matching disk pool on the
>>> >> > >> SAN. We can create and register multiple pools of different types
>>> >> > >> and purposes on the same SAN. We haven't yet gotten to porting it
>>> >> > >> to the 4.2 framework, so it will be interesting to see what we
>>> >> > >> can come up with to make it work similarly.
>>> >> > >>  On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <
>>> >> > mike.tutkowski@solidfire.com
>>> >> > >> >
>>> >> > >> wrote:
>>> >> > >>
>>> >> > >> > What you're saying here is definitely something we should talk
>>> >> > >> > about.
>>> >> > >> >
>>> >> > >> > Hopefully my previous e-mail has clarified how this works a bit.
>>> >> > >> >
>>> >> > >> > It mainly comes down to this:
>>> >> > >> >
>>> >> > >> > For the first time in CS history, primary storage is no longer
>>> >> > >> > required to be preallocated by the admin and then handed to CS.
>>> >> > >> > CS volumes don't have to share a preallocated volume anymore.
>>> >> > >> >
>>> >> > >> > As of 4.2, primary storage can be based on a SAN (or some other
>>> >> > >> > storage device). You can tell CS how many bytes and IOPS to use
>>> >> > >> > from this storage device and CS invokes the appropriate plug-in
>>> >> > >> > to carve out LUNs dynamically.
>>> >> > >> >
>>> >> > >> > Each LUN is home to one and only one data disk. Data disks - in
>>> >> > >> > this model - never share a LUN.
>>> >> > >> >
>>> >> > >> > The main use case for this is so a CS volume can deliver
>>> >> > >> > guaranteed IOPS if the storage device (ex. SolidFire SAN)
>>> >> > >> > delivers guaranteed IOPS on a LUN-by-LUN basis.
>>> >> > >> >
>>> >> > >> >
>>> >> > >> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <
>>> >> > shadowsor@gmail.com
>>> >> > >> > >wrote:
>>> >> > >> >
>>> >> > >> > > I guess whether or not a solidfire device is capable of
>>> hosting
>>> >> > >> > > multiple disk pools is irrelevant, we'd hope that we could
>>> get the
>>> >> > >> > > stats (maybe 30TB availabie, and 15TB allocated in LUNs).
>>> But if
>>> >> > these
>>> >> > >> > > stats aren't collected, I can't as an admin define multiple
>>> pools
>>> >> > and
>>> >> > >> > > expect cloudstack to allocate evenly from them or fill one
>>> up and
>>> >> > move
>>> >> > >> > > to the next, because it doesn't know how big it is.
>>> >> > >> > >
>>> >> > >> > > Ultimately this discussion has nothing to do with the KVM
>>> stuff
>>> >> > >> > > itself, just a tangent, but something to think about.
>>> >> > >> > >
>>> >> > >> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <
>>> >> > >> shadowsor@gmail.com>
>>> >> > >> > > wrote:
>>> >> > >> > > > Ok, on most storage pools it shows how many GB free/used
>>> when
>>> >> > >> listing
>>> >> > >> > > > the pool both via API and in the UI. I'm guessing those are
>>> >> empty
>>> >> > >> then
>>> >> > >> > > > for the solid fire storage, but it seems like the user
>>> should
>>> >> have
>>> >> > >> to
>>> >> > >> > > > define some sort of pool that the luns get carved out of,
>>> and
>>> >> you
>>> >> > >> > > > should be able to get the stats for that, right? Or is a
>>> solid
>>> >> > fire
>>> >> > >> > > > appliance only one pool per appliance? This isn't about
>>> billing,
>>> >> > but
>>> >> > >> > > > just so cloudstack itself knows whether or not there is
>>> space
>>> >> left
>>> >> > >> on
>>> >> > >> > > > the storage device, so cloudstack can go on allocating
>>> from a
>>> >> > >> > > > different primary storage as this one fills up. There are
>>> also
>>> >> > >> > > > notifications and things. It seems like there should be a
>>> call
>>> >> you
>>> >> > >> can
>>> >> > >> > > > handle for this, maybe Edison knows.
>>> >> > >> > > >
>>> >> > >> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <
>>> >> > >> shadowsor@gmail.com>
>>> >> > >> > > wrote:
>>> >> > >> > > >> You respond to more than attach and detach, right? Don't
>>> you
>>> >> > create
>>> >> > >> > > luns as
>>> >> > >> > > >> well? Or are you just referring to the hypervisor stuff?
>>> >> > >> > > >>
>>> >> > >> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
>>> >> > >> > mike.tutkowski@solidfire.com
>>> >> > >> > > >
>>> >> > >> > > >> wrote:
>>> >> > >> > > >>>
>>> >> > >> > > >>> Hi Marcus,
>>> >> > >> > > >>>
>>> >> > >> > > >>> I never need to respond to a CreateStoragePool call for
>>> either
>>> >> > >> > > XenServer
>>> >> > >> > > >>> or
>>> >> > >> > > >>> VMware.
>>> >> > >> > > >>>
>>> >> > >> > > >>> What happens is I respond only to the Attach- and
>>> >> Detach-volume
>>> >> > >> > > commands.
>>> >> > >> > > >>>
>>> >> > >> > > >>> Let's say an attach comes in:
>>> >> > >> > > >>>
>>> >> > >> > > >>> In this case, I check to see if the storage is "managed."
>>> >> > Talking
>>> >> > >> > > >>> XenServer
>>> >> > >> > > >>> here, if it is, I log in to the LUN that is the disk we
>>> want
>>> >> to
>>> >> > >> > attach.
>>> >> > >> > > >>> After, if this is the first time attaching this disk, I
>>> create
>>> >> > an
>>> >> > >> SR
>>> >> > >> > > and a
>>> >> > >> > > >>> VDI within the SR. If it is not the first time attaching
>>> this
>>> >> > >> disk,
>>> >> > >> > the
>>> >> > >> > > >>> LUN
>>> >> > >> > > >>> already has the SR and VDI on it.
>>> >> > >> > > >>>
>>> >> > >> > > >>> Once this is done, I let the normal "attach" logic run
>>> because
>>> >> > >> this
>>> >> > >> > > logic
>>> >> > >> > > >>> expected an SR and a VDI and now it has it.
>>> >> > >> > > >>>
>>> >> > >> > > >>> It's the same thing for VMware: Just substitute
>>> datastore for
>>> >> SR
>>> >> > >> and
>>> >> > >> > > VMDK
>>> >> > >> > > >>> for VDI.
>>> >> > >> > > >>>
>>> >> > >> > > >>> Does that make sense?
>>> >> > >> > > >>>
>>> >> > >> > > >>> Thanks!
>>> >> > >> > > >>>
>>> >> > >> > > >>>
>>> >> > >> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
>>> >> > >> > > >>> <sh...@gmail.com>wrote:
>>> >> > >> > > >>>
>>> >> > >> > > >>> > What do you do with Xen? I imagine the user enters the
>>> >> > >> > > >>> > SAN details when registering the pool? And the pool
>>> >> > >> > > >>> > details are basically just instructions on how to log
>>> >> > >> > > >>> > into a target, correct?
>>> >> > >> > > >>> >
>>> >> > >> > > >>> > You can choose to log in a KVM host to the target during
>>> >> > >> > > >>> > createStoragePool and save the pool in a map, or just
>>> >> > >> > > >>> > save the pool info in a map for future reference by
>>> >> > >> > > >>> > uuid, for when you do need to log in. The
>>> >> > >> > > >>> > createStoragePool then just becomes a way to save the
>>> >> > >> > > >>> > pool info to the agent. Personally, I'd log in on the
>>> >> > >> > > >>> > pool create and look/scan for specific luns when they're
>>> >> > >> > > >>> > needed, but I haven't thought it through thoroughly. I
>>> >> > >> > > >>> > just say that mainly because login only happens once,
>>> >> > >> > > >>> > the first time the pool is used, and every other storage
>>> >> > >> > > >>> > command is about discovering new luns or maybe
>>> >> > >> > > >>> > deleting/disconnecting luns no longer needed. On the
>>> >> > >> > > >>> > other hand, you could do all of the above: log in on
>>> >> > >> > > >>> > pool create, then also check if you're logged in on
>>> >> > >> > > >>> > other commands and log in if you've lost the connection.
>>> >> > >> > > >>> >
>>> >> > >> > > >>> > With Xen, what does your registered pool show in the UI
>>> >> > >> > > >>> > for avail/used capacity, and how does it get that info?
>>> >> > >> > > >>> > I assume there is some sort of disk pool that the luns
>>> >> > >> > > >>> > are carved from, and that your plugin is called to talk
>>> >> > >> > > >>> > to the SAN and expose to the user how much of that pool
>>> >> > >> > > >>> > has been allocated. Knowing how you already solve these
>>> >> > >> > > >>> > problems with Xen will help figure out what to do with
>>> >> > >> > > >>> > KVM.
>>> >> > >> > > >>> >
>>> >> > >> > > >>> > If this is the case, I think the plugin can continue to
>>> >> > >> > > >>> > handle it rather than getting details from the agent.
>>> >> > >> > > >>> > I'm not sure if that means nulls are OK for these on the
>>> >> > >> > > >>> > agent side or what; I need to look at the storage plugin
>>> >> > >> > > >>> > arch more closely.
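
A minimal sketch of that "save the pool info in a map" bookkeeping is below, reusing the hypothetical iScsiAdmStoragePool object from earlier; whether to log in to the target eagerly here or lazily later is exactly the open question above.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the bookkeeping for a custom adaptor; names are placeholders
    // and error handling is omitted.
    public class IscsiAdmStorageAdaptor /* implements StorageAdaptor */ {
        private final Map<String, iScsiAdmStoragePool> _poolsByUuid =
                new HashMap<String, iScsiAdmStoragePool>();

        public iScsiAdmStoragePool createStoragePool(String uuid, String host, int port) {
            iScsiAdmStoragePool pool = _poolsByUuid.get(uuid);
            if (pool == null) {
                // Optionally perform the iSCSI login here instead of lazily
                // when the disk is first requested.
                pool = new iScsiAdmStoragePool(uuid, host, port);
                _poolsByUuid.put(uuid, pool);
            }
            return pool;
        }

        public iScsiAdmStoragePool getStoragePool(String uuid) {
            return _poolsByUuid.get(uuid);
        }

        public boolean deleteStoragePool(String uuid) {
            // A real implementation would log the host out of the target
            // before forgetting about the pool.
            return _poolsByUuid.remove(uuid) != null;
        }
    }
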
>>> >> > >> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
>>> >> > >> > > mike.tutkowski@solidfire.com>
>>> >> > >> > > >>> > wrote:
>>> >> > >> > > >>> >
>>> >> > >> > > >>> > > Hey Marcus,
>>> >> > >> > > >>> > >
>>> >> > >> > > >>> > > I'm reviewing your e-mails as I implement the
>>> necessary
>>> >> > >> methods
>>> >> > >> > in
>>> >> > >> > > new
>>> >> > >> > > >>> > > classes.
>>> >> > >> > > >>> > >
>>> >> > >> > > >>> > > "So, referencing StorageAdaptor.java,
>>> createStoragePool
>>> >> > >> accepts
>>> >> > >> > > all of
>>> >> > >> > > >>> > > the pool data (host, port, name, path) which would
>>> be used
>>> >> > to
>>> >> > >> log
>>> >> > >> > > the
>>> >> > >> > > >>> > > host into the initiator."
>>> >> > >> > > >>> > >
>>> >> > >> > > >>> > > Can you tell me, in my case, since a storage pool
>>> (primary
>>> >> > >> > > storage) is
>>> >> > >> > > >>> > > actually the SAN, I wouldn't really be logging into
>>> >> anything
>>> >> > >> at
>>> >> > >> > > this
>>> >> > >> > > >>> > point,
>>> >> > >> > > >>> > > correct?
>>> >> > >> > > >>> > >
>>> >> > >> > > >>> > > Also, what kind of capacity, available, and used
>>> bytes
>>> >> make
>>> >> > >> sense
>>> >> > >> > > to
>>> >> > >> > > >>> > report
>>> >> > >> > > >>> > > for KVMStoragePool (since KVMStoragePool represents
>>> the
>>> >> SAN
>>> >> > >> in my
>>> >> > >> > > case
>>> >> > >> > > >>> > and
>>> >> > >> > > >>> > > not an individual LUN)?
>>> >> > >> > > >>> > >
>>> >> > >> > > >>> > > Thanks!
>>> >> > >> > > >>> > >
>>> >> > >> > > >>> > >
>>> >> > >> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
>>> >> > >> > > shadowsor@gmail.com
>>> >> > >> > > >>> > > >wrote:
>>> >> > >> > > >>> > >
>>> >> > >> > > >>> > > > Ok, KVM will be close to that, of course, because
>>> only
>>> >> the
>>> >> > >> > > >>> > > > hypervisor
>>> >> > >> > > >>> > > > classes differ, the rest is all mgmt server.
>>> Creating a
>>> >> > >> volume
>>> >> > >> > is
>>> >> > >> > > >>> > > > just
>>> >> > >> > > >>> > > > a db entry until it's deployed for the first time.
>>> >> > >> > > >>> > > > AttachVolumeCommand
>>> >> > >> > > >>> > > > on the agent side (LibvirtStorageAdaptor.java is
>>> >> analogous
>>> >> > >> to
>>> >> > >> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm
>>> commands
>>> >> > (via
>>> >> > >> a
>>> >> > >> > KVM
>>> >> > >> > > >>> > > > StorageAdaptor) to log in the host to the target
>>> and
>>> >> then
>>> >> > >> you
>>> >> > >> > > have a
>>> >> > >> > > >>> > > > block device.  Maybe libvirt will do that for you,
>>> but
>>> >> my
>>> >> > >> quick
>>> >> > >> > > read
>>> >> > >> > > >>> > > > made it sound like the iscsi libvirt pool type is
>>> >> > actually a
>>> >> > >> > > pool,
>>> >> > >> > > >>> > > > not
>>> >> > >> > > >>> > > > a lun or volume, so you'll need to figure out if
>>> that
>>> >> > works
>>> >> > >> or
>>> >> > >> > if
>>> >> > >> > > >>> > > > you'll have to use iscsiadm commands.
>>> >> > >> > > >>> > > >
>>> >> > >> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor
>>> >> (because
>>> >> > >> > Libvirt
>>> >> > >> > > >>> > > > doesn't really manage your pool the way you want),
>>> >> you're
>>> >> > >> going
>>> >> > >> > > to
>>> >> > >> > > >>> > > > have to create a version of KVMStoragePool class
>>> and a
>>> >> > >> > > >>> > > > StorageAdaptor
>>> >> > >> > > >>> > > > class (see LibvirtStoragePool.java and
>>> >> > >> > > LibvirtStorageAdaptor.java),
>>> >> > >> > > >>> > > > implementing all of the methods, then in
>>> >> > >> KVMStorageManager.java
>>> >> > >> > > >>> > > > there's a "_storageMapper" map. This is used to
>>> select
>>> >> the
>>> >> > >> > > correct
>>> >> > >> > > >>> > > > adaptor, you can see in this file that every call
>>> first
>>> >> > >> pulls
>>> >> > >> > the
>>> >> > >> > > >>> > > > correct adaptor out of this map via
>>> getStorageAdaptor.
>>> >> So
>>> >> > >> you
>>> >> > >> > can
>>> >> > >> > > >>> > > > see
>>> >> > >> > > >>> > > > a comment in this file that says "add other storage
>>> >> > adaptors
>>> >> > >> > > here",
>>> >> > >> > > >>> > > > where it puts to this map, this is where you'd
>>> register
>>> >> > your
>>> >> > >> > > >>> > > > adaptor.
>>> >> > >> > > >>> > > >
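
For illustration, registering such an adaptor could look roughly like the snippet below; the map and enum names follow the description above and should be treated as assumptions.

    // Hypothetical sketch: wire the custom adaptor into the storage manager's
    // map, next to the existing "add other storage adaptors here" comment.
    _storageMapper.put(StoragePoolType.Iscsi.toString(), new IscsiAdmStorageAdaptor());
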
>>> >> > >> > > >>> > > > So, referencing StorageAdaptor.java,
>>> createStoragePool
>>> >> > >> accepts
>>> >> > >> > > all
>>> >> > >> > > >>> > > > of
>>> >> > >> > > >>> > > > the pool data (host, port, name, path) which would
>>> be
>>> >> used
>>> >> > >> to
>>> >> > >> > log
>>> >> > >> > > >>> > > > the
>>> >> > >> > > >>> > > > host into the initiator. I *believe* the method
>>> >> > >> getPhysicalDisk
>>> >> > >> > > will
>>> >> > >> > > >>> > > > need to do the work of attaching the lun.
>>> >> > >>  AttachVolumeCommand
>>> >> > >> > > calls
>>> >> > >> > > >>> > > > this and then creates the XML diskdef and attaches
>>> it to
>>> >> > the
>>> >> > >> > VM.
>>> >> > >> > > >>> > > > Now,
>>> >> > >> > > >>> > > > one thing you need to know is that
>>> createStoragePool is
>>> >> > >> called
>>> >> > >> > > >>> > > > often,
>>> >> > >> > > >>> > > > sometimes just to make sure the pool is there. You
>>> may
>>> >> > want
>>> >> > >> to
>>> >> > >> > > >>> > > > create
>>> >> > >> > > >>> > > > a map in your adaptor class and keep track of
>>> pools that
>>> >> > >> have
>>> >> > >> > > been
>>> >> > >> > > >>> > > > created, LibvirtStorageAdaptor doesn't have to do
>>> this
>>> >> > >> because
>>> >> > >> > it
>>> >> > >> > > >>> > > > asks
>>> >> > >> > > >>> > > > libvirt about which storage pools exist. There are
>>> also
>>> >> > >> calls
>>> >> > >> > to
>>> >> > >> > > >>> > > > refresh the pool stats, and all of the other calls
>>> can
>>> >> be
>>> >> > >> seen
>>> >> > >> > in
>>> >> > >> > > >>> > > > the
>>> >> > >> > > >>> > > > StorageAdaptor as well. There's a createPhysical
>>> disk,
>>> >> > >> clone,
>>> >> > >> > > etc,
>>> >> > >> > > >>> > > > but
>>> >> > >> > > >>> > > > it's probably a hold-over from 4.1, as I have the
>>> vague
>>> >> > idea
>>> >> > >> > that
>>> >> > >> > > >>> > > > volumes are created on the mgmt server via the
>>> plugin
>>> >> now,
>>> >> > >> so
>>> >> > >> > > >>> > > > whatever
>>> >> > >> > > >>> > > > doesn't apply can just be stubbed out (or
>>> optionally
>>> >> > >> > > >>> > > > extended/reimplemented here, if you don't mind the
>>> hosts
>>> >> > >> > talking
>>> >> > >> > > to
>>> >> > >> > > >>> > > > the san api).
>>> >> > >> > > >>> > > >
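
As an illustration of getPhysicalDisk doing the login work, here is a sketch that drives plain iscsiadm through ProcessBuilder and returns the block-device path; a real adaptor would wrap the path in a KVMPhysicalDisk and use the agent's own script utilities and error handling.

    import java.util.Arrays;

    // Sketch: log the host in to the target for one LUN with plain iscsiadm
    // and return the resulting block device path.
    public class IscsiLoginExample {

        public static String loginAndGetDevicePath(String iqn, String host, int port)
                throws Exception {
            String portal = host + ":" + port;

            // iscsiadm -m node -T <iqn> -p <portal> -o new
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "-o", "new");
            // iscsiadm -m node -T <iqn> -p <portal> --login
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

            // Once logged in, udev exposes the LUN under a stable by-path name.
            return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
        }

        private static void run(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new RuntimeException("Command failed: " + Arrays.toString(cmd));
            }
        }
    }
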
>>> >> > >> > > >>> > > > There is a difference between attaching new
>>> volumes and
>>> >> > >> > > launching a
>>> >> > >> > > >>> > > > VM
>>> >> > >> > > >>> > > > with existing volumes.  In the latter case, the VM
>>> >> > >> definition
>>> >> > >> > > that
>>> >> > >> > > >>> > > > was
>>> >> > >> > > >>> > > > passed to the KVM agent includes the disks,
>>> >> > (StartCommand).
>>> >> > >> > > >>> > > >
>>> >> > >> > > >>> > > > I'd be interested in how your pool is defined for
>>> Xen, I
>>> >> > >> > imagine
>>> >> > >> > > it
>>> >> > >> > > >>> > > > would need to be kept the same. Is it just a
>>> definition
>>> >> to
>>> >> > >> the
>>> >> > >> > > SAN
>>> >> > >> > > >>> > > > (ip address or some such, port number) and perhaps
>>> a
>>> >> > volume
>>> >> > >> > pool
>>> >> > >> > > >>> > > > name?
>>> >> > >> > > >>> > > >
>>> >> > >> > > >>> > > > > If there is a way for me to update the ACL list
>>> on the
>>> >> > >> SAN to
>>> >> > >> > > have
>>> >> > >> > > >>> > > only a
>>> >> > >> > > >>> > > > > single KVM host have access to the volume, that
>>> would
>>> >> be
>>> >> > >> > ideal.
>>> >> > >> > > >>> > > >
>>> >> > >> > > >>> > > > That depends on your SAN API.  I was under the
>>> >> impression
>>> >> > >> that
>>> >> > >> > > the
>>> >> > >> > > >>> > > > storage plugin framework allowed for acls, or for
>>> you to
>>> >> > do
>>> >> > >> > > whatever
>>> >> > >> > > >>> > > > you want for create/attach/delete/snapshot, etc.
>>> You'd
>>> >> > just
>>> >> > >> > call
>>> >> > >> > > >>> > > > your
>>> >> > >> > > >>> > > > SAN API with the host info for the ACLs prior to
>>> when
>>> >> the
>>> >> > >> disk
>>> >> > >> > is
>>> >> > >> > > >>> > > > attached (or the VM is started).  I'd have to look
>>> more
>>> >> at
>>> >> > >> the
>>> >> > >> > > >>> > > > framework to know the details, in 4.1 I would do
>>> this in
>>> >> > >> > > >>> > > > getPhysicalDisk just prior to connecting up the
>>> LUN.
>>> >> > >> > > >>> > > >
>>> >> > >> > > >>> > > >
>>> >> > >> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>> >> > >> > > >>> > > > <mi...@solidfire.com> wrote:
>>> >> > >> > > >>> > > > > OK, yeah, the ACL part will be interesting. That
>>> is a
>>> >> > bit
>>> >> > >> > > >>> > > > > different
>>> >> > >> > > >>> > > from
>>> >> > >> > > >>> > > > how
>>> >> > >> > > >>> > > > > it works with XenServer and VMware.
>>> >> > >> > > >>> > > > >
>>> >> > >> > > >>> > > > > Just to give you an idea how it works in 4.2 with
>>> >> > >> XenServer:
>>> >> > >> > > >>> > > > >
>>> >> > >> > > >>> > > > > * The user creates a CS volume (this is just
>>> recorded
>>> >> in
>>> >> > >> the
>>> >> > >> > > >>> > > > cloud.volumes
>>> >> > >> > > >>> > > > > table).
>>> >> > >> > > >>> > > > >
>>> >> > >> > > >>> > > > > * The user attaches the volume as a disk to a VM
>>> for
>>> >> the
>>> >> > >> > first
>>> >> > >> > > >>> > > > > time
>>> >> > >> > > >>> > (if
>>> >> > >> > > >>> > > > the
>>> >> > >> > > >>> > > > > storage allocator picks the SolidFire plug-in,
>>> the
>>> >> > storage
>>> >> > >> > > >>> > > > > framework
>>> >> > >> > > >>> > > > invokes
>>> >> > >> > > >>> > > > > a method on the plug-in that creates a volume on
>>> the
>>> >> > >> > SAN...info
>>> >> > >> > > >>> > > > > like
>>> >> > >> > > >>> > > the
>>> >> > >> > > >>> > > > IQN
>>> >> > >> > > >>> > > > > of the SAN volume is recorded in the DB).
>>> >> > >> > > >>> > > > >
>>> >> > >> > > >>> > > > > * CitrixResourceBase's
>>> execute(AttachVolumeCommand) is



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Since a KVMStoragePool returns capacity, used, and available byte counts, I
will probably need to look into having this information ignored when the
storage_pool in question is "managed" since - in my case - those numbers
wouldn't really mean anything.
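
A minimal sketch (Java) of how a "managed" pool might report those numbers.
The interface below is an illustrative stand-in for the accessors described
above, not the real KVMStoragePool interface, and the reporting policy shown
is only one possible choice:

    // Sketch only: StoragePoolStats mirrors the accessors described above;
    // the real KVMStoragePool interface may differ.
    interface StoragePoolStats {
        long getCapacity();
        long getUsed();
        long getAvailable();
    }

    // A "managed" pool fronts exactly one LUN, so it can simply echo the size
    // that was requested from the SAN and leave real usage accounting to the
    // storage plug-in, keeping the agent-side numbers harmless.
    class ManagedIscsiPoolSketch implements StoragePoolStats {
        private final long requestedSizeInBytes; // size asked of the SAN for this LUN

        ManagedIscsiPoolSketch(long requestedSizeInBytes) {
            this.requestedSizeInBytes = requestedSizeInBytes;
        }

        @Override
        public long getCapacity() {
            return requestedSizeInBytes;
        }

        @Override
        public long getUsed() {
            // Usage is tracked by the plug-in / SAN, not by the host.
            return 0;
        }

        @Override
        public long getAvailable() {
            return requestedSizeInBytes;
        }
    }

The point is only that the agent-side accessors can keep returning something
sane while the plug-in and the SAN stay the source of truth for usage.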


On Wed, Sep 18, 2013 at 10:53 AM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Sure, sounds good.
>
> Right now there are only two storage plug-ins: Edison's default plug-in
> and the SolidFire plug-in.
>
> As an example, when createAsync is called in the plug-in, mine creates a
> new volume (LUN) on the SAN with a capacity and number of Min, Max, and
> Burst IOPS. Edison's sends a command to the hypervisor to take a chunk out
> of preallocated storage for a new volume (like create a new VDI in an
> existing SR).
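
A rough sketch of that split, for illustration. The SanClient type and its
createLun(...) call are invented names, not the real SolidFire plug-in or
storage framework API; the real createAsync signature is different:

    // Hypothetical sketch: one LUN per CloudStack volume, with QoS limits
    // taken from the disk offering. A "traditional" driver would instead ask
    // the hypervisor to carve a VDI out of an existing, preallocated SR.
    class ManagedVolumeCreateSketch {

        interface SanClient {
            /** Creates one LUN sized and rate-limited as requested; returns its IQN. */
            String createLun(String name, long sizeInBytes,
                             long minIops, long maxIops, long burstIops);
        }

        private final SanClient san;

        ManagedVolumeCreateSketch(SanClient san) {
            this.san = san;
        }

        String createVolume(String volumeName, long sizeInBytes,
                            long minIops, long maxIops, long burstIops) {
            return san.createLun(volumeName, sizeInBytes, minIops, maxIops, burstIops);
        }
    }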
>
>
> On Wed, Sep 18, 2013 at 10:49 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> That wasn't my question, but I feel we're getting off in the weeds and
>> I can just look at the storage framework to see how it works and what
>> options it supports.
>>
>> On Wed, Sep 18, 2013 at 10:44 AM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > At the time being, I am not aware of any other storage vendor with truly
>> > guaranteed QoS.
>> >
>> > Most implement QoS in a relative sense (like thread priorities).
>> >
>> >
>> > On Wed, Sep 18, 2013 at 7:57 AM, Marcus Sorensen <shadowsor@gmail.com
>> >wrote:
>> >
>> >> Yeah, that's why I thought it was specific to your implementation.
>> >> Perhaps that's true, then?
>> >> On Sep 18, 2013 12:04 AM, "Mike Tutkowski" <mike.tutkowski@solidfire.com>
>> >> wrote:
>> >>
>> >> > I totally get where you're coming from with the tiered-pool approach,
>> >> > though.
>> >> >
>> >> > Prior to SolidFire, I worked at HP and the product I worked on allowed a
>> >> > single, clustered SAN to host multiple pools of storage. One pool might
>> >> > be made up of all-SSD storage nodes while another pool might be made up
>> >> > of slower HDDs.
>> >> >
>> >> > That kind of tiering is not what SolidFire QoS is about, though, as that
>> >> > kind of tiering does not guarantee QoS.
>> >> >
>> >> > In the SolidFire SAN, QoS was designed in from the beginning and is
>> >> > extremely granular. Each volume has its own performance and capacity.
>> >> > You do not have to worry about Noisy Neighbors.
>> >> >
>> >> > The idea is to encourage businesses to trust the cloud with their most
>> >> > critical business applications at a price point on par with traditional
>> >> > SANs.
>> >> >
>> >> >
>> >> > On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski <
>> >> > mike.tutkowski@solidfire.com> wrote:
>> >> >
>> >> > > Ah, I think I see the miscommunication.
>> >> > >
>> >> > > I should have gone into a bit more detail about the SolidFire SAN.
>> >> > >
>> >> > > It is built from the ground up to support QoS on a LUN-by-LUN
>> basis.
>> >> > Every
>> >> > > LUN is assigned a Min, Max, and Burst number of IOPS.
>> >> > >
>> >> > > The Min IOPS are a guaranteed number (as long as the SAN itself is
>> not
>> >> > > over provisioned). Capacity and IOPS are provisioned independently.
>> >> > > Multiple volumes and multiple tenants using the same SAN do not
>> suffer
>> >> > from
>> >> > > the Noisy Neighbor effect.
>> >> > >
>> >> > > When you create a Disk Offering in CS that is storage tagged to use
>> >> > > SolidFire primary storage, you specify a Min, Max, and Burst
>> number of
>> >> > IOPS
>> >> > > to provision from the SAN for volumes created from that Disk
>> Offering.
>> >> > >
>> >> > > There is no notion of RAID groups that you see in more traditional
>> >> SANs.
>> >> > > The SAN is built from clusters of storage nodes and data is
>> replicated
>> >> > > amongst all SSDs in all storage nodes (this is an SSD-only SAN) in
>> the
>> >> > > cluster to avoid hot spots and protect the data should drives
>> and/or
>> >> > > nodes fail. You then scale the SAN by adding new storage nodes.
>> >> > >
>> >> > > Data is compressed and de-duplicated inline across the cluster and
>> all
>> >> > > volumes are thinly provisioned.
>> >> > >
>> >> > >
>> >> > > On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <
>> shadowsor@gmail.com
>> >> > >wrote:
>> >> > >
>> >> > >> I'm surprised there's no mention of pool on the SAN in your
>> >> description
>> >> > of
>> >> > >> the framework. I had assumed this was specific to your
>> implementation,
>> >> > >> because normally SANs host multiple disk pools, maybe multiple
>> RAID
>> >> 50s
>> >> > >> and
>> >> > >> 10s, or however the SAN admin wants to split it up. Maybe a pool
>> >> > intended
>> >> > >> for root disks and a separate one for data disks. Or one pool for
>> >> > >> cloudstack and one dedicated to some other internal db
>> application.
>> >> But
>> >> > it
>> >> > >> sounds as though there's no place to specify which disks or pool
>> on
>> >> the
>> >> > >> SAN
>> >> > >> to use.
>> >> > >>
>> >> > >> We implemented our own internal storage SAN plugin based on 4.1.
>> We
>> >> used
>> >> > >> the 'path' attribute of the primary storage pool object to specify
>> >> which
>> >> > >> pool name on the back end SAN to use, so we could create all-ssd
>> pools
>> >> > and
>> >> > >> slower spindle pools, then differentiate between them based on
>> storage
>> >> > >> tags. Normally the path attribute would be the mount point for
>> NFS,
>> >> but
>> >> > >> its
>> >> > >> just a string. So when registering ours we enter San dns host
>> name,
>> >> the
>> >> > >> san's rest api port, and the pool name. Then luns created from
>> that
>> >> > >> primary
>> >> > >> storage come from the matching disk pool on the SAN. We can
>> create and
>> >> > >> register multiple pools of different types and purposes on the
>> same
>> >> SAN.
>> >> > >> We
>> >> > >> haven't yet gotten to porting it to the 4.2 frame work, so it
>> will be
>> >> > >> interesting to see what we can come up with to make it work
>> similarly.
>> >> > >>  On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <
>> >> > mike.tutkowski@solidfire.com
>> >> > >> >
>> >> > >> wrote:
>> >> > >>
>> >> > >> > What you're saying here is definitely something we should talk
>> >> about.
>> >> > >> >
>> >> > >> > Hopefully my previous e-mail has clarified how this works a bit.
>> >> > >> >
>> >> > >> > It mainly comes down to this:
>> >> > >> >
>> >> > >> > For the first time in CS history, primary storage is no longer
>> >> > required
>> >> > >> to
>> >> > >> > be preallocated by the admin and then handed to CS. CS volumes
>> don't
>> >> > >> have
>> >> > >> > to share a preallocated volume anymore.
>> >> > >> >
>> >> > >> > As of 4.2, primary storage can be based on a SAN (or some other
>> >> > storage
>> >> > >> > device). You can tell CS how many bytes and IOPS to use from
>> this
>> >> > >> storage
>> >> > >> > device and CS invokes the appropriate plug-in to carve out LUNs
>> >> > >> > dynamically.
>> >> > >> >
>> >> > >> > Each LUN is home to one and only one data disk. Data disks - in
>> this
>> >> > >> model
>> >> > >> > - never share a LUN.
>> >> > >> >
>> >> > >> > The main use case for this is so a CS volume can deliver
>> guaranteed
>> >> > >> IOPS if
>> >> > >> > the storage device (ex. SolidFire SAN) delivers guaranteed IOPS
>> on a
>> >> > >> > LUN-by-LUN basis.
>> >> > >> >
>> >> > >> >
>> >> > >> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <
>> >> > shadowsor@gmail.com
>> >> > >> > >wrote:
>> >> > >> >
>> >> > >> > > I guess whether or not a solidfire device is capable of
>> hosting
>> >> > >> > > multiple disk pools is irrelevant, we'd hope that we could
>> get the
>> >> > >> > > stats (maybe 30TB available, and 15TB allocated in LUNs). But
>> if
>> >> > these
>> >> > >> > > stats aren't collected, I can't as an admin define multiple
>> pools
>> >> > and
>> >> > >> > > expect cloudstack to allocate evenly from them or fill one up
>> and
>> >> > move
>> >> > >> > > to the next, because it doesn't know how big it is.
>> >> > >> > >
>> >> > >> > > Ultimately this discussion has nothing to do with the KVM
>> stuff
>> >> > >> > > itself, just a tangent, but something to think about.
>> >> > >> > >
>> >> > >> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <
>> >> > >> shadowsor@gmail.com>
>> >> > >> > > wrote:
>> >> > >> > > > Ok, on most storage pools it shows how many GB free/used
>> when
>> >> > >> listing
>> >> > >> > > > the pool both via API and in the UI. I'm guessing those are
>> >> empty
>> >> > >> then
>> >> > >> > > > for the solid fire storage, but it seems like the user
>> should
>> >> have
>> >> > >> to
>> >> > >> > > > define some sort of pool that the luns get carved out of,
>> and
>> >> you
>> >> > >> > > > should be able to get the stats for that, right? Or is a
>> solid
>> >> > fire
>> >> > >> > > > appliance only one pool per appliance? This isn't about
>> billing,
>> >> > but
>> >> > >> > > > just so cloudstack itself knows whether or not there is
>> space
>> >> left
>> >> > >> on
>> >> > >> > > > the storage device, so cloudstack can go on allocating from
>> a
>> >> > >> > > > different primary storage as this one fills up. There are
>> also
>> >> > >> > > > notifications and things. It seems like there should be a
>> call
>> >> you
>> >> > >> can
>> >> > >> > > > handle for this, maybe Edison knows.
>> >> > >> > > >
>> >> > >> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <
>> >> > >> shadowsor@gmail.com>
>> >> > >> > > wrote:
>> >> > >> > > >> You respond to more than attach and detach, right? Don't
>> you
>> >> > create
>> >> > >> > > luns as
>> >> > >> > > >> well? Or are you just referring to the hypervisor stuff?
>> >> > >> > > >>
>> >> > >> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
>> >> > >> > mike.tutkowski@solidfire.com
>> >> > >> > > >
>> >> > >> > > >> wrote:
>> >> > >> > > >>>
>> >> > >> > > >>> Hi Marcus,
>> >> > >> > > >>>
>> >> > >> > > >>> I never need to respond to a CreateStoragePool call for
>> either
>> >> > >> > > XenServer
>> >> > >> > > >>> or
>> >> > >> > > >>> VMware.
>> >> > >> > > >>>
>> >> > >> > > >>> What happens is I respond only to the Attach- and
>> >> Detach-volume
>> >> > >> > > commands.
>> >> > >> > > >>>
>> >> > >> > > >>> Let's say an attach comes in:
>> >> > >> > > >>>
>> >> > >> > > >>> In this case, I check to see if the storage is "managed."
>> >> > Talking
>> >> > >> > > >>> XenServer
>> >> > >> > > >>> here, if it is, I log in to the LUN that is the disk we
>> want
>> >> to
>> >> > >> > attach.
>> >> > >> > > >>> After, if this is the first time attaching this disk, I
>> create
>> >> > an
>> >> > >> SR
>> >> > >> > > and a
>> >> > >> > > >>> VDI within the SR. If it is not the first time attaching
>> this
>> >> > >> disk,
>> >> > >> > the
>> >> > >> > > >>> LUN
>> >> > >> > > >>> already has the SR and VDI on it.
>> >> > >> > > >>>
>> >> > >> > > >>> Once this is done, I let the normal "attach" logic run
>> because
>> >> > >> this
>> >> > >> > > logic
>> >> > >> > > >>> expected an SR and a VDI and now it has it.
>> >> > >> > > >>>
>> >> > >> > > >>> It's the same thing for VMware: Just substitute datastore
>> for
>> >> SR
>> >> > >> and
>> >> > >> > > VMDK
>> >> > >> > > >>> for VDI.
>> >> > >> > > >>>
>> >> > >> > > >>> Does that make sense?
>> >> > >> > > >>>
>> >> > >> > > >>> Thanks!
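
A condensed sketch of that managed-storage attach branch (the types and helper
names here are stand-ins for illustration; the real logic lives in the
hypervisor resource's attach handling, e.g. CitrixResourceBase):

    // Illustrative pseudocode of the "managed" attach path described above.
    public class ManagedAttachExample {

        interface HypervisorConnection {
            void loginToIscsiTarget(String storageHost, String targetIqn);
            boolean srExistsForTarget(String targetIqn);
            void createSrAndVdi(String targetIqn, long sizeInBytes);
        }

        static void attachManagedVolume(HypervisorConnection conn, String storageHost,
                                        String targetIqn, long sizeInBytes, boolean managed) {
            if (managed) {
                // Log the host in to the LUN that backs this CloudStack volume.
                conn.loginToIscsiTarget(storageHost, targetIqn);

                // First attach: create an SR and a single VDI on the LUN.
                // Re-attach: the SR and VDI already exist, so they are simply reused.
                if (!conn.srExistsForTarget(targetIqn)) {
                    conn.createSrAndVdi(targetIqn, sizeInBytes);
                }
            }
            // The normal attach logic then runs; it expects an SR and a VDI, and now has them.
        }
    }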
>> >> > >> > > >>>
>> >> > >> > > >>>
>> >> > >> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
>> >> > >> > > >>> <sh...@gmail.com>wrote:
>> >> > >> > > >>>
>> >> > >> > > >>> > What do you do with Xen? I imagine the user enters the
>> SAN
>> >> > >> details
>> >> > >> > > when
>> >> > >> > > >>> > registering the pool? And the pool details are basically
>> just
>> >> > >> > > instructions
>> >> > >> > > >>> > on
>> >> > >> > > >>> > how to log into a target, correct?
>> >> > >> > > >>> >
>> >> > >> > > >>> > You can choose to log in a KVM host to the target during
>> >> > >> > > >>> > createStoragePool
>> >> > >> > > >>> > and save the pool in a map, or just save the pool info
>> in a
>> >> > map
>> >> > >> for
>> >> > >> > > >>> > future
>> >> > >> > > >>> > reference by uuid, for when you do need to log in. The
>> >> > >> > > createStoragePool
>> >> > >> > > >>> > then just becomes a way to save the pool info to the
>> agent.
>> >> > >> > > Personally,
>> >> > >> > > >>> > I'd
>> >> > >> > > >>> > log in on the pool create and look/scan for specific
>> luns
>> >> when
>> >> > >> > > they're
>> >> > >> > > >>> > needed, but I haven't thought it through thoroughly. I
>> just
>> >> > say
>> >> > >> > that
>> >> > >> > > >>> > mainly
>> >> > >> > > >>> > because login only happens once, the first time the
>> pool is
>> >> > >> used,
>> >> > >> > and
>> >> > >> > > >>> > every
>> >> > >> > > >>> > other storage command is about discovering new luns or
>> maybe
>> >> > >> > > >>> > deleting/disconnecting luns no longer needed. On the
>> other
>> >> > hand,
>> >> > >> > you
>> >> > >> > > >>> > could
>> >> > >> > > >>> > do all of the above: log in on pool create, then also
>> check
>> >> if
>> >> > >> > you're
>> >> > >> > > >>> > logged in on other commands and log in if you've lost
>> >> > >> connection.
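
A minimal sketch of the "save the pool info in a map for future reference"
approach (KvmStoragePoolInfo is a made-up stand-in, not an existing CloudStack
type):

    import java.util.HashMap;
    import java.util.Map;

    public class PoolCacheExample {

        static class KvmStoragePoolInfo {
            final String uuid, host, path;
            final int port;
            KvmStoragePoolInfo(String uuid, String host, int port, String path) {
                this.uuid = uuid; this.host = host; this.port = port; this.path = path;
            }
        }

        private final Map<String, KvmStoragePoolInfo> pools = new HashMap<String, KvmStoragePoolInfo>();

        // createStoragePool can be called repeatedly just to make sure the pool is
        // there, so it only records (or refreshes) the SAN details keyed by uuid.
        KvmStoragePoolInfo createStoragePool(String uuid, String host, int port, String path) {
            KvmStoragePoolInfo pool = new KvmStoragePoolInfo(uuid, host, port, path);
            pools.put(uuid, pool);
            // A real adaptor could also log in to the target here, or defer that
            // until a specific LUN is actually needed (e.g. in getPhysicalDisk).
            return pool;
        }

        KvmStoragePoolInfo getStoragePool(String uuid) {
            return pools.get(uuid);
        }
    }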
>> >> > >> > > >>> >
>> >> > >> > > >>> > With Xen, what does your registered pool show in the
>> UI
>> >> for
>> >> > >> > > avail/used
>> >> > >> > > >>> > capacity, and how does it get that info? I assume there
>> is
>> >> > some
>> >> > >> > sort
>> >> > >> > > of
>> >> > >> > > >>> > disk pool that the luns are carved from, and that your
>> >> plugin
>> >> > is
>> >> > >> > > called
>> >> > >> > > >>> > to
>> >> > >> > > >>> > talk to the SAN and expose to the user how much of that
>> pool
>> >> > has
>> >> > >> > been
>> >> > >> > > >>> > allocated. Knowing how you already solve these problems
>> >> with
>> >> > >> Xen
>> >> > >> > > will
>> >> > >> > > >>> > help
>> >> > >> > > >>> > figure out what to do with KVM.
>> >> > >> > > >>> >
>> >> > >> > > >>> > If this is the case, I think the plugin can continue to
>> >> handle
>> >> > >> it
>> >> > >> > > rather
>> >> > >> > > >>> > than getting details from the agent. I'm not sure if
>> that
>> >> > means
>> >> > >> > nulls
>> >> > >> > > >>> > are
>> >> > >> > > >>> > OK for these on the agent side or what, I need to look
>> at
>> >> the
>> >> > >> > storage
>> >> > >> > > >>> > plugin arch more closely.
>> >> > >> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
>> >> > >> > > mike.tutkowski@solidfire.com>
>> >> > >> > > >>> > wrote:
>> >> > >> > > >>> >
>> >> > >> > > >>> > > Hey Marcus,
>> >> > >> > > >>> > >
>> >> > >> > > >>> > > I'm reviewing your e-mails as I implement the
>> necessary
>> >> > >> methods
>> >> > >> > in
>> >> > >> > > new
>> >> > >> > > >>> > > classes.
>> >> > >> > > >>> > >
>> >> > >> > > >>> > > "So, referencing StorageAdaptor.java,
>> createStoragePool
>> >> > >> accepts
>> >> > >> > > all of
>> >> > >> > > >>> > > the pool data (host, port, name, path) which would be
>> used
>> >> > to
>> >> > >> log
>> >> > >> > > the
>> >> > >> > > >>> > > host into the initiator."
>> >> > >> > > >>> > >
>> >> > >> > > >>> > > Can you tell me, in my case, since a storage pool
>> (primary
>> >> > >> > > storage) is
>> >> > >> > > >>> > > actually the SAN, I wouldn't really be logging into
>> >> anything
>> >> > >> at
>> >> > >> > > this
>> >> > >> > > >>> > point,
>> >> > >> > > >>> > > correct?
>> >> > >> > > >>> > >
>> >> > >> > > >>> > > Also, what kind of capacity, available, and used bytes
>> >> make
>> >> > >> sense
>> >> > >> > > to
>> >> > >> > > >>> > report
>> >> > >> > > >>> > > for KVMStoragePool (since KVMStoragePool represents
>> the
>> >> SAN
>> >> > >> in my
>> >> > >> > > case
>> >> > >> > > >>> > and
>> >> > >> > > >>> > > not an individual LUN)?
>> >> > >> > > >>> > >
>> >> > >> > > >>> > > Thanks!
>> >> > >> > > >>> > >
>> >> > >> > > >>> > >
>> >> > >> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
>> >> > >> > > shadowsor@gmail.com
>> >> > >> > > >>> > > >wrote:
>> >> > >> > > >>> > >
>> >> > >> > > >>> > > > Ok, KVM will be close to that, of course, because
>> only
>> >> the
>> >> > >> > > >>> > > > hypervisor
>> >> > >> > > >>> > > > classes differ, the rest is all mgmt server.
>> Creating a
>> >> > >> volume
>> >> > >> > is
>> >> > >> > > >>> > > > just
>> >> > >> > > >>> > > > a db entry until it's deployed for the first time.
>> >> > >> > > >>> > > > AttachVolumeCommand
>> >> > >> > > >>> > > > on the agent side (LibvirtStorageAdaptor.java is
>> >> analogous
>> >> > >> to
>> >> > >> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm
>> commands
>> >> > (via
>> >> > >> a
>> >> > >> > KVM
>> >> > >> > > >>> > > > StorageAdaptor) to log in the host to the target and
>> >> then
>> >> > >> you
>> >> > >> > > have a
>> >> > >> > > >>> > > > block device.  Maybe libvirt will do that for you,
>> but
>> >> my
>> >> > >> quick
>> >> > >> > > read
>> >> > >> > > >>> > > > made it sound like the iscsi libvirt pool type is
>> >> > actually a
>> >> > >> > > pool,
>> >> > >> > > >>> > > > not
>> >> > >> > > >>> > > > a lun or volume, so you'll need to figure out if
>> that
>> >> > works
>> >> > >> or
>> >> > >> > if
>> >> > >> > > >>> > > > you'll have to use iscsiadm commands.
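
For reference, the iscsiadm sequence being described can be driven from a
custom adaptor roughly like this (the portal and IQN below are placeholders):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class IscsiLoginExample {

        // Run a command and echo its output; a real adaptor would parse and check it.
        static int run(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
            BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
            for (String line; (line = r.readLine()) != null; ) {
                System.out.println(line);
            }
            return p.waitFor();
        }

        public static void main(String[] args) throws Exception {
            String portal = "192.168.1.10:3260";                // SAN portal (placeholder)
            String iqn = "iqn.2010-01.com.example:volume-1";    // target IQN (placeholder)

            // Discover targets on the portal, then log in to the one backing the volume.
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

            // After login the LUN appears as a block device, e.g. under
            // /dev/disk/by-path/ip-<portal>-iscsi-<iqn>-lun-0, which can be handed to the VM.
        }
    }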
>> >> > >> > > >>> > > >
>> >> > >> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor
>> >> (because
>> >> > >> > Libvirt
>> >> > >> > > >>> > > > doesn't really manage your pool the way you want),
>> >> you're
>> >> > >> going
>> >> > >> > > to
>> >> > >> > > >>> > > > have to create a version of KVMStoragePool class
>> and a
>> >> > >> > > >>> > > > StorageAdaptor
>> >> > >> > > >>> > > > class (see LibvirtStoragePool.java and
>> >> > >> > > LibvirtStorageAdaptor.java),
>> >> > >> > > >>> > > > implementing all of the methods, then in
>> >> > >> KVMStorageManager.java
>> >> > >> > > >>> > > > there's a "_storageMapper" map. This is used to
>> select
>> >> the
>> >> > >> > > correct
>> >> > >> > > >>> > > > adaptor, you can see in this file that every call
>> first
>> >> > >> pulls
>> >> > >> > the
>> >> > >> > > >>> > > > correct adaptor out of this map via
>> getStorageAdaptor.
>> >> So
>> >> > >> you
>> >> > >> > can
>> >> > >> > > >>> > > > see
>> >> > >> > > >>> > > > a comment in this file that says "add other storage
>> >> > adaptors
>> >> > >> > > here",
>> >> > >> > > >>> > > > where it puts to this map, this is where you'd
>> register
>> >> > your
>> >> > >> > > >>> > > > adaptor.
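
A compile-ready illustration of the "_storageMapper" pattern described above
(the adaptor classes here are stubs; the real interface is CloudStack's
StorageAdaptor):

    import java.util.HashMap;
    import java.util.Map;

    public class StorageMapperExample {

        interface StorageAdaptor { /* createStoragePool, getPhysicalDisk, ... */ }

        static class LibvirtAdaptorStub implements StorageAdaptor { }
        static class SolidFireAdaptorStub implements StorageAdaptor { }

        private final Map<String, StorageAdaptor> _storageMapper = new HashMap<String, StorageAdaptor>();

        StorageMapperExample() {
            _storageMapper.put("libvirt", new LibvirtAdaptorStub());
            // "add other storage adaptors here":
            _storageMapper.put("solidfire", new SolidFireAdaptorStub());
        }

        // Every storage call first pulls the right adaptor out of the map.
        StorageAdaptor getStorageAdaptor(String key) {
            StorageAdaptor adaptor = _storageMapper.get(key);
            return adaptor != null ? adaptor : _storageMapper.get("libvirt");
        }
    }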
>> >> > >> > > >>> > > >
>> >> > >> > > >>> > > > So, referencing StorageAdaptor.java,
>> createStoragePool
>> >> > >> accepts
>> >> > >> > > all
>> >> > >> > > >>> > > > of
>> >> > >> > > >>> > > > the pool data (host, port, name, path) which would
>> be
>> >> used
>> >> > >> to
>> >> > >> > log
>> >> > >> > > >>> > > > the
>> >> > >> > > >>> > > > host into the initiator. I *believe* the method
>> >> > >> getPhysicalDisk
>> >> > >> > > will
>> >> > >> > > >>> > > > need to do the work of attaching the lun.
>> >> > >>  AttachVolumeCommand
>> >> > >> > > calls
>> >> > >> > > >>> > > > this and then creates the XML diskdef and attaches
>> it to
>> >> > the
>> >> > >> > VM.
>> >> > >> > > >>> > > > Now,
>> >> > >> > > >>> > > > one thing you need to know is that
>> createStoragePool is
>> >> > >> called
>> >> > >> > > >>> > > > often,
>> >> > >> > > >>> > > > sometimes just to make sure the pool is there. You
>> may
>> >> > want
>> >> > >> to
>> >> > >> > > >>> > > > create
>> >> > >> > > >>> > > > a map in your adaptor class and keep track of pools
>> that
>> >> > >> have
>> >> > >> > > been
>> >> > >> > > >>> > > > created, LibvirtStorageAdaptor doesn't have to do
>> this
>> >> > >> because
>> >> > >> > it
>> >> > >> > > >>> > > > asks
>> >> > >> > > >>> > > > libvirt about which storage pools exist. There are
>> also
>> >> > >> calls
>> >> > >> > to
>> >> > >> > > >>> > > > refresh the pool stats, and all of the other calls
>> can
>> >> be
>> >> > >> seen
>> >> > >> > in
>> >> > >> > > >>> > > > the
>> >> > >> > > >>> > > > StorageAdaptor as well. There's a createPhysical
>> disk,
>> >> > >> clone,
>> >> > >> > > etc,
>> >> > >> > > >>> > > > but
>> >> > >> > > >>> > > > it's probably a hold-over from 4.1, as I have the
>> vague
>> >> > idea
>> >> > >> > that
>> >> > >> > > >>> > > > volumes are created on the mgmt server via the
>> plugin
>> >> now,
>> >> > >> so
>> >> > >> > > >>> > > > whatever
>> >> > >> > > >>> > > > doesn't apply can just be stubbed out (or optionally
>> >> > >> > > >>> > > > extended/reimplemented here, if you don't mind the
>> hosts
>> >> > >> > talking
>> >> > >> > > to
>> >> > >> > > >>> > > > the san api).
>> >> > >> > > >>> > > >
>> >> > >> > > >>> > > > There is a difference between attaching new volumes
>> and
>> >> > >> > > launching a
>> >> > >> > > >>> > > > VM
>> >> > >> > > >>> > > > with existing volumes.  In the latter case, the VM
>> >> > >> definition
>> >> > >> > > that
>> >> > >> > > >>> > > > was
>> >> > >> > > >>> > > > passed to the KVM agent includes the disks,
>> >> > (StartCommand).
>> >> > >> > > >>> > > >
>> >> > >> > > >>> > > > I'd be interested in how your pool is defined for
>> Xen, I
>> >> > >> > imagine
>> >> > >> > > it
>> >> > >> > > >>> > > > would need to be kept the same. Is it just a
>> definition
>> >> to
>> >> > >> the
>> >> > >> > > SAN
>> >> > >> > > >>> > > > (ip address or some such, port number) and perhaps a
>> >> > volume
>> >> > >> > pool
>> >> > >> > > >>> > > > name?
>> >> > >> > > >>> > > >
>> >> > >> > > >>> > > > > If there is a way for me to update the ACL list
>> on the
>> >> > >> SAN to
>> >> > >> > > have
>> >> > >> > > >>> > > only a
>> >> > >> > > >>> > > > > single KVM host have access to the volume, that
>> would
>> >> be
>> >> > >> > ideal.
>> >> > >> > > >>> > > >
>> >> > >> > > >>> > > > That depends on your SAN API.  I was under the
>> >> impression
>> >> > >> that
>> >> > >> > > the
>> >> > >> > > >>> > > > storage plugin framework allowed for acls, or for
>> you to
>> >> > do
>> >> > >> > > whatever
>> >> > >> > > >>> > > > you want for create/attach/delete/snapshot, etc.
>> You'd
>> >> > just
>> >> > >> > call
>> >> > >> > > >>> > > > your
>> >> > >> > > >>> > > > SAN API with the host info for the ACLs prior to
>> when
>> >> the
>> >> > >> disk
>> >> > >> > is
>> >> > >> > > >>> > > > attached (or the VM is started).  I'd have to look
>> more
>> >> at
>> >> > >> the
>> >> > >> > > >>> > > > framework to know the details, in 4.1 I would do
>> this in
>> >> > >> > > >>> > > > getPhysicalDisk just prior to connecting up the LUN.
>> >> > >> > > >>> > > >
>> >> > >> > > >>> > > >
>> >> > >> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> >> > >> > > >>> > > > <mi...@solidfire.com> wrote:
>> >> > >> > > >>> > > > > OK, yeah, the ACL part will be interesting. That
>> is a
>> >> > bit
>> >> > >> > > >>> > > > > different
>> >> > >> > > >>> > > from
>> >> > >> > > >>> > > > how
>> >> > >> > > >>> > > > > it works with XenServer and VMware.
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > Just to give you an idea how it works in 4.2 with
>> >> > >> XenServer:
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > * The user creates a CS volume (this is just
>> recorded
>> >> in
>> >> > >> the
>> >> > >> > > >>> > > > cloud.volumes
>> >> > >> > > >>> > > > > table).
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > * The user attaches the volume as a disk to a VM
>> for
>> >> the
>> >> > >> > first
>> >> > >> > > >>> > > > > time
>> >> > >> > > >>> > (if
>> >> > >> > > >>> > > > the
>> >> > >> > > >>> > > > > storage allocator picks the SolidFire plug-in, the
>> >> > storage
>> >> > >> > > >>> > > > > framework
>> >> > >> > > >>> > > > invokes
>> >> > >> > > >>> > > > > a method on the plug-in that creates a volume on
>> the
>> >> > >> > SAN...info
>> >> > >> > > >>> > > > > like
>> >> > >> > > >>> > > the
>> >> > >> > > >>> > > > IQN
>> >> > >> > > >>> > > > > of the SAN volume is recorded in the DB).
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > * CitrixResourceBase's
>> execute(AttachVolumeCommand) is
>> >> > >> > > executed.
>> >> > >> > > >>> > > > > It
>> >> > >> > > >>> > > > > determines based on a flag passed in that the
>> storage
>> >> in
>> >> > >> > > question
>> >> > >> > > >>> > > > > is
>> >> > >> > > >>> > > > > "CloudStack-managed" storage (as opposed to
>> >> > "traditional"
>> >> > >> > > >>> > preallocated
>> >> > >> > > >>> > > > > storage). This tells it to discover the iSCSI
>> target.
>> >> > Once
>> >> > >> > > >>> > > > > discovered
>> >> > >> > > >>> > > it
>> >> > >> > > >>> > > > > determines if the iSCSI target already contains a
>> >> > storage
>> >> > >> > > >>> > > > > repository
>> >> > >> > > >>> > > (it
>> >> > >> > > >>> > > > > would if this were a re-attach situation). If it
>> does
>> >> > >> contain
>> >> > >> > > an
>> >> > >> > > >>> > > > > SR
>> >> > >> > > >>> > > > already,
>> >> > >> > > >>> > > > > then there should already be one VDI, as well. If
>> >> there
>> >> > >> is no
>> >> > >> > > SR,
>> >> > >> > > >>> > > > > an
>> >> > >> > > >>> > SR
>> >> > >> > > >>> > > > is
>> >> > >> > > >>> > > > > created and a single VDI is created within it
>> (that
>> >> > takes
>> >> > >> up
>> >> > >> > > about
>> >> > >> > > >>> > > > > as
>> >> > >> > > >>> > > > much
>> >> > >> > > >>> > > > > space as was requested for the CloudStack volume).
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > * The normal attach-volume logic continues (it
>> depends
>> >> > on
>> >> > >> the
>> >> > >> > > >>> > existence
>> >> > >> > > >>> > > > of
>> >> > >> > > >>> > > > > an SR and a VDI).
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > The VMware case is essentially the same (mainly
>> just
>> >> > >> > substitute
>> >> > >> > > >>> > > datastore
>> >> > >> > > >>> > > > > for SR and VMDK for VDI).
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > In both cases, all hosts in the cluster have
>> >> discovered
>> >> > >> the
>> >> > >> > > iSCSI
>> >> > >> > > >>> > > target,
>> >> > >> > > >>> > > > > but only the host that is currently running the VM
>> >> that
>> >> > is
>> >> > >> > > using
>> >> > >> > > >>> > > > > the
>> >> > >> > > >>> > > VDI
>> >> > >> > > >>> > > > (or
>> >> > >> > > >>> > > > > VMKD) is actually using the disk.
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > Live Migration should be OK because the
>> hypervisors
>> >> > >> > communicate
>> >> > >> > > >>> > > > > with
>> >> > >> > > >>> > > > > whatever metadata they have on the SR (or
>> datastore).
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > I see what you're saying with KVM, though.
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > In that case, the hosts are clustered only in
>> >> > CloudStack's
>> >> > >> > > eyes.
>> >> > >> > > >>> > > > > CS
>> >> > >> > > >>> > > > controls
>> >> > >> > > >>> > > > > Live Migration. You don't really need a clustered
>> >> > >> filesystem
>> >> > >> > on
>> >> > >> > > >>> > > > > the
>> >> > >> > > >>> > > LUN.
>> >> > >> > > >>> > > > The
>> >> > >> > > >>> > > > > LUN could be handed over raw to the VM using it.
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > If there is a way for me to update the ACL list
>> on the
>> >> > >> SAN to
>> >> > >> > > have
>> >> > >> > > >>> > > only a
>> >> > >> > > >>> > > > > single KVM host have access to the volume, that
>> would
>> >> be
>> >> > >> > ideal.
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > Also, I agree I'll need to use iscsiadm to
>> discover
>> >> and
>> >> > >> log
>> >> > >> > in
>> >> > >> > > to
>> >> > >> > > >>> > > > > the
>> >> > >> > > >>> > > > iSCSI
>> >> > >> > > >>> > > > > target. I'll also need to take the resultant new
>> >> device
>> >> > >> and
>> >> > >> > > pass
>> >> > >> > > >>> > > > > it
>> >> > >> > > >>> > > into
>> >> > >> > > >>> > > > the
>> >> > >> > > >>> > > > > VM.
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > Does this sound reasonable? Please call me out on
>> >> > >> anything I
>> >> > >> > > seem
>> >> > >> > > >>> > > > incorrect
>> >> > >> > > >>> > > > > about. :)
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > Thanks for all the thought on this, Marcus!
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
>> >> > >> > > >>> > shadowsor@gmail.com>
>> >> > >> > > >>> > > > > wrote:
>> >> > >> > > >>> > > > >>
>> >> > >> > > >>> > > > >> Perfect. You'll have a domain def ( the VM), a
>> disk
>> >> > def,
>> >> > >> and
>> >> > >> > > the
>> >> > >> > > >>> > > attach
>> >> > >> > > >>> > > > >> the disk def to the vm. You may need to do your
>> own
>> >> > >> > > >>> > > > >> StorageAdaptor
>> >> > >> > > >>> > and
>> >> > >> > > >>> > > > run
>> >> > >> > > >>> > > > >> iscsiadm commands to accomplish that, depending
>> on
>> >> how
>> >> > >> the
>> >> > >> > > >>> > > > >> libvirt
>> >> > >> > > >>> > > iscsi
>> >> > >> > > >>> > > > >> works. My impression is that a 1:1:1
>> pool/lun/volume
>> >> > >> isn't
>> >> > >> > > how it
>> >> > >> > > >>> > > works
>> >> > >> > > >>> > > > on
>> >> > >> > > >>> > > > >> xen at the moment, nor is it ideal.
>> >> > >> > > >>> > > > >>
>> >> > >> > > >>> > > > >> Your plugin will handle acls as far as which
>> host can
>> >> > see
>> >> > >> > > which
>> >> > >> > > >>> > > > >> luns
>> >> > >> > > >>> > > as
>> >> > >> > > >>> > > > >> well, I remember discussing that months ago, so
>> that
>> >> a
>> >> > >> disk
>> >> > >> > > won't
>> >> > >> > > >>> > > > >> be
>> >> > >> > > >>> > > > >> connected until the hypervisor has exclusive
>> access,
>> >> so
>> >> > >> it
>> >> > >> > > will
>> >> > >> > > >>> > > > >> be
>> >> > >> > > >>> > > safe
>> >> > >> > > >>> > > > and
>> >> > >> > > >>> > > > >> fence the disk from rogue nodes that cloudstack
>> loses
>> >> > >> > > >>> > > > >> connectivity
>> >> > >> > > >>> > > > with. It
>> >> > >> > > >>> > > > >> should revoke access to everything but the target
>> >> > host...
>> >> > >> > > Except
>> >> > >> > > >>> > > > >> for
>> >> > >> > > >>> > > > during
>> >> > >> > > >>> > > > >> migration but we can discuss that later, there's
>> a
>> >> > >> migration
>> >> > >> > > prep
>> >> > >> > > >>> > > > process
>> >> > >> > > >>> > > > >> where the new host can be added to the acls, and
>> the
>> >> > old
>> >> > >> > host
>> >> > >> > > can
>> >> > >> > > >>> > > > >> be
>> >> > >> > > >>> > > > removed
>> >> > >> > > >>> > > > >> post migration.
>> >> > >> > > >>> > > > >>
>> >> > >> > > >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>> >> > >> > > >>> > > mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > >> wrote:
>> >> > >> > > >>> > > > >>>
>> >> > >> > > >>> > > > >>> Yeah, that would be ideal.
>> >> > >> > > >>> > > > >>>
>> >> > >> > > >>> > > > >>> So, I would still need to discover the iSCSI
>> target,
>> >> > >> log in
>> >> > >> > > to
>> >> > >> > > >>> > > > >>> it,
>> >> > >> > > >>> > > then
>> >> > >> > > >>> > > > >>> figure out what /dev/sdX was created as a result
>> >> (and
>> >> > >> leave
>> >> > >> > > it
>> >> > >> > > >>> > > > >>> as
>> >> > >> > > >>> > is
>> >> > >> > > >>> > > -
>> >> > >> > > >>> > > > do
>> >> > >> > > >>> > > > >>> not format it with any file system...clustered
>> or
>> >> > not).
>> >> > >> I
>> >> > >> > > would
>> >> > >> > > >>> > pass
>> >> > >> > > >>> > > > that
>> >> > >> > > >>> > > > >>> device into the VM.
>> >> > >> > > >>> > > > >>>
>> >> > >> > > >>> > > > >>> Kind of accurate?
>> >> > >> > > >>> > > > >>>
>> >> > >> > > >>> > > > >>>
>> >> > >> > > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus
>> Sorensen <
>> >> > >> > > >>> > > shadowsor@gmail.com>
>> >> > >> > > >>> > > > >>> wrote:
>> >> > >> > > >>> > > > >>>>
>> >> > >> > > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the
>> disk
>> >> > >> > > definitions.
>> >> > >> > > >>> > There
>> >> > >> > > >>> > > > are
>> >> > >> > > >>> > > > >>>> ones that work for block devices rather than
>> files.
>> >> > You
>> >> > >> > can
>> >> > >> > > >>> > > > >>>> piggy
>> >> > >> > > >>> > > > back off
>> >> > >> > > >>> > > > >>>> of the existing disk definitions and attach it
>> to
>> >> the
>> >> > >> vm
>> >> > >> > as
>> >> > >> > > a
>> >> > >> > > >>> > block
>> >> > >> > > >>> > > > device.
>> >> > >> > > >>> > > > >>>> The definition is an XML string per libvirt XML
>> >> > format.
>> >> > >> > You
>> >> > >> > > may
>> >> > >> > > >>> > want
>> >> > >> > > >>> > > > to use
>> >> > >> > > >>> > > > >>>> an alternate path to the disk rather than just
>> >> > /dev/sdx
>> >> > >> > > like I
>> >> > >> > > >>> > > > mentioned,
>> >> > >> > > >>> > > > >>>> there are by-id paths to the block devices, as
>> well
>> >> > as
>> >> > >> > other
>> >> > >> > > >>> > > > >>>> ones
>> >> > >> > > >>> > > > that will
>> >> > >> > > >>> > > > >>>> be consistent and easier for management, not
>> sure
>> >> how
>> >> > >> > > familiar
>> >> > >> > > >>> > > > >>>> you
>> >> > >> > > >>> > > > are with
>> >> > >> > > >>> > > > >>>> device naming on Linux.
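
For reference, a block-device disk definition of the kind being described looks
roughly like this in libvirt XML (the by-id path and target name are
placeholders):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/disk/by-id/wwn-0x600a0b80005adb0b0000000000000001'/>
      <target dev='vdb' bus='virtio'/>
    </disk>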
>> >> > >> > > >>> > > > >>>>
>> >> > >> > > >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>> >> > >> > > >>> > > > >>>> <sh...@gmail.com>
>> >> > >> > > >>> > > > wrote:
>> >> > >> > > >>> > > > >>>>>
>> >> > >> > > >>> > > > >>>>> No, as that would rely on virtualized
>> >> network/iscsi
>> >> > >> > > initiator
>> >> > >> > > >>> > > inside
>> >> > >> > > >>> > > > >>>>> the vm, which also sucks. I mean attach
>> /dev/sdx
>> >> > (your
>> >> > >> > lun
>> >> > >> > > on
>> >> > >> > > >>> > > > hypervisor) as
>> >> > >> > > >>> > > > >>>>> a disk to the VM, rather than attaching some
>> image
>> >> > >> file
>> >> > >> > > that
>> >> > >> > > >>> > > resides
>> >> > >> > > >>> > > > on a
>> >> > >> > > >>> > > > >>>>> filesystem, mounted on the host, living on a
>> >> target.
>> >> > >> > > >>> > > > >>>>>
>> >> > >> > > >>> > > > >>>>> Actually, if you plan on the storage
>> supporting
>> >> live
>> >> > >> > > migration
>> >> > >> > > >>> > > > >>>>> I
>> >> > >> > > >>> > > > think
>> >> > >> > > >>> > > > >>>>> this is the only way. You can't put a
>> filesystem
>> >> on
>> >> > it
>> >> > >> > and
>> >> > >> > > >>> > > > >>>>> mount
>> >> > >> > > >>> > it
>> >> > >> > > >>> > > > in two
>> >> > >> > > >>> > > > >>>>> places to facilitate migration unless its a
>> >> > clustered
>> >> > >> > > >>> > > > >>>>> filesystem,
>> >> > >> > > >>> > > in
>> >> > >> > > >>> > > > which
>> >> > >> > > >>> > > > >>>>> case you're back to shared mount point.
>> >> > >> > > >>> > > > >>>>>
>> >> > >> > > >>> > > > >>>>> As far as I'm aware, the xenserver SR style is
>> >> > >> basically
>> >> > >> > > LVM
>> >> > >> > > >>> > with a
>> >> > >> > > >>> > > > xen
>> >> > >> > > >>> > > > >>>>> specific cluster management, a custom CLVM.
>> They
>> >> > don't
>> >> > >> > use
>> >> > >> > > a
>> >> > >> > > >>> > > > filesystem
>> >> > >> > > >>> > > > >>>>> either.
>> >> > >> > > >>> > > > >>>>>
>> >> > >> > > >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>> >> > >> > > >>> > > > >>>>> <mi...@solidfire.com> wrote:
>> >> > >> > > >>> > > > >>>>>>
>> >> > >> > > >>> > > > >>>>>> When you say, "wire up the lun directly to
>> the
>> >> vm,"
>> >> > >> do
>> >> > >> > you
>> >> > >> > > >>> > > > >>>>>> mean
>> >> > >> > > >>> > > > >>>>>> circumventing the hypervisor? I didn't think
>> we
>> >> > >> could do
>> >> > >> > > that
>> >> > >> > > >>> > > > >>>>>> in
>> >> > >> > > >>> > > CS.
>> >> > >> > > >>> > > > >>>>>> OpenStack, on the other hand, always
>> circumvents
>> >> > the
>> >> > >> > > >>> > > > >>>>>> hypervisor,
>> >> > >> > > >>> > > as
>> >> > >> > > >>> > > > far as I
>> >> > >> > > >>> > > > >>>>>> know.
>> >> > >> > > >>> > > > >>>>>>
>> >> > >> > > >>> > > > >>>>>>
>> >> > >> > > >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus
>> Sorensen
>> >> <
>> >> > >> > > >>> > > > shadowsor@gmail.com>
>> >> > >> > > >>> > > > >>>>>> wrote:
>> >> > >> > > >>> > > > >>>>>>>
>> >> > >> > > >>> > > > >>>>>>> Better to wire up the lun directly to the vm
>> >> > unless
>> >> > >> > > there is
>> >> > >> > > >>> > > > >>>>>>> a
>> >> > >> > > >>> > > good
>> >> > >> > > >>> > > > >>>>>>> reason not to.
>> >> > >> > > >>> > > > >>>>>>>
>> >> > >> > > >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
>> >> > >> > > >>> > shadowsor@gmail.com>
>> >> > >> > > >>> > > > >>>>>>> wrote:
>> >> > >> > > >>> > > > >>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>> You could do that, but as mentioned I think
>> >> it's a
>> >> > >> > > mistake
>> >> > >> > > >>> > > > >>>>>>>> to
>> >> > >> > > >>> > go
>> >> > >> > > >>> > > to
>> >> > >> > > >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS
>> >> > >> volumes to
>> >> > >> > > luns
>> >> > >> > > >>> > and
>> >> > >> > > >>> > > > then putting
>> >> > >> > > >>> > > > >>>>>>>> a filesystem on it, mounting it, and then
>> >> > putting a
>> >> > >> > > QCOW2
>> >> > >> > > >>> > > > >>>>>>>> or
>> >> > >> > > >>> > > even
>> >> > >> > > >>> > > > RAW disk
>> >> > >> > > >>> > > > >>>>>>>> image on that filesystem. You'll lose a
>> lot of
>> >> > iops
>> >> > >> > > along
>> >> > >> > > >>> > > > >>>>>>>> the
>> >> > >> > > >>> > > > way, and have
>> >> > >> > > >>> > > > >>>>>>>> more overhead with the filesystem and its
>> >> > >> journaling,
>> >> > >> > > etc.
>> >> > >> > > >>> > > > >>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>> >> > >> > > >>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new
>> ground
>> >> > in
>> >> > >> KVM
>> >> > >> > > with
>> >> > >> > > >>> > CS.
>> >> > >> > > >>> > > > >>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>> So, the way people use our SAN with KVM
>> and CS
>> >> > >> today
>> >> > >> > > is by
>> >> > >> > > >>> > > > >>>>>>>>> selecting SharedMountPoint and specifying
>> the
>> >> > >> > location
>> >> > >> > > of
>> >> > >> > > >>> > > > >>>>>>>>> the
>> >> > >> > > >>> > > > share.
>> >> > >> > > >>> > > > >>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>> They can set up their share using Open
>> iSCSI
>> >> by
>> >> > >> > > >>> > > > >>>>>>>>> discovering
>> >> > >> > > >>> > > their
>> >> > >> > > >>> > > > >>>>>>>>> iSCSI target, logging in to it, then
>> mounting
>> >> it
>> >> > >> > > somewhere
>> >> > >> > > >>> > > > >>>>>>>>> on
>> >> > >> > > >>> > > > their file
>> >> > >> > > >>> > > > >>>>>>>>> system.
>> >> > >> > > >>> > > > >>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>> Would it make sense for me to just do that
>> >> > >> discovery,
>> >> > >> > > >>> > > > >>>>>>>>> logging
>> >> > >> > > >>> > > in,
>> >> > >> > > >>> > > > >>>>>>>>> and mounting behind the scenes for them
>> and
>> >> > >> letting
>> >> > >> > the
>> >> > >> > > >>> > current
>> >> > >> > > >>> > > > code manage
>> >> > >> > > >>> > > > >>>>>>>>> the rest as it currently does?
>> >> > >> > > >>> > > > >>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus
>> >> Sorensen
>> >> > >> > > >>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit
>> >> different. I
>> >> > >> need
>> >> > >> > > to
>> >> > >> > > >>> > catch
>> >> > >> > > >>> > > up
>> >> > >> > > >>> > > > >>>>>>>>>> on the work done in KVM, but this is
>> >> basically
>> >> > >> just
>> >> > >> > > disk
>> >> > >> > > >>> > > > snapshots + memory
>> >> > >> > > >>> > > > >>>>>>>>>> dump. I still think disk snapshots would
>> >> > >> preferably
>> >> > >> > be
>> >> > >> > > >>> > handled
>> >> > >> > > >>> > > > by the SAN,
>> >> > >> > > >>> > > > >>>>>>>>>> and then memory dumps can go to secondary
>> >> > >> storage or
>> >> > >> > > >>> > something
>> >> > >> > > >>> > > > else. This is
>> >> > >> > > >>> > > > >>>>>>>>>> relatively new ground with CS and KVM,
>> so we
>> >> > will
>> >> > >> > > want to
>> >> > >> > > >>> > see
>> >> > >> > > >>> > > > how others are
>> >> > >> > > >>> > > > >>>>>>>>>> planning theirs.
>> >> > >> > > >>> > > > >>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus
>> Sorensen" <
>> >> > >> > > >>> > > shadowsor@gmail.com
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > >>>>>>>>>> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>> Let me back up and say I don't think
>> you'd
>> >> > use a
>> >> > >> > vdi
>> >> > >> > > >>> > > > >>>>>>>>>>> style
>> >> > >> > > >>> > on
>> >> > >> > > >>> > > > an
>> >> > >> > > >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat
>> it
>> >> as a
>> >> > >> RAW
>> >> > >> > > >>> > > > >>>>>>>>>>> format.
>> >> > >> > > >>> > > > Otherwise you're
>> >> > >> > > >>> > > > >>>>>>>>>>> putting a filesystem on your lun,
>> mounting
>> >> it,
>> >> > >> > > creating
>> >> > >> > > >>> > > > >>>>>>>>>>> a
>> >> > >> > > >>> > > > QCOW2 disk image,
>> >> > >> > > >>> > > > >>>>>>>>>>> and that seems unnecessary and a
>> performance
>> >> > >> > killer.
>> >> > >> > > >>> > > > >>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun
>> as a
>> >> > >> disk
>> >> > >> > to
>> >> > >> > > the
>> >> > >> > > >>> > VM,
>> >> > >> > > >>> > > > and
>> >> > >> > > >>> > > > >>>>>>>>>>> handling snapshots on the San side via
>> the
>> >> > >> storage
>> >> > >> > > >>> > > > >>>>>>>>>>> plugin
>> >> > >> > > >>> > is
>> >> > >> > > >>> > > > best. My
>> >> > >> > > >>> > > > >>>>>>>>>>> impression from the storage plugin
>> refactor
>> >> > was
>> >> > >> > that
>> >> > >> > > >>> > > > >>>>>>>>>>> there
>> >> > >> > > >>> > > was
>> >> > >> > > >>> > > > a snapshot
>> >> > >> > > >>> > > > >>>>>>>>>>> service that would allow the San to
>> handle
>> >> > >> > snapshots.
>> >> > >> > > >>> > > > >>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus
>> Sorensen" <
>> >> > >> > > >>> > > > shadowsor@gmail.com>
>> >> > >> > > >>> > > > >>>>>>>>>>> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be
>> handled by
>> >> > the
>> >> > >> SAN
>> >> > >> > > back
>> >> > >> > > >>> > end,
>> >> > >> > > >>> > > > if
>> >> > >> > > >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack
>> mgmt
>> >> > server
>> >> > >> > > could
>> >> > >> > > >>> > > > >>>>>>>>>>>> call
>> >> > >> > > >>> > > > your plugin for
>> >> > >> > > >>> > > > >>>>>>>>>>>> volume snapshot and it would be
>> hypervisor
>> >> > >> > > agnostic. As
>> >> > >> > > >>> > far
>> >> > >> > > >>> > > > as space, that
>> >> > >> > > >>> > > > >>>>>>>>>>>> would depend on how your SAN handles
>> it.
>> >> With
>> >> > >> > ours,
>> >> > >> > > we
>> >> > >> > > >>> > carve
>> >> > >> > > >>> > > > out luns from a
>> >> > >> > > >>> > > > >>>>>>>>>>>> pool, and the snapshot space comes
>> from the
>> >> > >> pool
>> >> > >> > > and is
>> >> > >> > > >>> > > > independent of the
>> >> > >> > > >>> > > > >>>>>>>>>>>> LUN size the host sees.
>> >> > >> > > >>> > > > >>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike
>> Tutkowski"
>> >> > >> > > >>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> Hey Marcus,
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool
>> type
>> >> for
>> >> > >> > libvirt
>> >> > >> > > >>> > > > >>>>>>>>>>>>> won't
>> >> > >> > > >>> > > > work
>> >> > >> > > >>> > > > >>>>>>>>>>>>> when you take into consideration
>> >> hypervisor
>> >> > >> > > snapshots?
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> On XenServer, when you take a
>> hypervisor
>> >> > >> > snapshot,
>> >> > >> > > the
>> >> > >> > > >>> > VDI
>> >> > >> > > >>> > > > for
>> >> > >> > > >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same
>> storage
>> >> > >> > > repository
>> >> > >> > > >>> > > > >>>>>>>>>>>>> as
>> >> > >> > > >>> > > the
>> >> > >> > > >>> > > > volume is on.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> So, what would happen in my case
>> (let's
>> >> say
>> >> > >> for
>> >> > >> > > >>> > > > >>>>>>>>>>>>> XenServer
>> >> > >> > > >>> > > and
>> >> > >> > > >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support
>> >> > >> hypervisor
>> >> > >> > > >>> > snapshots
>> >> > >> > > >>> > > > in 4.2) is I'd
>> >> > >> > > >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger
>> than
>> >> > what
>> >> > >> the
>> >> > >> > > user
>> >> > >> > > >>> > > > requested for the
>> >> > >> > > >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine
>> because
>> >> our
>> >> > >> SAN
>> >> > >> > > >>> > > > >>>>>>>>>>>>> thinly
>> >> > >> > > >>> > > > provisions volumes,
>> >> > >> > > >>> > > > >>>>>>>>>>>>> so the space is not actually used
>> unless
>> >> it
>> >> > >> needs
>> >> > >> > > to
>> >> > >> > > >>> > > > >>>>>>>>>>>>> be).
>> >> > >> > > >>> > > > The CloudStack
>> >> > >> > > >>> > > > >>>>>>>>>>>>> volume would be the only "object" on
>> the
>> >> SAN
>> >> > >> > volume
>> >> > >> > > >>> > until a
>> >> > >> > > >>> > > > hypervisor
>> >> > >> > > >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would
>> >> also
>> >> > >> > reside
>> >> > >> > > on
>> >> > >> > > >>> > > > >>>>>>>>>>>>> the
>> >> > >> > > >>> > > > SAN volume.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and
>> there
>> >> is
>> >> > >> no
>> >> > >> > > >>> > > > >>>>>>>>>>>>> creation
>> >> > >> > > >>> > of
>> >> > >> > > >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from
>> libvirt
>> >> > >> (which,
>> >> > >> > > even
>> >> > >> > > >>> > > > >>>>>>>>>>>>> if
>> >> > >> > > >>> > > > there were support
>> >> > >> > > >>> > > > >>>>>>>>>>>>> for this, our SAN currently only
>> allows
>> >> one
>> >> > >> LUN
>> >> > >> > per
>> >> > >> > > >>> > > > >>>>>>>>>>>>> iSCSI
>> >> > >> > > >>> > > > target), then I
>> >> > >> > > >>> > > > >>>>>>>>>>>>> don't see how using this model will
>> work.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the
>> >> > current
>> >> > >> way
>> >> > >> > > this
>> >> > >> > > >>> > > works
>> >> > >> > > >>> > > > >>>>>>>>>>>>> with DIR?
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> What do you think?
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> Thanks
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike
>> >> > >> Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> That appears to be the way it's used
>> for
>> >> > >> iSCSI
>> >> > >> > > access
>> >> > >> > > >>> > > today.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too,
>> >> but I
>> >> > >> > might
>> >> > >> > > as
>> >> > >> > > >>> > well
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI
>> >> > instead.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM,
>> Marcus
>> >> > >> Sorensen
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> To your question about
>> >> SharedMountPoint, I
>> >> > >> > > believe
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> it
>> >> > >> > > >>> > > just
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> acts like a
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something
>> similar
>> >> to
>> >> > >> > that.
>> >> > >> > > The
>> >> > >> > > >>> > > > end-user
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> is
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> responsible for mounting a file
>> system
>> >> > that
>> >> > >> all
>> >> > >> > > KVM
>> >> > >> > > >>> > hosts
>> >> > >> > > >>> > > > can
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> access,
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what
>> is
>> >> > >> > providing
>> >> > >> > > the
>> >> > >> > > >>> > > > storage.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> It could
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other
>> >> clustered
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> filesystem,
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> cloudstack just
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> knows that the provided directory
>> path
>> >> has
>> >> > >> VM
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> images.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM,
>> Marcus
>> >> > >> > Sorensen
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and
>> >> iSCSI
>> >> > >> all
>> >> > >> > at
>> >> > >> > > the
>> >> > >> > > >>> > same
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> > time.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM,
>> Mike
>> >> > >> > Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com>
>> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple
>> >> > storage
>> >> > >> > > pools:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> default              active     yes
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM,
>> Mike
>> >> > >> > > Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com>
>> >> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you
>> pointed
>> >> > >> out.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI
>> (libvirt)
>> >> > >> storage
>> >> > >> > > pool
>> >> > >> > > >>> > based
>> >> > >> > > >>> > > on
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target
>> would
>> >> > only
>> >> > >> > have
>> >> > >> > > one
>> >> > >> > > >>> > LUN,
>> >> > >> > > >>> > > > so
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> there would only
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage
>> >> volume
>> >> > in
>> >> > >> > the
>> >> > >> > > >>> > > (libvirt)
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pool.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates
>> and
>> >> > >> destroys
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> iSCSI
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a
>> >> problem
>> >> > >> that
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> libvirt
>> >> > >> > > >>> > > does
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> not support
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI
>> >> targets/LUNs.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test
>> this a
>> >> > bit
>> >> > >> to
>> >> > >> > > see
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> if
>> >> > >> > > >>> > > > libvirt
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> supports
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools
>> (as you
>> >> > >> > > mentioned,
>> >> > >> > > >>> > since
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> each one of its
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one
>> of my
>> >> > >> iSCSI
>> >> > >> > > >>> > > > targets/LUNs).
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM,
>> >> Mike
>> >> > >> > > Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com>
>> >> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this
>> >> type:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"),
>> >> > NETFS("netfs"),
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         poolType(String
>> poolType) {
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>             _poolType =
>> poolType;
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         }
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         @Override
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         public String
>> toString() {
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         }
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>     }
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI
>> type
>> >> > is
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> currently
>> >> > >> > > >>> > > being
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> understanding more what you
>> were
>> >> > >> getting
>> >> > >> > at.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say,
>> >> 4.2),
>> >> > >> when
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> someone
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> selects the
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and
>> uses it
>> >> > >> with
>> >> > >> > > iSCSI,
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> is
>> >> > >> > > >>> > > > that
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Thanks!
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50
>> PM,
>> >> > Marcus
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Sorensen
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >> > >> > > >>> > > http://libvirt.org/storage.html#StorageBackendISCSI
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be
>> pre-allocated on
>> >> > the
>> >> > >> > iSCSI
>> >> > >> > > >>> > server,
>> >> > >> > > >>> > > > and
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> cannot be
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt
>> APIs.",
>> >> > which
>> >> > >> I
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> believe
>> >> > >> > > >>> > > your
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the
>> >> work
>> >> > of
>> >> > >> > > logging
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in
>> >> > >> > > >>> > > and
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api
>> does
>> >> > >> that
>> >> > >> > > work
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in
>> >> > >> > > >>> > the
>> >> > >> > > >>> > > > Xen
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> stuff).
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is
>> whether
>> >> > >> this
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> provides
>> >> > >> > > >>> > a
>> >> > >> > > >>> > > > 1:1
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to
>> register 1
>> >> > iscsi
>> >> > >> > > device
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> as
>> >> > >> > > >>> > a
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or
>> read
>> >> up a
>> >> > >> bit
>> >> > >> > > more
>> >> > >> > > >>> > about
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just
>> have
>> >> to
>> >> > >> write
>> >> > >> > > your
>> >> > >> > > >>> > own
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>> >> > >> > > >>> > >  We
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can cross that
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with
>> >> libvirt,
>> >> > >> see
>> >> > >> > the
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >> > >> > > http://libvirt.org/sources/java/javadoc/ Normally,
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> connection object be made,
>> then
>> >> > calls
>> >> > >> > made
>> >> > >> > > to
>> >> > >> > > >>> > that
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can look at the
>> >> > LibvirtStorageAdaptor
>> >> > >> to
>> >> > >> > > see
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> how
>> >> > >> > > >>> > > that
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> is done for
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe
>> write
>> >> > some
>> >> > >> > test
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java
>> >> > >> > > >>> > > code
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to see if you
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and
>> >> > >> register
>> >> > >> > > iscsi
>> >> > >> > > >>> > > storage
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pools before you
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> get started.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31
>> PM,
>> >> > Mike
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> <mike.tutkowski@solidfire.com
>> >
>> >> > wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to
>> >> investigate
>> >> > >> > libvirt
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > more,
>> >> > >> > > >>> > > but
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > supports
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting
>> from
>> >> > >> iSCSI
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > targets,
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > right?
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at
>> 5:29 PM,
>> >> > >> Mike
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > <
>> mike.tutkowski@solidfire.com>
>> >> > >> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking
>> through
>> >> > >> some of
>> >> > >> > > the
>> >> > >> > > >>> > > classes
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> last
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at
>> 5:26
>> >> PM,
>> >> > >> > Marcus
>> >> > >> > > >>> > Sorensen
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you
>> will
>> >> > >> need
>> >> > >> > the
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be
>> >> > >> standard
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
>> >> > >> > > >>> > > for
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor
>> to do
>> >> > the
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
>> >> > >> > > >>> > > > login.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>> >> > >> > > >>> > and
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your
>> need.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM,
>> "Mike
>> >> > >> > > Tutkowski"
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> <
>> mike.tutkowski@solidfire.com
>> >> >
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember,
>> during
>> >> the
>> >> > >> 4.2
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
>> >> > >> > > >>> > I
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for
>> >> > CloudStack.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked
>> by
>> >> the
>> >> > >> > > storage
>> >> > >> > > >>> > > > framework
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could
>> dynamically
>> >> > >> create
>> >> > >> > and
>> >> > >> > > >>> > delete
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I
>> can
>> >> > >> > establish a
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
>> >> > >> > > >>> > > > mapping
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire
>> volume
>> >> > for
>> >> > >> > QoS.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack
>> >> always
>> >> > >> > > expected
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> >> > >> > > >>> > > > admin
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and
>> >> those
>> >> > >> > > volumes
>> >> > >> > > >>> > would
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not
>> QoS
>> >> > >> > > friendly).
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping
>> >> scheme
>> >> > >> > work,
>> >> > >> > > I
>> >> > >> > > >>> > needed
>> >> > >> > > >>> > > > to
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware
>> plug-ins
>> >> > so
>> >> > >> > they
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores
>> as
>> >> > >> needed.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make
>> this
>> >> > >> happen
>> >> > >> > > with
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed
>> with
>> >> how
>> >> > >> this
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
>> >> > >> > > >>> > > work
>> >> > >> > > >>> > > > on
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar
>> with KVM
>> >> > >> know
>> >> > >> > > how I
>> >> > >> > > >>> > will
>> >> > >> > > >>> > > > need
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For
>> example,
>> >> > will I
>> >> > >> > > have to
>> >> > >> > > >>> > > expect
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM
>> host and
>> >> > >> use it
>> >> > >> > > for
>> >> > >> > > >>> > this
>> >> > >> > > >>> > > to
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any
>> suggestions,
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack
>> Developer,
>> >> > >> > SolidFire
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> e:
>> >> > mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the
>> world
>> >> > uses
>> >> > >> the
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> --
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack
>> Developer,
>> >> > >> SolidFire
>> >> > >> > > Inc.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> e:
>> >> mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world
>> >> uses
>> >> > >> the
>> >> > >> > > cloud™
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > --
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer,
>> >> > >> SolidFire
>> >> > >> > > Inc.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > e:
>> mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world
>> uses
>> >> > the
>> >> > >> > > cloud™
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> --
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer,
>> >> > SolidFire
>> >> > >> > Inc.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> e:
>> mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world
>> uses
>> >> the
>> >> > >> > cloud™
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> --
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer,
>> >> SolidFire
>> >> > >> Inc.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world
>> uses the
>> >> > >> cloud™
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> --
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer,
>> >> SolidFire
>> >> > >> Inc.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses
>> the
>> >> > >> cloud™
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> --
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> Mike Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> Senior CloudStack Developer,
>> SolidFire
>> >> Inc.
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> o: 303.746.7302
>> >> > >> > > >>> > > > >>>>>>>>>>>>>> Advancing the way the world uses the
>> >> cloud™
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>>>>> --
>> >> > >> > > >>> > > > >>>>>>>>>>>>> Mike Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire
>> >> Inc.
>> >> > >> > > >>> > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>>>>>>>>>>>> o: 303.746.7302
>> >> > >> > > >>> > > > >>>>>>>>>>>>> Advancing the way the world uses the
>> >> cloud™
>> >> > >> > > >>> > > > >>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>>
>> >> > >> > > >>> > > > >>>>>>>>> --
>> >> > >> > > >>> > > > >>>>>>>>> Mike Tutkowski
>> >> > >> > > >>> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire
>> Inc.
>> >> > >> > > >>> > > > >>>>>>>>> e: mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>>>>>>>> o: 303.746.7302
>> >> > >> > > >>> > > > >>>>>>>>> Advancing the way the world uses the
>> cloud™
>> >> > >> > > >>> > > > >>>>>>
>> >> > >> > > >>> > > > >>>>>>
>> >> > >> > > >>> > > > >>>>>>
>> >> > >> > > >>> > > > >>>>>>
>> >> > >> > > >>> > > > >>>>>> --
>> >> > >> > > >>> > > > >>>>>> Mike Tutkowski
>> >> > >> > > >>> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
>> >> > >> > > >>> > > > >>>>>> e: mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>>>>> o: 303.746.7302
>> >> > >> > > >>> > > > >>>>>> Advancing the way the world uses the cloud™
>> >> > >> > > >>> > > > >>>
>> >> > >> > > >>> > > > >>>
>> >> > >> > > >>> > > > >>>
>> >> > >> > > >>> > > > >>>
>> >> > >> > > >>> > > > >>> --
>> >> > >> > > >>> > > > >>> Mike Tutkowski
>> >> > >> > > >>> > > > >>> Senior CloudStack Developer, SolidFire Inc.
>> >> > >> > > >>> > > > >>> e: mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > >>> o: 303.746.7302
>> >> > >> > > >>> > > > >>> Advancing the way the world uses the cloud™
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > >
>> >> > >> > > >>> > > > > --
>> >> > >> > > >>> > > > > Mike Tutkowski
>> >> > >> > > >>> > > > > Senior CloudStack Developer, SolidFire Inc.
>> >> > >> > > >>> > > > > e: mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > > > o: 303.746.7302
>> >> > >> > > >>> > > > > Advancing the way the world uses the cloud™
>> >> > >> > > >>> > > >
>> >> > >> > > >>> > >
>> >> > >> > > >>> > >
>> >> > >> > > >>> > >
>> >> > >> > > >>> > > --
>> >> > >> > > >>> > > *Mike Tutkowski*
>> >> > >> > > >>> > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > >> > > >>> > > e: mike.tutkowski@solidfire.com
>> >> > >> > > >>> > > o: 303.746.7302
>> >> > >> > > >>> > > Advancing the way the world uses the
>> >> > >> > > >>> > > cloud<
>> http://solidfire.com/solution/overview/?video=play>
>> >> > >> > > >>> > > *™*
>> >> > >> > > >>> > >
>> >> > >> > > >>> >
>> >> > >> > > >>>
>> >> > >> > > >>>
>> >> > >> > > >>>
>> >> > >> > > >>> --
>> >> > >> > > >>> *Mike Tutkowski*
>> >> > >> > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>> >> > >> > > >>> e: mike.tutkowski@solidfire.com
>> >> > >> > > >>> o: 303.746.7302
>> >> > >> > > >>> Advancing the way the world uses the
>> >> > >> > > >>> cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > >> > > >>> *™*
>> >> > >> > >
>> >> > >> >
>> >> > >> >
>> >> > >> >
>> >> > >> > --
>> >> > >> > *Mike Tutkowski*
>> >> > >> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > >> > e: mike.tutkowski@solidfire.com
>> >> > >> > o: 303.746.7302
>> >> > >> > Advancing the way the world uses the
>> >> > >> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > >> > *™*
>> >> > >> >
>> >> > >>
>> >> > >
>> >> > >
>> >> > >
>> >> > > --
>> >> > > *Mike Tutkowski*
>> >> > > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > > e: mike.tutkowski@solidfire.com
>> >> > > o: 303.746.7302
>> >> > > Advancing the way the world uses the cloud<
>> >> > http://solidfire.com/solution/overview/?video=play>
>> >> > > *™*
>> >> > >
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > *Mike Tutkowski*
>> >> > *Senior CloudStack Developer, SolidFire Inc.*
>> >> > e: mike.tutkowski@solidfire.com
>> >> > o: 303.746.7302
>> >> > Advancing the way the world uses the
>> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>> >> > *™*
>> >> >
>> >>
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Sure, sounds good.

Right now there are only two storage plug-ins: Edison's default plug-in and
the SolidFire plug-in.

As an example, when createAsync is called in the plug-in, mine creates a
new volume (LUN) on the SAN with the requested capacity and Min, Max, and
Burst IOPS. Edison's sends a command to the hypervisor to take a chunk out
of preallocated storage for a new volume (like creating a new VDI in an
existing SR).

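To make that concrete, here is a rough sketch of the shape of that createAsync
path. Everything below (SanApiClient, the method signature, and so on) is a
placeholder I'm making up for illustration; it is not the real storage
framework API or the actual plug-in code:

    // Placeholder for whatever client actually talks to the SAN's API.
    interface SanApiClient {
        // Returns the IQN of the newly created LUN.
        String createVolume(String name, long sizeInBytes,
                            long minIops, long maxIops, long burstIops);
    }

    class CreateVolumeSketch {
        private final SanApiClient san;

        CreateVolumeSketch(SanApiClient san) {
            this.san = san;
        }

        // Managed storage: one dedicated LUN per CloudStack volume, with the
        // QoS values taken from the disk offering. The default plug-in would
        // instead ask the hypervisor to carve space out of an existing
        // SR/datastore/pool.
        String createAsync(String volumeUuid, long sizeInBytes,
                           long minIops, long maxIops, long burstIops) {
            String iqn = san.createVolume(volumeUuid, sizeInBytes,
                                          minIops, maxIops, burstIops);
            // The real plug-in would persist the IQN on the volume record and
            // complete the framework's async callback here.
            return iqn;
        }
    }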

On Wed, Sep 18, 2013 at 10:49 AM, Marcus Sorensen <sh...@gmail.com> wrote:

> That wasn't my question, but I feel we're getting off in the weeds and
> I can just look at the storage framework to see how it works and what
> options it supports.
>
> On Wed, Sep 18, 2013 at 10:44 AM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > At the time being, I am not aware of any other storage vendor with truly
> > guaranteed QoS.
> >
> > Most implement QoS in a relative sense (like thread priorities).
> >
> >
> > On Wed, Sep 18, 2013 at 7:57 AM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> >
> >> Yeah, that's why I thought it was specific to your implementation.
> >> Perhaps that's true, then?
> >> On Sep 18, 2013 12:04 AM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:
> >>
> >> > I totally get where you're coming from with the tiered-pool approach,
> >> > though.
> >> >
> >> > Prior to SolidFire, I worked at HP and the product I worked on allowed a
> >> > single, clustered SAN to host multiple pools of storage. One pool might
> >> > be made up of all-SSD storage nodes while another pool might be made up
> >> > of slower HDDs.
> >> >
> >> > That kind of tiering is not what SolidFire QoS is about, though, as that
> >> > kind of tiering does not guarantee QoS.
> >> >
> >> > In the SolidFire SAN, QoS was designed in from the beginning and is
> >> > extremely granular. Each volume has its own performance and capacity.
> >> > You do not have to worry about Noisy Neighbors.
> >> >
> >> > The idea is to encourage businesses to trust the cloud with their most
> >> > critical business applications at a price point on par with traditional
> >> > SANs.
> >> >
> >> >
> >> > On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski <
> >> > mike.tutkowski@solidfire.com> wrote:
> >> >
> >> > > Ah, I think I see the miscommunication.
> >> > >
> >> > > I should have gone into a bit more detail about the SolidFire SAN.
> >> > >
> >> > > It is built from the ground up to support QoS on a LUN-by-LUN basis.
> >> > > Every LUN is assigned a Min, Max, and Burst number of IOPS.
> >> > >
> >> > > The Min IOPS are a guaranteed number (as long as the SAN itself is not
> >> > > over provisioned). Capacity and IOPS are provisioned independently.
> >> > > Multiple volumes and multiple tenants using the same SAN do not suffer
> >> > > from the Noisy Neighbor effect.
> >> > >
> >> > > When you create a Disk Offering in CS that is storage tagged to use
> >> > > SolidFire primary storage, you specify a Min, Max, and Burst number of
> >> > > IOPS to provision from the SAN for volumes created from that Disk
> >> > > Offering.
> >> > >
> >> > > There is no notion of RAID groups that you see in more traditional
> >> > > SANs. The SAN is built from clusters of storage nodes and data is
> >> > > replicated amongst all SSDs in all storage nodes (this is an SSD-only
> >> > > SAN) in the cluster to avoid hot spots and protect the data should
> >> > > drives and/or nodes fail. You then scale the SAN by adding new storage
> >> > > nodes.
> >> > >
> >> > > Data is compressed and de-duplicated inline across the cluster and all
> >> > > volumes are thinly provisioned.
> >> > >
> >> > >
> >> > > On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> >> > >
> >> > >> I'm surprised there's no mention of pool on the SAN in your
> >> > >> description of the framework. I had assumed this was specific to your
> >> > >> implementation, because normally SANs host multiple disk pools, maybe
> >> > >> multiple RAID 50s and 10s, or however the SAN admin wants to split it
> >> > >> up. Maybe a pool intended for root disks and a separate one for data
> >> > >> disks. Or one pool for cloudstack and one dedicated to some other
> >> > >> internal db application. But it sounds as though there's no place to
> >> > >> specify which disks or pool on the SAN to use.
> >> > >>
> >> > >> We implemented our own internal storage SAN plugin based on 4.1. We
> >> > >> used the 'path' attribute of the primary storage pool object to
> >> > >> specify which pool name on the back-end SAN to use, so we could create
> >> > >> all-SSD pools and slower spindle pools, then differentiate between
> >> > >> them based on storage tags. Normally the path attribute would be the
> >> > >> mount point for NFS, but it's just a string. So when registering ours
> >> > >> we enter the SAN DNS host name, the SAN's REST API port, and the pool
> >> > >> name. Then LUNs created from that primary storage come from the
> >> > >> matching disk pool on the SAN. We can create and register multiple
> >> > >> pools of different types and purposes on the same SAN. We haven't yet
> >> > >> gotten to porting it to the 4.2 framework, so it will be interesting
> >> > >> to see what we can come up with to make it work similarly.
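(Just to illustrate the kind of 'path' trick Marcus describes: the string could
encode the SAN endpoint and pool name, something like the sketch below. The
format and class are purely hypothetical, not anything that exists in
CloudStack or in our plug-ins:)

    // Hypothetical "host:port:pool" encoding of SAN details in the primary
    // storage 'path' field, which is otherwise just an opaque string.
    final class SanPoolPath {
        final String host;
        final int port;
        final String poolName;

        private SanPoolPath(String host, int port, String poolName) {
            this.host = host;
            this.port = port;
            this.poolName = poolName;
        }

        // e.g. parse("san01.example.com:8443:ssd-tier")
        static SanPoolPath parse(String path) {
            String[] parts = path.split(":");
            if (parts.length != 3) {
                throw new IllegalArgumentException("expected host:port:pool, got " + path);
            }
            return new SanPoolPath(parts[0], Integer.parseInt(parts[1]), parts[2]);
        }
    }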
> >> > >> On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:
> >> > >>
> >> > >> > What you're saying here is definitely something we should talk
> >> > >> > about.
> >> > >> >
> >> > >> > Hopefully my previous e-mail has clarified how this works a bit.
> >> > >> >
> >> > >> > It mainly comes down to this:
> >> > >> >
> >> > >> > For the first time in CS history, primary storage is no longer
> >> > >> > required to be preallocated by the admin and then handed to CS. CS
> >> > >> > volumes don't have to share a preallocated volume anymore.
> >> > >> >
> >> > >> > As of 4.2, primary storage can be based on a SAN (or some other
> >> > >> > storage device). You can tell CS how many bytes and IOPS to use from
> >> > >> > this storage device and CS invokes the appropriate plug-in to carve
> >> > >> > out LUNs dynamically.
> >> > >> >
> >> > >> > Each LUN is home to one and only one data disk. Data disks - in this
> >> > >> > model - never share a LUN.
> >> > >> >
> >> > >> > The main use case for this is so a CS volume can deliver guaranteed
> >> > >> > IOPS if the storage device (ex. SolidFire SAN) delivers guaranteed
> >> > >> > IOPS on a LUN-by-LUN basis.
> >> > >> >
> >> > >> >
> >> > >> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> >> > >> >
> >> > >> > > I guess whether or not a SolidFire device is capable of hosting
> >> > >> > > multiple disk pools is irrelevant, we'd hope that we could get the
> >> > >> > > stats (maybe 30TB available, and 15TB allocated in LUNs). But if
> >> > >> > > these stats aren't collected, I can't as an admin define multiple
> >> > >> > > pools and expect cloudstack to allocate evenly from them or fill
> >> > >> > > one up and move to the next, because it doesn't know how big it is.
> >> > >> > >
> >> > >> > > Ultimately this discussion has nothing to do with the KVM stuff
> >> > >> > > itself, just a tangent, but something to think about.
> >> > >> > >
> >> > >> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> >> > >> > > > Ok, on most storage pools it shows how many GB free/used when
> >> > >> > > > listing the pool both via API and in the UI. I'm guessing those
> >> > >> > > > are empty then for the SolidFire storage, but it seems like the
> >> > >> > > > user should have to define some sort of pool that the LUNs get
> >> > >> > > > carved out of, and you should be able to get the stats for that,
> >> > >> > > > right? Or is a SolidFire appliance only one pool per appliance?
> >> > >> > > > This isn't about billing, but just so cloudstack itself knows
> >> > >> > > > whether or not there is space left on the storage device, so
> >> > >> > > > cloudstack can go on allocating from a different primary storage
> >> > >> > > > as this one fills up. There are also notifications and things.
> >> > >> > > > It seems like there should be a call you can handle for this,
> >> > >> > > > maybe Edison knows.
> >> > >> > > >
> >> > >> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> >> > >> > > >> You respond to more than attach and detach, right? Don't you
> >> > >> > > >> create LUNs as well? Or are you just referring to the hypervisor
> >> > >> > > >> stuff?
> >> > >> > > >>
> >> > >> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:
> >> > >> > > >>>
> >> > >> > > >>> Hi Marcus,
> >> > >> > > >>>
> >> > >> > > >>> I never need to respond to a CreateStoragePool call for either
> >> > >> > > >>> XenServer or VMware.
> >> > >> > > >>>
> >> > >> > > >>> What happens is I respond only to the Attach- and Detach-volume
> >> > >> > > >>> commands.
> >> > >> > > >>>
> >> > >> > > >>> Let's say an attach comes in:
> >> > >> > > >>>
> >> > >> > > >>> In this case, I check to see if the storage is "managed."
> >> > >> > > >>> Talking XenServer here, if it is, I log in to the LUN that is
> >> > >> > > >>> the disk we want to attach. After, if this is the first time
> >> > >> > > >>> attaching this disk, I create an SR and a VDI within the SR. If
> >> > >> > > >>> it is not the first time attaching this disk, the LUN already
> >> > >> > > >>> has the SR and VDI on it.
> >> > >> > > >>>
> >> > >> > > >>> Once this is done, I let the normal "attach" logic run because
> >> > >> > > >>> this logic expected an SR and a VDI and now it has it.
> >> > >> > > >>>
> >> > >> > > >>> It's the same thing for VMware: Just substitute datastore for
> >> > >> > > >>> SR and VMDK for VDI.
> >> > >> > > >>>
> >> > >> > > >>> Does that make sense?
> >> > >> > > >>>
> >> > >> > > >>> Thanks!
> >> > >> > > >>>
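(The managed-storage attach path described above boils down to roughly this
sketch; every helper name here is made up for illustration, and the real logic
lives in the hypervisor resource classes:)

    // Sketch of the "managed" branch of an attach for XenServer-style storage.
    class ManagedAttachSketch {
        void attachManagedVolume(String targetIqn, long volumeSizeBytes) {
            String scsiId = loginToIscsiTarget(targetIqn); // discover + log in to the LUN
            if (!srExistsFor(scsiId)) {
                // First attach of this disk: one SR per LUN, holding exactly one VDI.
                String srUuid = createSr(scsiId);
                createVdi(srUuid, volumeSizeBytes);
            }
            // From here the normal attach logic runs unmodified, because the SR
            // and VDI it expects are now in place.
        }

        // Stubs standing in for hypervisor/SAN calls.
        private String loginToIscsiTarget(String iqn) { return "scsi-" + iqn; }
        private boolean srExistsFor(String scsiId) { return false; }
        private String createSr(String scsiId) { return "sr-uuid"; }
        private void createVdi(String srUuid, long sizeBytes) { }
    }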
> >> > >> > > >>>
> >> > >> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen <sh...@gmail.com> wrote:
> >> > >> > > >>>
> >> > >> > > >>> > What do you do with Xen? I imagine the user enters the SAN
> >> > >> > > >>> > details when registering the pool? And the pool details are
> >> > >> > > >>> > basically just instructions on how to log into a target,
> >> > >> > > >>> > correct?
> >> > >> > > >>> >
> >> > >> > > >>> > You can choose to log in a KVM host to the target during
> >> > >> > > >>> > createStoragePool and save the pool in a map, or just save
> >> > >> > > >>> > the pool info in a map for future reference by uuid, for when
> >> > >> > > >>> > you do need to log in. The createStoragePool then just
> >> > >> > > >>> > becomes a way to save the pool info to the agent. Personally,
> >> > >> > > >>> > I'd log in on the pool create and look/scan for specific LUNs
> >> > >> > > >>> > when they're needed, but I haven't thought it through
> >> > >> > > >>> > thoroughly. I just say that mainly because login only happens
> >> > >> > > >>> > once, the first time the pool is used, and every other
> >> > >> > > >>> > storage command is about discovering new LUNs or maybe
> >> > >> > > >>> > deleting/disconnecting LUNs no longer needed. On the other
> >> > >> > > >>> > hand, you could do all of the above: log in on pool create,
> >> > >> > > >>> > then also check if you're logged in on other commands and log
> >> > >> > > >>> > in if you've lost connection.
> >> > >> > > >>> >
> >> > >> > > >>> > With Xen, what does your registered pool show in the UI for
> >> > >> > > >>> > avail/used capacity, and how does it get that info? I assume
> >> > >> > > >>> > there is some sort of disk pool that the LUNs are carved
> >> > >> > > >>> > from, and that your plugin is called to talk to the SAN and
> >> > >> > > >>> > expose to the user how much of that pool has been allocated.
> >> > >> > > >>> > Knowing how you already solve these problems with Xen will
> >> > >> > > >>> > help figure out what to do with KVM.
> >> > >> > > >>> >
> >> > >> > > >>> > If this is the case, I think the plugin can continue to
> >> > >> > > >>> > handle it rather than getting details from the agent. I'm not
> >> > >> > > >>> > sure if that means nulls are OK for these on the agent side
> >> > >> > > >>> > or what, I need to look at the storage plugin arch more
> >> > >> > > >>> > closely.
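(A rough sketch of the "save the pool info and log in" idea for a KVM-side
adaptor, using iscsiadm through ProcessBuilder. The class and method shapes are
placeholders I'm inventing for illustration, not the real agent classes:)

    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal sketch of an adaptor that remembers pools it has seen and runs
    // iSCSI discovery against the SAN when a pool is first created.
    class IscsiAdaptorSketch {
        static class KvmSanPool {
            final String uuid, host;
            final int port;
            KvmSanPool(String uuid, String host, int port) {
                this.uuid = uuid; this.host = host; this.port = port;
            }
        }

        private final Map<String, KvmSanPool> pools = new ConcurrentHashMap<>();

        // createStoragePool may be called repeatedly just to verify the pool,
        // so it has to be idempotent.
        KvmSanPool createStoragePool(String uuid, String host, int port)
                throws IOException, InterruptedException {
            KvmSanPool pool = pools.get(uuid);
            if (pool == null) {
                run("iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", host + ":" + port);
                pool = new KvmSanPool(uuid, host, port);
                pools.put(uuid, pool);
            }
            return pool;
        }

        private void run(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + String.join(" ", cmd));
            }
        }
    }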
> >> > >> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com> wrote:
> >> > >> > > >>> >
> >> > >> > > >>> > > Hey Marcus,
> >> > >> > > >>> > >
> >> > >> > > >>> > > I'm reviewing your e-mails as I implement the necessary
> >> > >> > > >>> > > methods in new classes.
> >> > >> > > >>> > >
> >> > >> > > >>> > > "So, referencing StorageAdaptor.java, createStoragePool
> >> > >> > > >>> > > accepts all of the pool data (host, port, name, path) which
> >> > >> > > >>> > > would be used to log the host into the initiator."
> >> > >> > > >>> > >
> >> > >> > > >>> > > Can you tell me, in my case, since a storage pool (primary
> >> > >> > > >>> > > storage) is actually the SAN, I wouldn't really be logging
> >> > >> > > >>> > > into anything at this point, correct?
> >> > >> > > >>> > >
> >> > >> > > >>> > > Also, what kind of capacity, available, and used bytes make
> >> > >> > > >>> > > sense to report for KVMStoragePool (since KVMStoragePool
> >> > >> > > >>> > > represents the SAN in my case and not an individual LUN)?
> >> > >> > > >>> > >
> >> > >> > > >>> > > Thanks!
> >> > >> > > >>> > >
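(Thinking out loud about my own question: one plausible answer is to report the
bytes the admin allotted to CloudStack as capacity and the sum of the LUNs
CloudStack has carved out as used. The sketch below is only the shape of that
idea, not the real KVMStoragePool interface:)

    // Hypothetical capacity accounting when the "pool" is really the whole SAN
    // (or the slice of it given to CloudStack), not an individual LUN.
    class SanBackedPoolStats {
        private final long capacityBytes;   // what the admin told CS it may use
        private long provisionedBytes;      // sum of LUN sizes CS has created

        SanBackedPoolStats(long capacityBytes) {
            this.capacityBytes = capacityBytes;
        }

        void onLunCreated(long sizeBytes)   { provisionedBytes += sizeBytes; }
        void onLunDeleted(long sizeBytes)   { provisionedBytes -= sizeBytes; }

        long getCapacity()  { return capacityBytes; }
        long getUsed()      { return provisionedBytes; }
        long getAvailable() { return capacityBytes - provisionedBytes; }
    }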
> >> > >> > > >>> > >
> >> > >> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:
> >> > >> > > >>> > >
> >> > >> > > >>> > > > Ok, KVM will be close to that, of course, because only
> >> > >> > > >>> > > > the hypervisor classes differ, the rest is all mgmt
> >> > >> > > >>> > > > server. Creating a volume is just a db entry until it's
> >> > >> > > >>> > > > deployed for the first time. AttachVolumeCommand on the
> >> > >> > > >>> > > > agent side (LibvirtStorageAdaptor.java is analogous to
> >> > >> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm commands
> >> > >> > > >>> > > > (via a KVM StorageAdaptor) to log in the host to the
> >> > >> > > >>> > > > target and then you have a block device.  Maybe libvirt
> >> > >> > > >>> > > > will do that for you, but my quick read made it sound
> >> > >> > > >>> > > > like the iscsi libvirt pool type is actually a pool, not
> >> > >> > > >>> > > > a lun or volume, so you'll need to figure out if that
> >> > >> > > >>> > > > works or if you'll have to use iscsiadm commands.
> >> > >> > > >>> > > >
> >> > >> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor (because
> >> > >> > > >>> > > > libvirt doesn't really manage your pool the way you
> >> > >> > > >>> > > > want), you're going to have to create a version of the
> >> > >> > > >>> > > > KVMStoragePool class and a StorageAdaptor class (see
> >> > >> > > >>> > > > LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> >> > >> > > >>> > > > implementing all of the methods, then in
> >> > >> > > >>> > > > KVMStorageManager.java there's a "_storageMapper" map.
> >> > >> > > >>> > > > This is used to select the correct adaptor; you can see
> >> > >> > > >>> > > > in this file that every call first pulls the correct
> >> > >> > > >>> > > > adaptor out of this map via getStorageAdaptor. So you can
> >> > >> > > >>> > > > see a comment in this file that says "add other storage
> >> > >> > > >>> > > > adaptors here", where it puts to this map; this is where
> >> > >> > > >>> > > > you'd register your adaptor.
> >> > >> > > >>> > > >
> >> > >> > > >>> > > > So, referencing StorageAdaptor.java, createStoragePool
> >> > >> > > >>> > > > accepts all of the pool data (host, port, name, path)
> >> > >> > > >>> > > > which would be used to log the host into the initiator.
> >> > >> > > >>> > > > I *believe* the method getPhysicalDisk will need to do
> >> > >> > > >>> > > > the work of attaching the lun.  AttachVolumeCommand calls
> >> > >> > > >>> > > > this and then creates the XML diskdef and attaches it to
> >> > >> > > >>> > > > the VM. Now, one thing you need to know is that
> >> > >> > > >>> > > > createStoragePool is called often, sometimes just to make
> >> > >> > > >>> > > > sure the pool is there. You may want to create a map in
> >> > >> > > >>> > > > your adaptor class and keep track of pools that have been
> >> > >> > > >>> > > > created; LibvirtStorageAdaptor doesn't have to do this
> >> > >> > > >>> > > > because it asks libvirt about which storage pools exist.
> >> > >> > > >>> > > > There are also calls to refresh the pool stats, and all
> >> > >> > > >>> > > > of the other calls can be seen in the StorageAdaptor as
> >> > >> > > >>> > > > well. There's a createPhysicalDisk, clone, etc., but it's
> >> > >> > > >>> > > > probably a hold-over from 4.1, as I have the vague idea
> >> > >> > > >>> > > > that volumes are created on the mgmt server via the
> >> > >> > > >>> > > > plugin now, so whatever doesn't apply can just be stubbed
> >> > >> > > >>> > > > out (or optionally extended/reimplemented here, if you
> >> > >> > > >>> > > > don't mind the hosts talking to the SAN API).
> >> > >> > > >>> > > >
> >> > >> > > >>> > > > There is a difference between attaching new volumes and
> >> > >> > > >>> > > > launching a VM with existing volumes.  In the latter
> >> > >> > > >>> > > > case, the VM definition that was passed to the KVM agent
> >> > >> > > >>> > > > includes the disks (StartCommand).
> >> > >> > > >>> > > >
> >> > >> > > >>> > > > I'd be interested in how your pool is defined for Xen; I
> >> > >> > > >>> > > > imagine it would need to be kept the same. Is it just a
> >> > >> > > >>> > > > definition to the SAN (ip address or some such, port
> >> > >> > > >>> > > > number) and perhaps a volume pool name?
> >> > >> > > >>> > > >
> >> > >> > > >>> > > > > If there is a way for me to update the ACL list on the
> >> > >> > > >>> > > > > SAN to have only a single KVM host have access to the
> >> > >> > > >>> > > > > volume, that would be ideal.
> >> > >> > > >>> > > >
> >> > >> > > >>> > > > That depends on your SAN API.  I was under the impression
> >> > >> > > >>> > > > that the storage plugin framework allowed for acls, or
> >> > >> > > >>> > > > for you to do whatever you want for
> >> > >> > > >>> > > > create/attach/delete/snapshot, etc.  You'd just call your
> >> > >> > > >>> > > > SAN API with the host info for the ACLs prior to when the
> >> > >> > > >>> > > > disk is attached (or the VM is started).  I'd have to
> >> > >> > > >>> > > > look more at the framework to know the details; in 4.1 I
> >> > >> > > >>> > > > would do this in getPhysicalDisk just prior to connecting
> >> > >> > > >>> > > > up the LUN.
> >> > >> > > >>> > > >
> >> > >> > > >>> > > >
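(To make the getPhysicalDisk idea concrete, here's roughly what an
iscsiadm-based login plus device lookup could look like on the agent. The
iscsiadm usage and the /dev/disk/by-path naming are standard Open-iSCSI/udev
behavior, but the class and method shapes are placeholders, not the real
StorageAdaptor interface, and the portal/LUN details would vary:)

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    class IscsiDiskSketch {
        // Log in to the target and return the block device the kernel exposes.
        // portal is expected to look like "10.0.0.5:3260".
        Path getPhysicalDisk(String portal, String iqn)
                throws IOException, InterruptedException {
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

            // udev creates a stable symlink for each iSCSI LUN; using it avoids
            // guessing which /dev/sdX just showed up.
            Path byPath = Paths.get("/dev/disk/by-path",
                    "ip-" + portal + "-iscsi-" + iqn + "-lun-0");
            if (!Files.exists(byPath)) {
                throw new IOException("iSCSI device not found: " + byPath);
            }
            return byPath.toRealPath(); // e.g. /dev/sdb
        }

        private void run(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + String.join(" ", cmd));
            }
        }
    }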
> >> > >> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> >> > >> > > >>> > > > <mi...@solidfire.com> wrote:
> >> > >> > > >>> > > > > OK, yeah, the ACL part will be interesting. That
> is a
> >> > bit
> >> > >> > > >>> > > > > different
> >> > >> > > >>> > > from
> >> > >> > > >>> > > > how
> >> > >> > > >>> > > > > it works with XenServer and VMware.
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > Just to give you an idea how it works in 4.2 with
> >> > >> XenServer:
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > * The user creates a CS volume (this is just
> recorded
> >> in
> >> > >> the
> >> > >> > > >>> > > > cloud.volumes
> >> > >> > > >>> > > > > table).
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > * The user attaches the volume as a disk to a VM
> for
> >> the
> >> > >> > first
> >> > >> > > >>> > > > > time
> >> > >> > > >>> > (if
> >> > >> > > >>> > > > the
> >> > >> > > >>> > > > > storage allocator picks the SolidFire plug-in, the
> >> > storage
> >> > >> > > >>> > > > > framework
> >> > >> > > >>> > > > invokes
> >> > >> > > >>> > > > > a method on the plug-in that creates a volume on
> the
> >> > >> > SAN...info
> >> > >> > > >>> > > > > like
> >> > >> > > >>> > > the
> >> > >> > > >>> > > > IQN
> >> > >> > > >>> > > > > of the SAN volume is recorded in the DB).
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > * CitrixResourceBase's
> execute(AttachVolumeCommand) is
> >> > >> > > executed.
> >> > >> > > >>> > > > > It
> >> > >> > > >>> > > > > determines based on a flag passed in that the
> storage
> >> in
> >> > >> > > question
> >> > >> > > >>> > > > > is
> >> > >> > > >>> > > > > "CloudStack-managed" storage (as opposed to
> >> > "traditional"
> >> > >> > > >>> > preallocated
> >> > >> > > >>> > > > > storage). This tells it to discover the iSCSI
> target.
> >> > Once
> >> > >> > > >>> > > > > discovered
> >> > >> > > >>> > > it
> >> > >> > > >>> > > > > determines if the iSCSI target already contains a
> >> > storage
> >> > >> > > >>> > > > > repository
> >> > >> > > >>> > > (it
> >> > >> > > >>> > > > > would if this were a re-attach situation). If it
> does
> >> > >> contain
> >> > >> > > an
> >> > >> > > >>> > > > > SR
> >> > >> > > >>> > > > already,
> >> > >> > > >>> > > > > then there should already be one VDI, as well. If
> >> there
> >> > >> is no
> >> > >> > > SR,
> >> > >> > > >>> > > > > an
> >> > >> > > >>> > SR
> >> > >> > > >>> > > > is
> >> > >> > > >>> > > > > created and a single VDI is created within it (that
> >> > takes
> >> > >> up
> >> > >> > > about
> >> > >> > > >>> > > > > as
> >> > >> > > >>> > > > much
> >> > >> > > >>> > > > > space as was requested for the CloudStack volume).
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > * The normal attach-volume logic continues (it
> depends
> >> > on
> >> > >> the
> >> > >> > > >>> > existence
> >> > >> > > >>> > > > of
> >> > >> > > >>> > > > > an SR and a VDI).
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > The VMware case is essentially the same (mainly
> just
> >> > >> > substitute
> >> > >> > > >>> > > datastore
> >> > >> > > >>> > > > > for SR and VMDK for VDI).
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > In both cases, all hosts in the cluster have
> >> discovered
> >> > >> the
> >> > >> > > iSCSI
> >> > >> > > >>> > > target,
> >> > >> > > >>> > > > > but only the host that is currently running the VM
> >> that
> >> > is
> >> > >> > > using
> >> > >> > > >>> > > > > the
> >> > >> > > >>> > > VDI
> >> > >> > > >>> > > > (or
> >> > >> > > >>> > > > > VMKD) is actually using the disk.
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > Live Migration should be OK because the hypervisors
> >> > >> > communicate
> >> > >> > > >>> > > > > with
> >> > >> > > >>> > > > > whatever metadata they have on the SR (or
> datastore).
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > I see what you're saying with KVM, though.
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > In that case, the hosts are clustered only in
> >> > CloudStack's
> >> > >> > > eyes.
> >> > >> > > >>> > > > > CS
> >> > >> > > >>> > > > controls
> >> > >> > > >>> > > > > Live Migration. You don't really need a clustered
> >> > >> filesystem
> >> > >> > on
> >> > >> > > >>> > > > > the
> >> > >> > > >>> > > LUN.
> >> > >> > > >>> > > > The
> >> > >> > > >>> > > > > LUN could be handed over raw to the VM using it.
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > If there is a way for me to update the ACL list on
> the
> >> > >> SAN to
> >> > >> > > have
> >> > >> > > >>> > > only a
> >> > >> > > >>> > > > > single KVM host have access to the volume, that
> would
> >> be
> >> > >> > ideal.
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > Also, I agree I'll need to use iscsiadm to discover
> >> and
> >> > >> log
> >> > >> > in
> >> > >> > > to
> >> > >> > > >>> > > > > the
> >> > >> > > >>> > > > iSCSI
> >> > >> > > >>> > > > > target. I'll also need to take the resultant new
> >> device
> >> > >> and
> >> > >> > > pass
> >> > >> > > >>> > > > > it
> >> > >> > > >>> > > into
> >> > >> > > >>> > > > the
> >> > >> > > >>> > > > > VM.
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > Does this sound reasonable? Please call me out on
> >> > >> anything I
> >> > >> > > seem
> >> > >> > > >>> > > > incorrect
> >> > >> > > >>> > > > > about. :)
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > Thanks for all the thought on this, Marcus!
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> >> > >> > > >>> > shadowsor@gmail.com>
> >> > >> > > >>> > > > > wrote:
> >> > >> > > >>> > > > >>
> >> > >> > > >>> > > > >> Perfect. You'll have a domain def ( the VM), a
> disk
> >> > def,
> >> > >> and
> >> > >> > > the
> >> > >> > > >>> > > attach
> >> > >> > > >>> > > > >> the disk def to the vm. You may need to do your
> own
> >> > >> > > >>> > > > >> StorageAdaptor
> >> > >> > > >>> > and
> >> > >> > > >>> > > > run
> >> > >> > > >>> > > > >> iscsiadm commands to accomplish that, depending on
> >> how
> >> > >> the
> >> > >> > > >>> > > > >> libvirt
> >> > >> > > >>> > > iscsi
> >> > >> > > >>> > > > >> works. My impression is that a 1:1:1
> pool/lun/volume
> >> > >> isn't
> >> > >> > > how it
> >> > >> > > >>> > > works
> >> > >> > > >>> > > > on
> >> > >> > > >>> > > > >> xen at the momen., nor is it ideal.
> >> > >> > > >>> > > > >>
> >> > >> > > >>> > > > >> Your plugin will handle acls as far as which host
> can
> >> > see
> >> > >> > > which
> >> > >> > > >>> > > > >> luns
> >> > >> > > >>> > > as
> >> > >> > > >>> > > > >> well, I remember discussing that months ago, so
> that
> >> a
> >> > >> disk
> >> > >> > > won't
> >> > >> > > >>> > > > >> be
> >> > >> > > >>> > > > >> connected until the hypervisor has exclusive
> access,
> >> so
> >> > >> it
> >> > >> > > will
> >> > >> > > >>> > > > >> be
> >> > >> > > >>> > > safe
> >> > >> > > >>> > > > and
> >> > >> > > >>> > > > >> fence the disk from rogue nodes that cloudstack
> loses
> >> > >> > > >>> > > > >> connectivity
> >> > >> > > >>> > > > with. It
> >> > >> > > >>> > > > >> should revoke access to everything but the target
> >> > host...
> >> > >> > > Except
> >> > >> > > >>> > > > >> for
> >> > >> > > >>> > > > during
> >> > >> > > >>> > > > >> migration but we can discuss that later, there's a
> >> > >> migration
> >> > >> > > prep
> >> > >> > > >>> > > > process
> >> > >> > > >>> > > > >> where the new host can be added to the acls, and
> the
> >> > old
> >> > >> > host
> >> > >> > > can
> >> > >> > > >>> > > > >> be
> >> > >> > > >>> > > > removed
> >> > >> > > >>> > > > >> post migration.
> >> > >> > > >>> > > > >>
> >> > >> > > >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> >> > >> > > >>> > > mike.tutkowski@solidfire.com
> >> > >> > > >>> > > > >
> >> > >> > > >>> > > > >> wrote:
> >> > >> > > >>> > > > >>>
> >> > >> > > >>> > > > >>> Yeah, that would be ideal.
> >> > >> > > >>> > > > >>>
> >> > >> > > >>> > > > >>> So, I would still need to discover the iSCSI
> target,
> >> > >> log in
> >> > >> > > to
> >> > >> > > >>> > > > >>> it,
> >> > >> > > >>> > > then
> >> > >> > > >>> > > > >>> figure out what /dev/sdX was created as a result
> >> (and
> >> > >> leave
> >> > >> > > it
> >> > >> > > >>> > > > >>> as
> >> > >> > > >>> > is
> >> > >> > > >>> > > -
> >> > >> > > >>> > > > do
> >> > >> > > >>> > > > >>> not format it with any file system...clustered or
> >> > not).
> >> > >> I
> >> > >> > > would
> >> > >> > > >>> > pass
> >> > >> > > >>> > > > that
> >> > >> > > >>> > > > >>> device into the VM.
> >> > >> > > >>> > > > >>>
> >> > >> > > >>> > > > >>> Kind of accurate?
> >> > >> > > >>> > > > >>>
> >> > >> > > >>> > > > >>>
> >> > >> > > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
> <
> >> > >> > > >>> > > shadowsor@gmail.com>
> >> > >> > > >>> > > > >>> wrote:
> >> > >> > > >>> > > > >>>>
> >> > >> > > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
> >> > >> > > definitions.
> >> > >> > > >>> > There
> >> > >> > > >>> > > > are
> >> > >> > > >>> > > > >>>> ones that work for block devices rather than
> files.
> >> > You
> >> > >> > can
> >> > >> > > >>> > > > >>>> piggy
> >> > >> > > >>> > > > back off
> >> > >> > > >>> > > > >>>> of the existing disk definitions and attach it
> to
> >> the
> >> > >> vm
> >> > >> > as
> >> > >> > > a
> >> > >> > > >>> > block
> >> > >> > > >>> > > > device.
> >> > >> > > >>> > > > >>>> The definition is an XML string per libvirt XML
> >> > format.
> >> > >> > You
> >> > >> > > may
> >> > >> > > >>> > want
> >> > >> > > >>> > > > to use
> >> > >> > > >>> > > > >>>> an alternate path to the disk rather than just
> >> > /dev/sdx
> >> > >> > > like I
> >> > >> > > >>> > > > mentioned,
> >> > >> > > >>> > > > >>>> there are by-id paths to the block devices, as
> well
> >> > as
> >> > >> > other
> >> > >> > > >>> > > > >>>> ones
> >> > >> > > >>> > > > that will
> >> > >> > > >>> > > > >>>> be consistent and easier for management, not
> sure
> >> how
> >> > >> > > familiar
> >> > >> > > >>> > > > >>>> you
> >> > >> > > >>> > > > are with
> >> > >> > > >>> > > > >>>> device naming on Linux.
> >> > >> > > >>> > > > >>>>
> >> > >> > > >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> >> > >> > > >>> > > > >>>> <sh...@gmail.com>
> >> > >> > > >>> > > > wrote:
> >> > >> > > >>> > > > >>>>>
> >> > >> > > >>> > > > >>>>> No, as that would rely on virtualized
> >> network/iscsi
> >> > >> > > initiator
> >> > >> > > >>> > > inside
> >> > >> > > >>> > > > >>>>> the vm, which also sucks. I mean attach
> /dev/sdx
> >> > (your
> >> > >> > lun
> >> > >> > > on
> >> > >> > > >>> > > > hypervisor) as
> >> > >> > > >>> > > > >>>>> a disk to the VM, rather than attaching some
> image
> >> > >> file
> >> > >> > > that
> >> > >> > > >>> > > resides
> >> > >> > > >>> > > > on a
> >> > >> > > >>> > > > >>>>> filesystem, mounted on the host, living on a
> >> target.
> >> > >> > > >>> > > > >>>>>
> >> > >> > > >>> > > > >>>>> Actually, if you plan on the storage supporting
> >> live
> >> > >> > > migration
> >> > >> > > >>> > > > >>>>> I
> >> > >> > > >>> > > > think
> >> > >> > > >>> > > > >>>>> this is the only way. You can't put a
> filesystem
> >> on
> >> > it
> >> > >> > and
> >> > >> > > >>> > > > >>>>> mount
> >> > >> > > >>> > it
> >> > >> > > >>> > > > in two
> >> > >> > > >>> > > > >>>>> places to facilitate migration unless its a
> >> > clustered
> >> > >> > > >>> > > > >>>>> filesystem,
> >> > >> > > >>> > > in
> >> > >> > > >>> > > > which
> >> > >> > > >>> > > > >>>>> case you're back to shared mount point.
> >> > >> > > >>> > > > >>>>>
> >> > >> > > >>> > > > >>>>> As far as I'm aware, the xenserver SR style is
> >> > >> basically
> >> > >> > > LVM
> >> > >> > > >>> > with a
> >> > >> > > >>> > > > xen
> >> > >> > > >>> > > > >>>>> specific cluster management, a custom CLVM.
> They
> >> > don't
> >> > >> > use
> >> > >> > > a
> >> > >> > > >>> > > > filesystem
> >> > >> > > >>> > > > >>>>> either.
> >> > >> > > >>> > > > >>>>>
> >> > >> > > >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> >> > >> > > >>> > > > >>>>> <mi...@solidfire.com> wrote:
> >> > >> > > >>> > > > >>>>>>
> >> > >> > > >>> > > > >>>>>> When you say, "wire up the lun directly to the
> >> vm,"
> >> > >> do
> >> > >> > you
> >> > >> > > >>> > > > >>>>>> mean
> >> > >> > > >>> > > > >>>>>> circumventing the hypervisor? I didn't think
> we
> >> > >> could do
> >> > >> > > that
> >> > >> > > >>> > > > >>>>>> in
> >> > >> > > >>> > > CS.
> >> > >> > > >>> > > > >>>>>> OpenStack, on the other hand, always
> circumvents
> >> > the
> >> > >> > > >>> > > > >>>>>> hypervisor,
> >> > >> > > >>> > > as
> >> > >> > > >>> > > > far as I
> >> > >> > > >>> > > > >>>>>> know.
> >> > >> > > >>> > > > >>>>>>
> >> > >> > > >>> > > > >>>>>>
> >> > >> > > >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus
> Sorensen
> >> <
> >> > >> > > >>> > > > shadowsor@gmail.com>
> >> > >> > > >>> > > > >>>>>> wrote:
> >> > >> > > >>> > > > >>>>>>>
> >> > >> > > >>> > > > >>>>>>> Better to wire up the lun directly to the vm
> >> > unless
> >> > >> > > there is
> >> > >> > > >>> > > > >>>>>>> a
> >> > >> > > >>> > > good
> >> > >> > > >>> > > > >>>>>>> reason not to.
> >> > >> > > >>> > > > >>>>>>>
> >> > >> > > >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> [snip - deeply nested quoted copies of the earlier thread messages trimmed]



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
That wasn't my question, but I feel we're getting off in the weeds and
I can just look at the storage framework to see how it works and what
options it supports.

On Wed, Sep 18, 2013 at 10:44 AM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> At this time, I am not aware of any other storage vendor with truly
> guaranteed QoS.
>
> Most implement QoS in a relative sense (like thread priorities).
>
>
> On Wed, Sep 18, 2013 at 7:57 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Yeah, that's why I thought it was specific to your implementation. Perhaps
>> that's true, then?
>> On Sep 18, 2013 12:04 AM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>
>> > I totally get where you're coming from with the tiered-pool approach,
>> > though.
>> >
>> > Prior to SolidFire, I worked at HP and the product I worked on allowed a
>> > single, clustered SAN to host multiple pools of storage. One pool might
>> be
>> > made up of all-SSD storage nodes while another pool might be made up of
>> > slower HDDs.
>> >
>> > That kind of tiering is not what SolidFire QoS is about, though, as that
>> > kind of tiering does not guarantee QoS.
>> >
>> > In the SolidFire SAN, QoS was designed in from the beginning and is
>> > extremely granular. Each volume has its own performance and capacity. You
>> > do not have to worry about Noisy Neighbors.
>> >
>> > The idea is to encourage businesses to trust the cloud with their most
>> > critical business applications at a price point on par with traditional
>> > SANs.
>> >
>> >
>> > On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski <
>> > mike.tutkowski@solidfire.com> wrote:
>> >
>> > > Ah, I think I see the miscommunication.
>> > >
>> > > I should have gone into a bit more detail about the SolidFire SAN.
>> > >
>> > > It is built from the ground up to support QoS on a LUN-by-LUN basis.
>> > Every
>> > > LUN is assigned a Min, Max, and Burst number of IOPS.
>> > >
>> > > The Min IOPS are a guaranteed number (as long as the SAN itself is not
>> > > over provisioned). Capacity and IOPS are provisioned independently.
>> > > Multiple volumes and multiple tenants using the same SAN do not suffer
>> > from
>> > > the Noisy Neighbor effect.
>> > >
>> > > When you create a Disk Offering in CS that is storage tagged to use
>> > > SolidFire primary storage, you specify a Min, Max, and Burst number of
>> > IOPS
>> > > to provision from the SAN for volumes created from that Disk Offering.
>> > >
>> > > There is no notion of RAID groups that you see in more traditional
>> SANs.
>> > > The SAN is built from clusters of storage nodes and data is replicated
>> > > amongst all SSDs in all storage nodes (this is an SSD-only SAN) in the
>> > > cluster to avoid hot spots and protect the data should drives and/or
>> > > nodes fail. You then scale the SAN by adding new storage nodes.
>> > >
>> > > Data is compressed and de-duplicated inline across the cluster and all
>> > > volumes are thinly provisioned.
>> > >
>> > >
>> > > On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <shadowsor@gmail.com
>> > >wrote:
>> > >
>> > >> I'm surprised there's no mention of pool on the SAN in your
>> description
>> > of
>> > >> the framework. I had assumed this was specific to your implementation,
>> > >> because normally SANs host multiple disk pools, maybe multiple RAID
>> 50s
>> > >> and
>> > >> 10s, or however the SAN admin wants to split it up. Maybe a pool
>> > intended
>> > >> for root disks and a separate one for data disks. Or one pool for
>> > >> cloudstack and one dedicated to some other internal db application.
>> But
>> > it
>> > >> sounds as though there's no place to specify which disks or pool on
>> the
>> > >> SAN
>> > >> to use.
>> > >>
>> > >> We implemented our own internal storage SAN plugin based on 4.1. We used
>> > >> the 'path' attribute of the primary storage pool object to specify which
>> > >> pool name on the back-end SAN to use, so we could create all-SSD pools and
>> > >> slower spindle pools, then differentiate between them based on storage
>> > >> tags. Normally the path attribute would be the mount point for NFS, but
>> > >> it's just a string. So when registering ours we enter the SAN DNS host
>> > >> name, the SAN's REST API port, and the pool name. Then LUNs created from
>> > >> that primary storage come from the matching disk pool on the SAN. We can
>> > >> create and register multiple pools of different types and purposes on the
>> > >> same SAN. We haven't yet gotten to porting it to the 4.2 framework, so it
>> > >> will be interesting to see what we can come up with to make it work
>> > >> similarly.
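
To make that concrete, the 'path' trick looks roughly like the sketch below. The
class and field names are made up for illustration, not our actual plugin code;
the only real point is that 'path' is an opaque string, so it can carry the SAN
details instead of an NFS mount point, and storage tags on the disk offering then
decide which registered pool (and therefore which SAN disk pool) a volume lands on.

    // Hypothetical helper -- parses a primary storage 'path' of the form
    // "san-mgmt-host:443/ssd-tier1" into SAN address, API port, and pool name.
    public class SanPoolPath {
        public final String sanHost;
        public final int apiPort;
        public final String poolName;

        private SanPoolPath(String sanHost, int apiPort, String poolName) {
            this.sanHost = sanHost;
            this.apiPort = apiPort;
            this.poolName = poolName;
        }

        public static SanPoolPath parse(String path) {
            String[] hostAndPool = path.split("/", 2);  // "host:port" and "poolName"
            String[] hostAndPort = hostAndPool[0].split(":", 2);
            return new SanPoolPath(hostAndPort[0],
                    Integer.parseInt(hostAndPort[1]),
                    hostAndPool[1]);
        }
    }
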
>> > >>  On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <
>> > mike.tutkowski@solidfire.com
>> > >> >
>> > >> wrote:
>> > >>
>> > >> > What you're saying here is definitely something we should talk
>> about.
>> > >> >
>> > >> > Hopefully my previous e-mail has clarified how this works a bit.
>> > >> >
>> > >> > It mainly comes down to this:
>> > >> >
>> > >> > For the first time in CS history, primary storage is no longer
>> > required
>> > >> to
>> > >> > be preallocated by the admin and then handed to CS. CS volumes don't
>> > >> have
>> > >> > to share a preallocated volume anymore.
>> > >> >
>> > >> > As of 4.2, primary storage can be based on a SAN (or some other
>> > storage
>> > >> > device). You can tell CS how many bytes and IOPS to use from this
>> > >> storage
>> > >> > device and CS invokes the appropriate plug-in to carve out LUNs
>> > >> > dynamically.
>> > >> >
>> > >> > Each LUN is home to one and only one data disk. Data disks - in this
>> > >> model
>> > >> > - never share a LUN.
>> > >> >
>> > >> > The main use case for this is so a CS volume can deliver guaranteed
>> > >> IOPS if
>> > >> > the storage device (ex. SolidFire SAN) delivers guaranteed IOPS on a
>> > >> > LUN-by-LUN basis.
>> > >> >
>> > >> >
>> > >> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <
>> > shadowsor@gmail.com
>> > >> > >wrote:
>> > >> >
>> > >> > > I guess whether or not a solidfire device is capable of hosting
>> > >> > > multiple disk pools is irrelevant, we'd hope that we could get the
>> > >> > > stats (maybe 30TB available, and 15TB allocated in LUNs). But if
>> > these
>> > >> > > stats aren't collected, I can't as an admin define multiple pools
>> > and
>> > >> > > expect cloudstack to allocate evenly from them or fill one up and
>> > move
>> > >> > > to the next, because it doesn't know how big it is.
>> > >> > >
>> > >> > > Ultimately this discussion has nothing to do with the KVM stuff
>> > >> > > itself, just a tangent, but something to think about.
>> > >> > >
>> > >> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <
>> > >> shadowsor@gmail.com>
>> > >> > > wrote:
>> > >> > > > Ok, on most storage pools it shows how many GB free/used when
>> > >> listing
>> > >> > > > the pool both via API and in the UI. I'm guessing those are
>> empty
>> > >> then
>> > >> > > > for the solid fire storage, but it seems like the user should
>> have
>> > >> to
>> > >> > > > define some sort of pool that the luns get carved out of, and
>> you
>> > >> > > > should be able to get the stats for that, right? Or is a solid
>> > fire
>> > >> > > > appliance only one pool per appliance? This isn't about billing,
>> > but
>> > >> > > > just so cloudstack itself knows whether or not there is space
>> left
>> > >> on
>> > >> > > > the storage device, so cloudstack can go on allocating from a
>> > >> > > > different primary storage as this one fills up. There are also
>> > >> > > > notifications and things. It seems like there should be a call
>> you
>> > >> can
>> > >> > > > handle for this, maybe Edison knows.
>> > >> > > >
>> > >> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <
>> > >> shadowsor@gmail.com>
>> > >> > > wrote:
>> > >> > > >> You respond to more than attach and detach, right? Don't you
>> > create
>> > >> > > luns as
>> > >> > > >> well? Or are you just referring to the hypervisor stuff?
>> > >> > > >>
>> > >> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
>> > >> > mike.tutkowski@solidfire.com
>> > >> > > >
>> > >> > > >> wrote:
>> > >> > > >>>
>> > >> > > >>> Hi Marcus,
>> > >> > > >>>
>> > >> > > >>> I never need to respond to a CreateStoragePool call for either
>> > >> > > >>> XenServer or VMware.
>> > >> > > >>>
>> > >> > > >>> What happens is I respond only to the Attach- and Detach-volume
>> > >> > > >>> commands.
>> > >> > > >>>
>> > >> > > >>> Let's say an attach comes in:
>> > >> > > >>>
>> > >> > > >>> In this case, I check to see if the storage is "managed." Talking
>> > >> > > >>> XenServer here, if it is, I log in to the LUN that is the disk we
>> > >> > > >>> want to attach. After, if this is the first time attaching this
>> > >> > > >>> disk, I create an SR and a VDI within the SR. If it is not the
>> > >> > > >>> first time attaching this disk, the LUN already has the SR and VDI
>> > >> > > >>> on it.
>> > >> > > >>>
>> > >> > > >>> Once this is done, I let the normal "attach" logic run because this
>> > >> > > >>> logic expected an SR and a VDI and now it has it.
>> > >> > > >>>
>> > >> > > >>> It's the same thing for VMware: Just substitute datastore for SR
>> > >> > > >>> and VMDK for VDI.
>> > >> > > >>>
>> > >> > > >>> Does that make sense?
>> > >> > > >>>
>> > >> > > >>> Thanks!
>> > >> > > >>>
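
Restating that flow as a rough sketch -- the interface below is a stand-in, not
the actual CitrixResourceBase or VMware resource code, and every method name is
invented just to show the branching:

    // Stand-in types only; restates the managed-storage attach flow described above.
    interface HypervisorStorage {
        void loginToTarget(String iqn, String sanHost);            // iSCSI login
        boolean repositoryExists(String iqn);                      // SR (Xen) / datastore (VMware)
        void createRepositoryAndDisk(String iqn, long sizeBytes);  // SR + VDI / datastore + VMDK
        void attachExistingDisk(String vmName, String iqn);        // the pre-existing attach logic
    }

    class ManagedAttachSketch {
        static void attachVolume(HypervisorStorage hv, String vmName, String iqn,
                                 String sanHost, long sizeBytes, boolean managed) {
            if (managed) {
                // Managed storage: the plugin already carved out the LUN, so the
                // host logs in and, on first attach only, lays down the SR/VDI
                // (or datastore/VMDK) that the normal attach code expects.
                hv.loginToTarget(iqn, sanHost);
                if (!hv.repositoryExists(iqn)) {
                    hv.createRepositoryAndDisk(iqn, sizeBytes);
                }
            }
            // From here the normal attach logic runs; it just needs the SR/VDI
            // (or datastore/VMDK) to exist, and now it does.
            hv.attachExistingDisk(vmName, iqn);
        }
    }
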
>> > >> > > >>>
>> > >> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
>> > >> > > >>> <sh...@gmail.com>wrote:
>> > >> > > >>>
>> > >> > > >>> > What do you do with Xen? I imagine the user enters the SAN details
>> > >> > > >>> > when registering the pool? And the pool details are basically just
>> > >> > > >>> > instructions on how to log into a target, correct?
>> > >> > > >>> >
>> > >> > > >>> > You can choose to log in a KVM host to the target during
>> > >> > > >>> > createStoragePool and save the pool in a map, or just save the pool
>> > >> > > >>> > info in a map for future reference by uuid, for when you do need to
>> > >> > > >>> > log in. The createStoragePool then just becomes a way to save the
>> > >> > > >>> > pool info to the agent. Personally, I'd log in on the pool create
>> > >> > > >>> > and look/scan for specific luns when they're needed, but I haven't
>> > >> > > >>> > thought it through thoroughly. I just say that mainly because login
>> > >> > > >>> > only happens once, the first time the pool is used, and every other
>> > >> > > >>> > storage command is about discovering new luns or maybe
>> > >> > > >>> > deleting/disconnecting luns no longer needed. On the other hand,
>> > >> > > >>> > you could do all of the above: log in on pool create, then also
>> > >> > > >>> > check if you're logged in on other commands and log in if you've
>> > >> > > >>> > lost connection.
>> > >> > > >>> >
>> > >> > > >>> > With Xen, what does your registered pool show in the UI for
>> > >> > > >>> > avail/used capacity, and how does it get that info? I assume there
>> > >> > > >>> > is some sort of disk pool that the luns are carved from, and that
>> > >> > > >>> > your plugin is called to talk to the SAN and expose to the user how
>> > >> > > >>> > much of that pool has been allocated. Knowing how you already solve
>> > >> > > >>> > these problems with Xen will help figure out what to do with KVM.
>> > >> > > >>> >
>> > >> > > >>> > If this is the case, I think the plugin can continue to handle it
>> > >> > > >>> > rather than getting details from the agent. I'm not sure if that
>> > >> > > >>> > means nulls are OK for these on the agent side or what, I need to
>> > >> > > >>> > look at the storage plugin arch more closely.
>> > >> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
>> > >> > > mike.tutkowski@solidfire.com>
>> > >> > > >>> > wrote:
>> > >> > > >>> >
>> > >> > > >>> > > Hey Marcus,
>> > >> > > >>> > >
>> > >> > > >>> > > I'm reviewing your e-mails as I implement the necessary
>> > >> methods
>> > >> > in
>> > >> > > new
>> > >> > > >>> > > classes.
>> > >> > > >>> > >
>> > >> > > >>> > > "So, referencing StorageAdaptor.java, createStoragePool
>> > >> accepts
>> > >> > > all of
>> > >> > > >>> > > the pool data (host, port, name, path) which would be used
>> > to
>> > >> log
>> > >> > > the
>> > >> > > >>> > > host into the initiator."
>> > >> > > >>> > >
>> > >> > > >>> > > Can you tell me, in my case, since a storage pool (primary
>> > >> > > storage) is
>> > >> > > >>> > > actually the SAN, I wouldn't really be logging into
>> anything
>> > >> at
>> > >> > > this
>> > >> > > >>> > point,
>> > >> > > >>> > > correct?
>> > >> > > >>> > >
>> > >> > > >>> > > Also, what kind of capacity, available, and used bytes
>> make
>> > >> sense
>> > >> > > to
>> > >> > > >>> > report
>> > >> > > >>> > > for KVMStoragePool (since KVMStoragePool represents the
>> SAN
>> > >> in my
>> > >> > > case
>> > >> > > >>> > and
>> > >> > > >>> > > not an individual LUN)?
>> > >> > > >>> > >
>> > >> > > >>> > > Thanks!
>> > >> > > >>> > >
>> > >> > > >>> > >
>> > >> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
>> > >> > > shadowsor@gmail.com
>> > >> > > >>> > > >wrote:
>> > >> > > >>> > >
>> > >> > > >>> > > > Ok, KVM will be close to that, of course, because only the
>> > >> > > >>> > > > hypervisor classes differ, the rest is all mgmt server. Creating
>> > >> > > >>> > > > a volume is just a db entry until it's deployed for the first
>> > >> > > >>> > > > time. AttachVolumeCommand on the agent side
>> > >> > > >>> > > > (LibvirtStorageAdaptor.java is analogous to
>> > >> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a
>> > >> > > >>> > > > KVM StorageAdaptor) to log in the host to the target and then
>> > >> > > >>> > > > you have a block device. Maybe libvirt will do that for you, but
>> > >> > > >>> > > > my quick read made it sound like the iscsi libvirt pool type is
>> > >> > > >>> > > > actually a pool, not a lun or volume, so you'll need to figure
>> > >> > > >>> > > > out if that works or if you'll have to use iscsiadm commands.
>> > >> > > >>> > > >
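
For reference, the open-iscsi sequence I mean is roughly the sketch below. It
only shows the commands; real agent code would go through the agent's script
utilities and handle errors and command output properly.

    import java.io.IOException;
    import java.util.Arrays;

    // Sketch: discover and log in to an iSCSI target with iscsiadm, then hand
    // back the block device udev creates for LUN 0.
    public class IscsiLoginSketch {

        static void run(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + Arrays.toString(cmd));
            }
        }

        public static String loginAndGetDevice(String portalIp, int port, String iqn)
                throws IOException, InterruptedException {
            String portal = portalIp + ":" + port;
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
            // open-iscsi/udev expose the LUN at a predictable by-path name
            return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
        }
    }
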
>> > >> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor (because
>> > >> > > >>> > > > Libvirt doesn't really manage your pool the way you want),
>> > >> > > >>> > > > you're going to have to create a version of KVMStoragePool
>> > >> > > >>> > > > class and a StorageAdaptor class (see LibvirtStoragePool.java
>> > >> > > >>> > > > and LibvirtStorageAdaptor.java), implementing all of the
>> > >> > > >>> > > > methods, then in KVMStorageManager.java there's a
>> > >> > > >>> > > > "_storageMapper" map. This is used to select the correct
>> > >> > > >>> > > > adaptor, you can see in this file that every call first pulls
>> > >> > > >>> > > > the correct adaptor out of this map via getStorageAdaptor. So
>> > >> > > >>> > > > you can see a comment in this file that says "add other storage
>> > >> > > >>> > > > adaptors here", where it puts to this map, this is where you'd
>> > >> > > >>> > > > register your adaptor.
>> > >> > > >>> > > >
>> > >> > > >>> > > > So, referencing StorageAdaptor.java, createStoragePool accepts
>> > >> > > >>> > > > all of the pool data (host, port, name, path) which would be
>> > >> > > >>> > > > used to log the host into the initiator. I *believe* the method
>> > >> > > >>> > > > getPhysicalDisk will need to do the work of attaching the lun.
>> > >> > > >>> > > > AttachVolumeCommand calls this and then creates the XML diskdef
>> > >> > > >>> > > > and attaches it to the VM. Now, one thing you need to know is
>> > >> > > >>> > > > that createStoragePool is called often, sometimes just to make
>> > >> > > >>> > > > sure the pool is there. You may want to create a map in your
>> > >> > > >>> > > > adaptor class and keep track of pools that have been created,
>> > >> > > >>> > > > LibvirtStorageAdaptor doesn't have to do this because it asks
>> > >> > > >>> > > > libvirt about which storage pools exist. There are also calls
>> > >> > > >>> > > > to refresh the pool stats, and all of the other calls can be
>> > >> > > >>> > > > seen in the StorageAdaptor as well. There's a
>> > >> > > >>> > > > createPhysicalDisk, clone, etc., but it's probably a hold-over
>> > >> > > >>> > > > from 4.1, as I have the vague idea that volumes are created on
>> > >> > > >>> > > > the mgmt server via the plugin now, so whatever doesn't apply
>> > >> > > >>> > > > can just be stubbed out (or optionally extended/reimplemented
>> > >> > > >>> > > > here, if you don't mind the hosts talking to the san api).
>> > >> > > >>> > > >
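
Very roughly, the shape would be something like the skeleton below. Every name
here is a placeholder -- the real StorageAdaptor interface has more methods and
different signatures -- it just shows the pool map plus where the per-LUN work
would go, before the adaptor gets registered in KVMStorageManager's
_storageMapper.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Placeholder skeleton, not the real StorageAdaptor interface.
    public class SanStorageAdaptorSketch {

        // What was handed to us at pool-create time (SAN host, API port, pool path).
        static class SanPool {
            final String uuid, sanHost, poolPath;
            final int apiPort;
            SanPool(String uuid, String sanHost, int apiPort, String poolPath) {
                this.uuid = uuid; this.sanHost = sanHost;
                this.apiPort = apiPort; this.poolPath = poolPath;
            }
        }

        // createStoragePool is called often (sometimes just as an "is it there?"
        // check) and there's no libvirt to ask, so remember the pools ourselves.
        private final Map<String, SanPool> pools = new ConcurrentHashMap<String, SanPool>();

        public SanPool createStoragePool(String uuid, String host, int apiPort, String path) {
            SanPool pool = pools.get(uuid);
            if (pool == null) {
                pool = new SanPool(uuid, host, apiPort, path);
                pools.put(uuid, pool);
            }
            return pool;
        }

        public SanPool getStoragePool(String uuid) {
            return pools.get(uuid);
        }

        // getPhysicalDisk is where the per-LUN work would happen: iscsiadm
        // discovery/login against the target (as in the earlier sketch), then
        // return the device path used to build the XML disk definition.
        public String getPhysicalDisk(String poolUuid, String iqn, String targetIp) {
            // ... run the iscsiadm login here ...
            return "/dev/disk/by-path/ip-" + targetIp + ":3260-iscsi-" + iqn + "-lun-0";
        }
    }
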
>> > >> > > >>> > > > There is a difference between attaching new volumes and
>> > >> > > >>> > > > launching a VM with existing volumes. In the latter case, the
>> > >> > > >>> > > > VM definition that was passed to the KVM agent includes the
>> > >> > > >>> > > > disks, (StartCommand).
>> > >> > > >>> > > >
>> > >> > > >>> > > > I'd be interested in how your pool is defined for Xen, I
>> > >> > > >>> > > > imagine it would need to be kept the same. Is it just a
>> > >> > > >>> > > > definition to the SAN (ip address or some such, port number)
>> > >> > > >>> > > > and perhaps a volume pool name?
>> > >> > > >>> > > >
>> > >> > > >>> > > > > If there is a way for me to update the ACL list on the SAN to
>> > >> > > >>> > > > > have only a single KVM host have access to the volume, that
>> > >> > > >>> > > > > would be ideal.
>> > >> > > >>> > > >
>> > >> > > >>> > > > That depends on your SAN API.  I was under the
>> impression
>> > >> that
>> > >> > > the
>> > >> > > >>> > > > storage plugin framework allowed for acls, or for you to
>> > do
>> > >> > > whatever
>> > >> > > >>> > > > you want for create/attach/delete/snapshot, etc. You'd
>> > just
>> > >> > call
>> > >> > > >>> > > > your
>> > >> > > >>> > > > SAN API with the host info for the ACLs prior to when
>> the
>> > >> disk
>> > >> > is
>> > >> > > >>> > > > attached (or the VM is started).  I'd have to look more
>> at
>> > >> the
>> > >> > > >>> > > > framework to know the details, in 4.1 I would do this in
>> > >> > > >>> > > > getPhysicalDisk just prior to connecting up the LUN.
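
To make that concrete, here is a very rough sketch of the sort of adaptor being
described. This is not the real StorageAdaptor interface; the class name, method
signatures, by-path naming, and iscsiadm handling are all illustrative and would
need to be checked against StorageAdaptor.java and the actual hosts:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch only; the real method signatures live in StorageAdaptor.java.
    public class ManagedIscsiAdaptor {

        // createStoragePool() is called repeatedly for the same pool, so keep a
        // map of what has already been set up instead of asking libvirt about it.
        private final Map<String, IscsiPool> pools = new HashMap<String, IscsiPool>();

        public synchronized IscsiPool createStoragePool(String uuid, String host,
                int port, String path) {
            IscsiPool pool = pools.get(uuid);
            if (pool == null) {
                pool = new IscsiPool(uuid, host, port, path);
                pools.put(uuid, pool);
            }
            return pool;
        }

        // Log the host into the target and hand back a stable device path that
        // the disk definition can point at.
        public String getPhysicalDisk(String iqn, IscsiPool pool) throws Exception {
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", pool.portal());
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", pool.portal(), "--login");
            return "/dev/disk/by-path/ip-" + pool.portal() + "-iscsi-" + iqn + "-lun-0";
        }

        private void run(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new RuntimeException("command failed: " + java.util.Arrays.toString(cmd));
            }
        }

        // Minimal holder for the pool data the framework passes in.
        public static class IscsiPool {
            final String uuid;
            final String host;
            final int port;
            final String path;

            public IscsiPool(String uuid, String host, int port, String path) {
                this.uuid = uuid;
                this.host = host;
                this.port = port;
                this.path = path;
            }

            public String portal() {
                return host + ":" + port;
            }
        }
    }
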
>> > >> > > >>> > > >
>> > >> > > >>> > > >
>> > >> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> > >> > > >>> > > > <mi...@solidfire.com> wrote:
>> > >> > > >>> > > > > OK, yeah, the ACL part will be interesting. That is a
>> > bit
>> > >> > > >>> > > > > different
>> > >> > > >>> > > from
>> > >> > > >>> > > > how
>> > >> > > >>> > > > > it works with XenServer and VMware.
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > Just to give you an idea how it works in 4.2 with
>> > >> XenServer:
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > * The user creates a CS volume (this is just recorded
>> in
>> > >> the
>> > >> > > >>> > > > cloud.volumes
>> > >> > > >>> > > > > table).
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > * The user attaches the volume as a disk to a VM for
>> the
>> > >> > first
>> > >> > > >>> > > > > time
>> > >> > > >>> > (if
>> > >> > > >>> > > > the
>> > >> > > >>> > > > > storage allocator picks the SolidFire plug-in, the
>> > storage
>> > >> > > >>> > > > > framework
>> > >> > > >>> > > > invokes
>> > >> > > >>> > > > > a method on the plug-in that creates a volume on the
>> > >> > SAN...info
>> > >> > > >>> > > > > like
>> > >> > > >>> > > the
>> > >> > > >>> > > > IQN
>> > >> > > >>> > > > > of the SAN volume is recorded in the DB).
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
>> > >> > > executed.
>> > >> > > >>> > > > > It
>> > >> > > >>> > > > > determines based on a flag passed in that the storage
>> in
>> > >> > > question
>> > >> > > >>> > > > > is
>> > >> > > >>> > > > > "CloudStack-managed" storage (as opposed to
>> > "traditional"
>> > >> > > >>> > preallocated
>> > >> > > >>> > > > > storage). This tells it to discover the iSCSI target.
>> > Once
>> > >> > > >>> > > > > discovered
>> > >> > > >>> > > it
>> > >> > > >>> > > > > determines if the iSCSI target already contains a
>> > storage
>> > >> > > >>> > > > > repository
>> > >> > > >>> > > (it
>> > >> > > >>> > > > > would if this were a re-attach situation). If it does
>> > >> contain
>> > >> > > an
>> > >> > > >>> > > > > SR
>> > >> > > >>> > > > already,
>> > >> > > >>> > > > > then there should already be one VDI, as well. If
>> there
>> > >> is no
>> > >> > > SR,
>> > >> > > >>> > > > > an
>> > >> > > >>> > SR
>> > >> > > >>> > > > is
>> > >> > > >>> > > > > created and a single VDI is created within it (that
>> > takes
>> > >> up
>> > >> > > about
>> > >> > > >>> > > > > as
>> > >> > > >>> > > > much
>> > >> > > >>> > > > > space as was requested for the CloudStack volume).
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > * The normal attach-volume logic continues (it depends
>> > on
>> > >> the
>> > >> > > >>> > existence
>> > >> > > >>> > > > of
>> > >> > > >>> > > > > an SR and a VDI).
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > The VMware case is essentially the same (mainly just
>> > >> > substitute
>> > >> > > >>> > > datastore
>> > >> > > >>> > > > > for SR and VMDK for VDI).
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > In both cases, all hosts in the cluster have
>> discovered
>> > >> the
>> > >> > > iSCSI
>> > >> > > >>> > > target,
>> > >> > > >>> > > > > but only the host that is currently running the VM
>> that
>> > is
>> > >> > > using
>> > >> > > >>> > > > > the
>> > >> > > >>> > > VDI
>> > >> > > >>> > > > (or
>> > >> > > >>> > > > > VMDK) is actually using the disk.
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > Live Migration should be OK because the hypervisors
>> > >> > communicate
>> > >> > > >>> > > > > with
>> > >> > > >>> > > > > whatever metadata they have on the SR (or datastore).
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > I see what you're saying with KVM, though.
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > In that case, the hosts are clustered only in
>> > CloudStack's
>> > >> > > eyes.
>> > >> > > >>> > > > > CS
>> > >> > > >>> > > > controls
>> > >> > > >>> > > > > Live Migration. You don't really need a clustered
>> > >> filesystem
>> > >> > on
>> > >> > > >>> > > > > the
>> > >> > > >>> > > LUN.
>> > >> > > >>> > > > The
>> > >> > > >>> > > > > LUN could be handed over raw to the VM using it.
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > If there is a way for me to update the ACL list on the
>> > >> SAN to
>> > >> > > have
>> > >> > > >>> > > only a
>> > >> > > >>> > > > > single KVM host have access to the volume, that would
>> be
>> > >> > ideal.
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > Also, I agree I'll need to use iscsiadm to discover
>> and
>> > >> log
>> > >> > in
>> > >> > > to
>> > >> > > >>> > > > > the
>> > >> > > >>> > > > iSCSI
>> > >> > > >>> > > > > target. I'll also need to take the resultant new
>> device
>> > >> and
>> > >> > > pass
>> > >> > > >>> > > > > it
>> > >> > > >>> > > into
>> > >> > > >>> > > > the
>> > >> > > >>> > > > > VM.
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > Does this sound reasonable? Please call me out on
>> > >> anything I
>> > >> > > seem
>> > >> > > >>> > > > incorrect
>> > >> > > >>> > > > > about. :)
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > Thanks for all the thought on this, Marcus!
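
For the discovery and login piece mentioned above, the usual Open-iSCSI sequence
looks roughly like this (portal address and IQN are placeholders), with the
matching --logout run at detach time:

    iscsiadm -m discovery -t sendtargets -p 10.1.1.5:3260
    iscsiadm -m node -T iqn.2010-01.com.solidfire:volume-1 -p 10.1.1.5:3260 --login
    # the LUN then appears under a stable name such as
    #   /dev/disk/by-path/ip-10.1.1.5:3260-iscsi-iqn.2010-01.com.solidfire:volume-1-lun-0
    iscsiadm -m node -T iqn.2010-01.com.solidfire:volume-1 -p 10.1.1.5:3260 --logout
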
>> > >> > > >>> > > > >
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
>> > >> > > >>> > shadowsor@gmail.com>
>> > >> > > >>> > > > > wrote:
>> > >> > > >>> > > > >>
>> > >> > > >>> > > > >> Perfect. You'll have a domain def ( the VM), a disk
>> > def,
>> > >> and
>> > >> > > the
>> > >> > > >>> > > attach
>> > >> > > >>> > > > >> the disk def to the vm. You may need to do your own
>> > >> > > >>> > > > >> StorageAdaptor
>> > >> > > >>> > and
>> > >> > > >>> > > > run
>> > >> > > >>> > > > >> iscsiadm commands to accomplish that, depending on
>> how
>> > >> the
>> > >> > > >>> > > > >> libvirt
>> > >> > > >>> > > iscsi
>> > >> > > >>> > > > >> works. My impression is that a 1:1:1 pool/lun/volume
>> > >> isn't
>> > >> > > how it
>> > >> > > >>> > > works
>> > >> > > >>> > > > on
>> > >> > > >>> > > > >> xen at the moment, nor is it ideal.
>> > >> > > >>> > > > >>
>> > >> > > >>> > > > >> Your plugin will handle acls as far as which host can
>> > see
>> > >> > > which
>> > >> > > >>> > > > >> luns
>> > >> > > >>> > > as
>> > >> > > >>> > > > >> well, I remember discussing that months ago, so that
>> a
>> > >> disk
>> > >> > > won't
>> > >> > > >>> > > > >> be
>> > >> > > >>> > > > >> connected until the hypervisor has exclusive access,
>> so
>> > >> it
>> > >> > > will
>> > >> > > >>> > > > >> be
>> > >> > > >>> > > safe
>> > >> > > >>> > > > and
>> > >> > > >>> > > > >> fence the disk from rogue nodes that cloudstack loses
>> > >> > > >>> > > > >> connectivity
>> > >> > > >>> > > > with. It
>> > >> > > >>> > > > >> should revoke access to everything but the target
>> > host...
>> > >> > > Except
>> > >> > > >>> > > > >> for
>> > >> > > >>> > > > during
>> > >> > > >>> > > > >> migration but we can discuss that later, there's a
>> > >> migration
>> > >> > > prep
>> > >> > > >>> > > > process
>> > >> > > >>> > > > >> where the new host can be added to the acls, and the
>> > old
>> > >> > host
>> > >> > > can
>> > >> > > >>> > > > >> be
>> > >> > > >>> > > > removed
>> > >> > > >>> > > > >> post migration.
>> > >> > > >>> > > > >>
>> > >> > > >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>> > >> > > >>> > > mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >
>> > >> > > >>> > > > >> wrote:
>> > >> > > >>> > > > >>>
>> > >> > > >>> > > > >>> Yeah, that would be ideal.
>> > >> > > >>> > > > >>>
>> > >> > > >>> > > > >>> So, I would still need to discover the iSCSI target,
>> > >> log in
>> > >> > > to
>> > >> > > >>> > > > >>> it,
>> > >> > > >>> > > then
>> > >> > > >>> > > > >>> figure out what /dev/sdX was created as a result
>> (and
>> > >> leave
>> > >> > > it
>> > >> > > >>> > > > >>> as
>> > >> > > >>> > is
>> > >> > > >>> > > -
>> > >> > > >>> > > > do
>> > >> > > >>> > > > >>> not format it with any file system...clustered or
>> > not).
>> > >> I
>> > >> > > would
>> > >> > > >>> > pass
>> > >> > > >>> > > > that
>> > >> > > >>> > > > >>> device into the VM.
>> > >> > > >>> > > > >>>
>> > >> > > >>> > > > >>> Kind of accurate?
>> > >> > > >>> > > > >>>
>> > >> > > >>> > > > >>>
>> > >> > > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
>> > >> > > >>> > > shadowsor@gmail.com>
>> > >> > > >>> > > > >>> wrote:
>> > >> > > >>> > > > >>>>
>> > >> > > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
>> > >> > > definitions.
>> > >> > > >>> > There
>> > >> > > >>> > > > are
>> > >> > > >>> > > > >>>> ones that work for block devices rather than files.
>> > You
>> > >> > can
>> > >> > > >>> > > > >>>> piggy
>> > >> > > >>> > > > back off
>> > >> > > >>> > > > >>>> of the existing disk definitions and attach it to
>> the
>> > >> vm
>> > >> > as
>> > >> > > a
>> > >> > > >>> > block
>> > >> > > >>> > > > device.
>> > >> > > >>> > > > >>>> The definition is an XML string per libvirt XML
>> > format.
>> > >> > You
>> > >> > > may
>> > >> > > >>> > want
>> > >> > > >>> > > > to use
>> > >> > > >>> > > > >>>> an alternate path to the disk rather than just
>> > /dev/sdx
>> > >> > > like I
>> > >> > > >>> > > > mentioned,
>> > >> > > >>> > > > >>>> there are by-id paths to the block devices, as well
>> > as
>> > >> > other
>> > >> > > >>> > > > >>>> ones
>> > >> > > >>> > > > that will
>> > >> > > >>> > > > >>>> be consistent and easier for management, not sure
>> how
>> > >> > > familiar
>> > >> > > >>> > > > >>>> you
>> > >> > > >>> > > > are with
>> > >> > > >>> > > > >>>> device naming on Linux.
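
As a rough illustration, a block-device disk definition of the kind described
here looks like the following libvirt XML (the by-path device name is a made-up
example; the exact XML LibvirtVMDef emits may differ):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-path/ip-10.1.1.5:3260-iscsi-iqn.2010-01.com.solidfire:volume-1-lun-0'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
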
>> > >> > > >>> > > > >>>>
>> > >> > > >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>> > >> > > >>> > > > >>>> <sh...@gmail.com>
>> > >> > > >>> > > > wrote:
>> > >> > > >>> > > > >>>>>
>> > >> > > >>> > > > >>>>> No, as that would rely on virtualized
>> network/iscsi
>> > >> > > initiator
>> > >> > > >>> > > inside
>> > >> > > >>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx
>> > (your
>> > >> > lun
>> > >> > > on
>> > >> > > >>> > > > hypervisor) as
>> > >> > > >>> > > > >>>>> a disk to the VM, rather than attaching some image
>> > >> file
>> > >> > > that
>> > >> > > >>> > > resides
>> > >> > > >>> > > > on a
>> > >> > > >>> > > > >>>>> filesystem, mounted on the host, living on a
>> target.
>> > >> > > >>> > > > >>>>>
>> > >> > > >>> > > > >>>>> Actually, if you plan on the storage supporting
>> live
>> > >> > > migration
>> > >> > > >>> > > > >>>>> I
>> > >> > > >>> > > > think
>> > >> > > >>> > > > >>>>> this is the only way. You can't put a filesystem
>> on
>> > it
>> > >> > and
>> > >> > > >>> > > > >>>>> mount
>> > >> > > >>> > it
>> > >> > > >>> > > > in two
>> > >> > > >>> > > > >>>>> places to facilitate migration unless its a
>> > clustered
>> > >> > > >>> > > > >>>>> filesystem,
>> > >> > > >>> > > in
>> > >> > > >>> > > > which
>> > >> > > >>> > > > >>>>> case you're back to shared mount point.
>> > >> > > >>> > > > >>>>>
>> > >> > > >>> > > > >>>>> As far as I'm aware, the xenserver SR style is
>> > >> basically
>> > >> > > LVM
>> > >> > > >>> > with a
>> > >> > > >>> > > > xen
>> > >> > > >>> > > > >>>>> specific cluster management, a custom CLVM. They
>> > don't
>> > >> > use
>> > >> > > a
>> > >> > > >>> > > > filesystem
>> > >> > > >>> > > > >>>>> either.
>> > >> > > >>> > > > >>>>>
>> > >> > > >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>> > >> > > >>> > > > >>>>> <mi...@solidfire.com> wrote:
>> > >> > > >>> > > > >>>>>>
>> > >> > > >>> > > > >>>>>> When you say, "wire up the lun directly to the
>> vm,"
>> > >> do
>> > >> > you
>> > >> > > >>> > > > >>>>>> mean
>> > >> > > >>> > > > >>>>>> circumventing the hypervisor? I didn't think we
>> > >> could do
>> > >> > > that
>> > >> > > >>> > > > >>>>>> in
>> > >> > > >>> > > CS.
>> > >> > > >>> > > > >>>>>> OpenStack, on the other hand, always circumvents
>> > the
>> > >> > > >>> > > > >>>>>> hypervisor,
>> > >> > > >>> > > as
>> > >> > > >>> > > > far as I
>> > >> > > >>> > > > >>>>>> know.
>> > >> > > >>> > > > >>>>>>
>> > >> > > >>> > > > >>>>>>
>> > >> > > >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
>> <
>> > >> > > >>> > > > shadowsor@gmail.com>
>> > >> > > >>> > > > >>>>>> wrote:
>> > >> > > >>> > > > >>>>>>>
>> > >> > > >>> > > > >>>>>>> Better to wire up the lun directly to the vm
>> > unless
>> > >> > > there is
>> > >> > > >>> > > > >>>>>>> a
>> > >> > > >>> > > good
>> > >> > > >>> > > > >>>>>>> reason not to.
>> > >> > > >>> > > > >>>>>>>
>> > >> > > >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
>> > >> > > >>> > shadowsor@gmail.com>
>> > >> > > >>> > > > >>>>>>> wrote:
>> > >> > > >>> > > > >>>>>>>>
>> > >> > > >>> > > > >>>>>>>> You could do that, but as mentioned I think
>> its a
>> > >> > > mistake
>> > >> > > >>> > > > >>>>>>>> to
>> > >> > > >>> > go
>> > >> > > >>> > > to
>> > >> > > >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS
>> > >> volumes to
>> > >> > > luns
>> > >> > > >>> > and
>> > >> > > >>> > > > then putting
>> > >> > > >>> > > > >>>>>>>> a filesystem on it, mounting it, and then
>> > putting a
>> > >> > > QCOW2
>> > >> > > >>> > > > >>>>>>>> or
>> > >> > > >>> > > even
>> > >> > > >>> > > > RAW disk
>> > >> > > >>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of
>> > iops
>> > >> > > along
>> > >> > > >>> > > > >>>>>>>> the
>> > >> > > >>> > > > way, and have
>> > >> > > >>> > > > >>>>>>>> more overhead with the filesystem and its
>> > >> journaling,
>> > >> > > etc.
>> > >> > > >>> > > > >>>>>>>>
>> > >> > > >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>> > >> > > >>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
>> > >> > > >>> > > > >>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground
>> > in
>> > >> KVM
>> > >> > > with
>> > >> > > >>> > CS.
>> > >> > > >>> > > > >>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS
>> > >> today
>> > >> > > is by
>> > >> > > >>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the
>> > >> > location
>> > >> > > of
>> > >> > > >>> > > > >>>>>>>>> the
>> > >> > > >>> > > > share.
>> > >> > > >>> > > > >>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>> They can set up their share using Open iSCSI
>> by
>> > >> > > >>> > > > >>>>>>>>> discovering
>> > >> > > >>> > > their
>> > >> > > >>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting
>> it
>> > >> > > somewhere
>> > >> > > >>> > > > >>>>>>>>> on
>> > >> > > >>> > > > their file
>> > >> > > >>> > > > >>>>>>>>> system.
>> > >> > > >>> > > > >>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>> Would it make sense for me to just do that
>> > >> discovery,
>> > >> > > >>> > > > >>>>>>>>> logging
>> > >> > > >>> > > in,
>> > >> > > >>> > > > >>>>>>>>> and mounting behind the scenes for them and
>> > >> letting
>> > >> > the
>> > >> > > >>> > current
>> > >> > > >>> > > > code manage
>> > >> > > >>> > > > >>>>>>>>> the rest as it currently does?
>> > >> > > >>> > > > >>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus
>> Sorensen
>> > >> > > >>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
>> > >> > > >>> > > > >>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit
>> different. I
>> > >> need
>> > >> > > to
>> > >> > > >>> > catch
>> > >> > > >>> > > up
>> > >> > > >>> > > > >>>>>>>>>> on the work done in KVM, but this is
>> basically
>> > >> just
>> > >> > > disk
>> > >> > > >>> > > > snapshots + memory
>> > >> > > >>> > > > >>>>>>>>>> dump. I still think disk snapshots would
>> > >> preferably
>> > >> > be
>> > >> > > >>> > handled
>> > >> > > >>> > > > by the SAN,
>> > >> > > >>> > > > >>>>>>>>>> and then memory dumps can go to secondary
>> > >> storage or
>> > >> > > >>> > something
>> > >> > > >>> > > > else. This is
>> > >> > > >>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we
>> > will
>> > >> > > want to
>> > >> > > >>> > see
>> > >> > > >>> > > > how others are
>> > >> > > >>> > > > >>>>>>>>>> planning theirs.
>> > >> > > >>> > > > >>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
>> > >> > > >>> > > shadowsor@gmail.com
>> > >> > > >>> > > > >
>> > >> > > >>> > > > >>>>>>>>>> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd
>> > use a
>> > >> > vdi
>> > >> > > >>> > > > >>>>>>>>>>> style
>> > >> > > >>> > on
>> > >> > > >>> > > > an
>> > >> > > >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it
>> as a
>> > >> RAW
>> > >> > > >>> > > > >>>>>>>>>>> format.
>> > >> > > >>> > > > Otherwise you're
>> > >> > > >>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting
>> it,
>> > >> > > creating
>> > >> > > >>> > > > >>>>>>>>>>> a
>> > >> > > >>> > > > QCOW2 disk image,
>> > >> > > >>> > > > >>>>>>>>>>> and that seems unnecessary and a performance
>> > >> > killer.
>> > >> > > >>> > > > >>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a
>> > >> disk
>> > >> > to
>> > >> > > the
>> > >> > > >>> > VM,
>> > >> > > >>> > > > and
>> > >> > > >>> > > > >>>>>>>>>>> handling snapshots on the San side via the
>> > >> storage
>> > >> > > >>> > > > >>>>>>>>>>> plugin
>> > >> > > >>> > is
>> > >> > > >>> > > > best. My
>> > >> > > >>> > > > >>>>>>>>>>> impression from the storage plugin refactor
>> > was
>> > >> > that
>> > >> > > >>> > > > >>>>>>>>>>> there
>> > >> > > >>> > > was
>> > >> > > >>> > > > a snapshot
>> > >> > > >>> > > > >>>>>>>>>>> service that would allow the San to handle
>> > >> > snapshots.
>> > >> > > >>> > > > >>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
>> > >> > > >>> > > > shadowsor@gmail.com>
>> > >> > > >>> > > > >>>>>>>>>>> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by
>> > the
>> > >> SAN
>> > >> > > back
>> > >> > > >>> > end,
>> > >> > > >>> > > > if
>> > >> > > >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt
>> > server
>> > >> > > could
>> > >> > > >>> > > > >>>>>>>>>>>> call
>> > >> > > >>> > > > your plugin for
>> > >> > > >>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor
>> > >> > > agnostic. As
>> > >> > > >>> > far
>> > >> > > >>> > > > as space, that
>> > >> > > >>> > > > >>>>>>>>>>>> would depend on how your SAN handles it.
>> With
>> > >> > ours,
>> > >> > > we
>> > >> > > >>> > carve
>> > >> > > >>> > > > out luns from a
>> > >> > > >>> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the
>> > >> pool
>> > >> > > and is
>> > >> > > >>> > > > independent of the
>> > >> > > >>> > > > >>>>>>>>>>>> LUN size the host sees.
>> > >> > > >>> > > > >>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>> > >> > > >>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> Hey Marcus,
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type
>> for
>> > >> > libvirt
>> > >> > > >>> > > > >>>>>>>>>>>>> won't
>> > >> > > >>> > > > work
>> > >> > > >>> > > > >>>>>>>>>>>>> when you take into consideration
>> hypervisor
>> > >> > > snapshots?
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor
>> > >> > snapshot,
>> > >> > > the
>> > >> > > >>> > VDI
>> > >> > > >>> > > > for
>> > >> > > >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
>> > >> > > repository
>> > >> > > >>> > > > >>>>>>>>>>>>> as
>> > >> > > >>> > > the
>> > >> > > >>> > > > volume is on.
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's
>> say
>> > >> for
>> > >> > > >>> > > > >>>>>>>>>>>>> XenServer
>> > >> > > >>> > > and
>> > >> > > >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support
>> > >> hypervisor
>> > >> > > >>> > snapshots
>> > >> > > >>> > > > in 4.2) is I'd
>> > >> > > >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than
>> > what
>> > >> the
>> > >> > > user
>> > >> > > >>> > > > requested for the
>> > >> > > >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because
>> our
>> > >> SAN
>> > >> > > >>> > > > >>>>>>>>>>>>> thinly
>> > >> > > >>> > > > provisions volumes,
>> > >> > > >>> > > > >>>>>>>>>>>>> so the space is not actually used unless
>> it
>> > >> needs
>> > >> > > to
>> > >> > > >>> > > > >>>>>>>>>>>>> be).
>> > >> > > >>> > > > The CloudStack
>> > >> > > >>> > > > >>>>>>>>>>>>> volume would be the only "object" on the
>> SAN
>> > >> > volume
>> > >> > > >>> > until a
>> > >> > > >>> > > > hypervisor
>> > >> > > >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would
>> also
>> > >> > reside
>> > >> > > on
>> > >> > > >>> > > > >>>>>>>>>>>>> the
>> > >> > > >>> > > > SAN volume.
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there
>> is
>> > >> no
>> > >> > > >>> > > > >>>>>>>>>>>>> creation
>> > >> > > >>> > of
>> > >> > > >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt
>> > >> (which,
>> > >> > > even
>> > >> > > >>> > > > >>>>>>>>>>>>> if
>> > >> > > >>> > > > there were support
>> > >> > > >>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows
>> one
>> > >> LUN
>> > >> > per
>> > >> > > >>> > > > >>>>>>>>>>>>> iSCSI
>> > >> > > >>> > > > target), then I
>> > >> > > >>> > > > >>>>>>>>>>>>> don't see how using this model will work.
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the
>> > current
>> > >> way
>> > >> > > this
>> > >> > > >>> > > works
>> > >> > > >>> > > > >>>>>>>>>>>>> with DIR?
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> What do you think?
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> Thanks
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike
>> > >> Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for
>> > >> iSCSI
>> > >> > > access
>> > >> > > >>> > > today.
>> > >> > > >>> > > > >>>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too,
>> but I
>> > >> > might
>> > >> > > as
>> > >> > > >>> > well
>> > >> > > >>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI
>> > instead.
>> > >> > > >>> > > > >>>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus
>> > >> Sorensen
>> > >> > > >>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> To your question about
>> SharedMountPoint, I
>> > >> > > believe
>> > >> > > >>> > > > >>>>>>>>>>>>>>> it
>> > >> > > >>> > > just
>> > >> > > >>> > > > >>>>>>>>>>>>>>> acts like a
>> > >> > > >>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar
>> to
>> > >> > that.
>> > >> > > The
>> > >> > > >>> > > > end-user
>> > >> > > >>> > > > >>>>>>>>>>>>>>> is
>> > >> > > >>> > > > >>>>>>>>>>>>>>> responsible for mounting a file system
>> > that
>> > >> all
>> > >> > > KVM
>> > >> > > >>> > hosts
>> > >> > > >>> > > > can
>> > >> > > >>> > > > >>>>>>>>>>>>>>> access,
>> > >> > > >>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is
>> > >> > providing
>> > >> > > the
>> > >> > > >>> > > > storage.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> It could
>> > >> > > >>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other
>> clustered
>> > >> > > >>> > > > >>>>>>>>>>>>>>> filesystem,
>> > >> > > >>> > > > >>>>>>>>>>>>>>> cloudstack just
>> > >> > > >>> > > > >>>>>>>>>>>>>>> knows that the provided directory path
>> has
>> > >> VM
>> > >> > > >>> > > > >>>>>>>>>>>>>>> images.
>> > >> > > >>> > > > >>>>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
>> > >> > Sorensen
>> > >> > > >>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and
>> iSCSI
>> > >> all
>> > >> > at
>> > >> > > the
>> > >> > > >>> > same
>> > >> > > >>> > > > >>>>>>>>>>>>>>> > time.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >
>> > >> > > >>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
>> > >> > Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple
>> > storage
>> > >> > > pools:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Name                 State
>> >  Autostart
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> > -----------------------------------------
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> default              active     yes
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
>> > >> > > Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com>
>> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed
>> > >> out.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt)
>> > >> storage
>> > >> > > pool
>> > >> > > >>> > based
>> > >> > > >>> > > on
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would
>> > only
>> > >> > have
>> > >> > > one
>> > >> > > >>> > LUN,
>> > >> > > >>> > > > so
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> there would only
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage
>> volume
>> > in
>> > >> > the
>> > >> > > >>> > > (libvirt)
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pool.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and
>> > >> destroys
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> iSCSI
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a
>> problem
>> > >> that
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> libvirt
>> > >> > > >>> > > does
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> not support
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI
>> targets/LUNs.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a
>> > bit
>> > >> to
>> > >> > > see
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> if
>> > >> > > >>> > > > libvirt
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> supports
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
>> > >> > > mentioned,
>> > >> > > >>> > since
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> each one of its
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my
>> > >> iSCSI
>> > >> > > >>> > > > targets/LUNs).
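
For reference, a libvirt iSCSI pool maps one pool to one target, so with one LUN
per target each CloudStack volume would get its own pool definition roughly like
this (host, port, and IQN below are placeholders):

    <pool type='iscsi'>
      <name>cs-volume-1</name>
      <source>
        <host name='10.1.1.5' port='3260'/>
        <device path='iqn.2010-01.com.solidfire:volume-1'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>
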
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM,
>> Mike
>> > >> > > Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com>
>> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this
>> type:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"),
>> > NETFS("netfs"),
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         }
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         @Override
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         }
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>     }
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type
>> > is
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> currently
>> > >> > > >>> > > being
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were
>> > >> getting
>> > >> > at.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say,
>> 4.2),
>> > >> when
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> someone
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> selects the
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it
>> > >> with
>> > >> > > iSCSI,
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> is
>> > >> > > >>> > > > that
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Thanks!
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM,
>> > Marcus
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Sorensen
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> > >> > > >>> > > http://libvirt.org/storage.html#StorageBackendISCSI
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on
>> > the
>> > >> > iSCSI
>> > >> > > >>> > server,
>> > >> > > >>> > > > and
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> cannot be
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.",
>> > which
>> > >> I
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> believe
>> > >> > > >>> > > your
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the
>> work
>> > of
>> > >> > > logging
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in
>> > >> > > >>> > > and
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does
>> > >> that
>> > >> > > work
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in
>> > >> > > >>> > the
>> > >> > > >>> > > > Xen
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> stuff).
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether
>> > >> this
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> provides
>> > >> > > >>> > a
>> > >> > > >>> > > > 1:1
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1
>> > iscsi
>> > >> > > device
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> as
>> > >> > > >>> > a
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read
>> up a
>> > >> bit
>> > >> > > more
>> > >> > > >>> > about
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have
>> to
>> > >> write
>> > >> > > your
>> > >> > > >>> > own
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>> > >> > > >>> > >  We
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can cross that
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with
>> libvirt,
>> > >> see
>> > >> > the
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> > >> > > http://libvirt.org/sources/java/javadoc/
>> > >> > > Normally,
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then
>> > calls
>> > >> > made
>> > >> > > to
>> > >> > > >>> > that
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can look at the
>> > LibvirtStorageAdaptor
>> > >> to
>> > >> > > see
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> how
>> > >> > > >>> > > that
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> is done for
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write
>> > some
>> > >> > test
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java
>> > >> > > >>> > > code
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to see if you
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and
>> > >> register
>> > >> > > iscsi
>> > >> > > >>> > > storage
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pools before you
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> get started.
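
A minimal test along those lines might look like the following, using the libvirt
Java bindings (connection URI, host, and IQN are placeholders; error handling
omitted):

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class IscsiPoolTest {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");

            // One libvirt iscsi pool per target; host and IQN are placeholders.
            String xml =
                "<pool type='iscsi'>" +
                "  <name>cs-volume-1</name>" +
                "  <source>" +
                "    <host name='10.1.1.5' port='3260'/>" +
                "    <device path='iqn.2010-01.com.solidfire:volume-1'/>" +
                "  </source>" +
                "  <target><path>/dev/disk/by-path</path></target>" +
                "</pool>";

            // Create a transient pool, then list the LUNs libvirt discovered.
            StoragePool pool = conn.storagePoolCreateXML(xml, 0);
            pool.refresh(0);
            for (String vol : pool.listVolumes()) {
                System.out.println("found volume: " + vol);
            }

            pool.destroy();
            conn.close();
        }
    }
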
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM,
>> > Mike
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com>
>> > wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to
>> investigate
>> > >> > libvirt
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > more,
>> > >> > > >>> > > but
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > supports
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from
>> > >> iSCSI
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > targets,
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > right?
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM,
>> > >> Mike
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com>
>> > >> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through
>> > >> some of
>> > >> > > the
>> > >> > > >>> > > classes
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> last
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26
>> PM,
>> > >> > Marcus
>> > >> > > >>> > Sorensen
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will
>> > >> need
>> > >> > the
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be
>> > >> standard
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
>> > >> > > >>> > > for
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do
>> > the
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
>> > >> > > >>> > > > login.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>> > >> > > >>> > and
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
>> > >> > > Tutkowski"
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> <mike.tutkowski@solidfire.com
>> >
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during
>> the
>> > >> 4.2
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
>> > >> > > >>> > I
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for
>> > CloudStack.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by
>> the
>> > >> > > storage
>> > >> > > >>> > > > framework
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically
>> > >> create
>> > >> > and
>> > >> > > >>> > delete
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can
>> > >> > establish a
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
>> > >> > > >>> > > > mapping
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume
>> > for
>> > >> > QoS.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack
>> always
>> > >> > > expected
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> > >> > > >>> > > > admin
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and
>> those
>> > >> > > volumes
>> > >> > > >>> > would
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
>> > >> > > friendly).
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping
>> scheme
>> > >> > work,
>> > >> > > I
>> > >> > > >>> > needed
>> > >> > > >>> > > > to
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins
>> > so
>> > >> > they
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as
>> > >> needed.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this
>> > >> happen
>> > >> > > with
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with
>> how
>> > >> this
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
>> > >> > > >>> > > work
>> > >> > > >>> > > > on
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM
>> > >> know
>> > >> > > how I
>> > >> > > >>> > will
>> > >> > > >>> > > > need
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example,
>> > will I
>> > >> > > have to
>> > >> > > >>> > > expect
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and
>> > >> use it
>> > >> > > for
>> > >> > > >>> > this
>> > >> > > >>> > > to
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer,
>> > >> > SolidFire
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> e:
>> > mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world
>> > uses
>> > >> the
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> --
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer,
>> > >> SolidFire
>> > >> > > Inc.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> e:
>> mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world
>> uses
>> > >> the
>> > >> > > cloud™
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > --
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer,
>> > >> SolidFire
>> > >> > > Inc.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses
>> > the
>> > >> > > cloud™
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> --
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer,
>> > SolidFire
>> > >> > Inc.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses
>> the
>> > >> > cloud™
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> --
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer,
>> SolidFire
>> > >> Inc.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the
>> > >> cloud™
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> --
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer,
>> SolidFire
>> > >> Inc.
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
>> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the
>> > >> cloud™
>> > >> > > >>> > > > >>>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>> --
>> > >> > > >>> > > > >>>>>>>>>>>>>> Mike Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire
>> Inc.
>> > >> > > >>> > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>>>>>>>>>>>>> o: 303.746.7302
>> > >> > > >>> > > > >>>>>>>>>>>>>> Advancing the way the world uses the
>> cloud™
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>>>>> --
>> > >> > > >>> > > > >>>>>>>>>>>>> Mike Tutkowski
>> > >> > > >>> > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire
>> Inc.
>> > >> > > >>> > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>>>>>>>>>>>> o: 303.746.7302
>> > >> > > >>> > > > >>>>>>>>>>>>> Advancing the way the world uses the
>> cloud™
>> > >> > > >>> > > > >>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>>
>> > >> > > >>> > > > >>>>>>>>> --
>> > >> > > >>> > > > >>>>>>>>> Mike Tutkowski
>> > >> > > >>> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> > >> > > >>> > > > >>>>>>>>> e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>>>>>>>> o: 303.746.7302
>> > >> > > >>> > > > >>>>>>>>> Advancing the way the world uses the cloud™
>> > >> > > >>> > > > >>>>>>
>> > >> > > >>> > > > >>>>>>
>> > >> > > >>> > > > >>>>>>
>> > >> > > >>> > > > >>>>>>
>> > >> > > >>> > > > >>>>>> --
>> > >> > > >>> > > > >>>>>> Mike Tutkowski
>> > >> > > >>> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
>> > >> > > >>> > > > >>>>>> e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>>>>> o: 303.746.7302
>> > >> > > >>> > > > >>>>>> Advancing the way the world uses the cloud™
>> > >> > > >>> > > > >>>
>> > >> > > >>> > > > >>>
>> > >> > > >>> > > > >>>
>> > >> > > >>> > > > >>>
>> > >> > > >>> > > > >>> --
>> > >> > > >>> > > > >>> Mike Tutkowski
>> > >> > > >>> > > > >>> Senior CloudStack Developer, SolidFire Inc.
>> > >> > > >>> > > > >>> e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > > >>> o: 303.746.7302
>> > >> > > >>> > > > >>> Advancing the way the world uses the cloud™
>> > >> > > >>> > > > >
>> > >> > > >>> > > > >
>> > >> > > >>> > > > >
>> > >> > > >>> > > > >
>> > >> > > >>> > > > > --
>> > >> > > >>> > > > > Mike Tutkowski
>> > >> > > >>> > > > > Senior CloudStack Developer, SolidFire Inc.
>> > >> > > >>> > > > > e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > > > o: 303.746.7302
>> > >> > > >>> > > > > Advancing the way the world uses the cloud™
>> > >> > > >>> > > >
>> > >> > > >>> > >
>> > >> > > >>> > >
>> > >> > > >>> > >
>> > >> > > >>> > > --
>> > >> > > >>> > > *Mike Tutkowski*
>> > >> > > >>> > > *Senior CloudStack Developer, SolidFire Inc.*
>> > >> > > >>> > > e: mike.tutkowski@solidfire.com
>> > >> > > >>> > > o: 303.746.7302
>> > >> > > >>> > > Advancing the way the world uses the
>> > >> > > >>> > > cloud<http://solidfire.com/solution/overview/?video=play>
>> > >> > > >>> > > *™*
>> > >> > > >>> > >
>> > >> > > >>> >
>> > >> > > >>>
>> > >> > > >>>
>> > >> > > >>>
>> > >> > > >>> --
>> > >> > > >>> *Mike Tutkowski*
>> > >> > > >>> *Senior CloudStack Developer, SolidFire Inc.*
>> > >> > > >>> e: mike.tutkowski@solidfire.com
>> > >> > > >>> o: 303.746.7302
>> > >> > > >>> Advancing the way the world uses the
>> > >> > > >>> cloud<http://solidfire.com/solution/overview/?video=play>
>> > >> > > >>> *™*
>> > >> > >
>> > >> >
>> > >> >
>> > >> >
>> > >> > --
>> > >> > *Mike Tutkowski*
>> > >> > *Senior CloudStack Developer, SolidFire Inc.*
>> > >> > e: mike.tutkowski@solidfire.com
>> > >> > o: 303.746.7302
>> > >> > Advancing the way the world uses the
>> > >> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > >> > *™*
>> > >> >
>> > >>
>> > >
>> > >
>> > >
>> > > --
>> > > *Mike Tutkowski*
>> > > *Senior CloudStack Developer, SolidFire Inc.*
>> > > e: mike.tutkowski@solidfire.com
>> > > o: 303.746.7302
>> > > Advancing the way the world uses the cloud<
>> > http://solidfire.com/solution/overview/?video=play>
>> > > *™*
>> > >
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>> >
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
At present, I am not aware of any other storage vendor with truly
guaranteed QoS.

Most implement QoS in a relative sense (like thread priorities).


On Wed, Sep 18, 2013 at 7:57 AM, Marcus Sorensen <sh...@gmail.com>wrote:

> Yeah, that's why I thought it was specific to your implementation. Perhaps
> that's true, then?
> On Sep 18, 2013 12:04 AM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > I totally get where you're coming from with the tiered-pool approach,
> > though.
> >
> > Prior to SolidFire, I worked at HP and the product I worked on allowed a
> > single, clustered SAN to host multiple pools of storage. One pool might
> be
> > made up of all-SSD storage nodes while another pool might be made up of
> > slower HDDs.
> >
> > That kind of tiering is not what SolidFire QoS is about, though, as that
> > kind of tiering does not guarantee QoS.
> >
> > In the SolidFire SAN, QoS was designed in from the beginning and is
> > extremely granular. Each volume has its own performance and capacity. You
> > do not have to worry about Noisy Neighbors.
> >
> > The idea is to encourage businesses to trust the cloud with their most
> > critical business applications at a price point on par with traditional
> > SANs.
> >
> >
> > On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski <
> > mike.tutkowski@solidfire.com> wrote:
> >
> > > Ah, I think I see the miscommunication.
> > >
> > > I should have gone into a bit more detail about the SolidFire SAN.
> > >
> > > It is built from the ground up to support QoS on a LUN-by-LUN basis.
> > Every
> > > LUN is assigned a Min, Max, and Burst number of IOPS.
> > >
> > > The Min IOPS are a guaranteed number (as long as the SAN itself is not
> > > over provisioned). Capacity and IOPS are provisioned independently.
> > > Multiple volumes and multiple tenants using the same SAN do not suffer
> > from
> > > the Noisy Neighbor effect.
> > >
> > > When you create a Disk Offering in CS that is storage tagged to use
> > > SolidFire primary storage, you specify a Min, Max, and Burst number of
> > IOPS
> > > to provision from the SAN for volumes created from that Disk Offering.
> > >
> > > There is no notion of RAID groups that you see in more traditional
> SANs.
> > > The SAN is built from clusters of storage nodes and data is replicated
> > > amongst all SSDs in all storage nodes (this is an SSD-only SAN) in the
> > > cluster to avoid hot spots and protect the data should drives and/or
> > > nodes fail. You then scale the SAN by adding new storage nodes.
> > >
> > > Data is compressed and de-duplicated inline across the cluster and all
> > > volumes are thinly provisioned.
> > >
> > >
> > > On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> > >
> > >> I'm surprised there's no mention of pool on the SAN in your
> description
> > of
> > >> the framework. I had assumed this was specific to your implementation,
> > >> because normally SANs host multiple disk pools, maybe multiple RAID
> 50s
> > >> and
> > >> 10s, or however the SAN admin wants to split it up. Maybe a pool
> > intended
> > >> for root disks and a separate one for data disks. Or one pool for
> > >> cloudstack and one dedicated to some other internal db application.
> But
> > it
> > >> sounds as though there's no place to specify which disks or pool on
> the
> > >> SAN
> > >> to use.
> > >>
> > >> We implemented our own internal storage SAN plugin based on 4.1. We
> used
> > >> the 'path' attribute of the primary storage pool object to specify
> which
> > >> pool name on the back end SAN to use, so we could create all-ssd pools
> > and
> > >> slower spindle pools, then differentiate between them based on storage
> > >> tags. Normally the path attribute would be the mount point for NFS,
> but
> > >> its
> > >> just a string. So when registering ours we enter the SAN DNS host name,
> the
> > >> SAN's REST API port, and the pool name. Then LUNs created from that
> > >> primary
> > >> storage come from the matching disk pool on the SAN. We can create and
> > >> register multiple pools of different types and purposes on the same
> SAN.
> > >> We
> > >> haven't yet gotten to porting it to the 4.2 framework, so it will be
> > >> interesting to see what we can come up with to make it work similarly.
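
For illustration, if the path field carried something like
"san.mycompany.com:443:ssd-pool" (an invented convention, not a CloudStack
standard), the plugin side could unpack it again when talking to the SAN:

    // Invented convention for illustration: "host:apiPort:poolName" packed into 'path'.
    public class SanPath {
        final String host;
        final int apiPort;
        final String poolName;

        SanPath(String path) {
            String[] parts = path.split(":");
            if (parts.length != 3) {
                throw new IllegalArgumentException("expected host:port:pool, got " + path);
            }
            host = parts[0];
            apiPort = Integer.parseInt(parts[1]);
            poolName = parts[2];
        }

        public static void main(String[] args) {
            SanPath p = new SanPath("san.mycompany.com:443:ssd-pool");
            System.out.println(p.host + " / " + p.apiPort + " / " + p.poolName);
        }
    }
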
> > >>  On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <
> > mike.tutkowski@solidfire.com
> > >> >
> > >> wrote:
> > >>
> > >> > What you're saying here is definitely something we should talk
> about.
> > >> >
> > >> > Hopefully my previous e-mail has clarified how this works a bit.
> > >> >
> > >> > It mainly comes down to this:
> > >> >
> > >> > For the first time in CS history, primary storage is no longer
> > required
> > >> to
> > >> > be preallocated by the admin and then handed to CS. CS volumes don't
> > >> have
> > >> > to share a preallocated volume anymore.
> > >> >
> > >> > As of 4.2, primary storage can be based on a SAN (or some other
> > storage
> > >> > device). You can tell CS how many bytes and IOPS to use from this
> > >> storage
> > >> > device and CS invokes the appropriate plug-in to carve out LUNs
> > >> > dynamically.
> > >> >
> > >> > Each LUN is home to one and only one data disk. Data disks - in this
> > >> model
> > >> > - never share a LUN.
> > >> >
> > >> > The main use case for this is so a CS volume can deliver guaranteed
> > >> IOPS if
> > >> > the storage device (ex. SolidFire SAN) delivers guaranteed IOPS on a
> > >> > LUN-by-LUN basis.
> > >> >
> > >> >
> > >> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <
> > shadowsor@gmail.com
> > >> > >wrote:
> > >> >
> > >> > > I guess whether or not a solidfire device is capable of hosting
> > >> > > multiple disk pools is irrelevant, we'd hope that we could get the
> > >> > > stats (maybe 30TB availabie, and 15TB allocated in LUNs). But if
> > these
> > >> > > stats aren't collected, I can't as an admin define multiple pools
> > and
> > >> > > expect cloudstack to allocate evenly from them or fill one up and
> > move
> > >> > > to the next, because it doesn't know how big it is.
> > >> > >
> > >> > > Ultimately this discussion has nothing to do with the KVM stuff
> > >> > > itself, just a tangent, but something to think about.
> > >> > >
> > >> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <
> > >> shadowsor@gmail.com>
> > >> > > wrote:
> > >> > > > Ok, on most storage pools it shows how many GB free/used when
> > >> listing
> > >> > > > the pool both via API and in the UI. I'm guessing those are
> empty
> > >> then
> > >> > > > for the solid fire storage, but it seems like the user should
> have
> > >> to
> > >> > > > define some sort of pool that the luns get carved out of, and
> you
> > >> > > > should be able to get the stats for that, right? Or is a solid
> > fire
> > >> > > > appliance only one pool per appliance? This isn't about billing,
> > but
> > >> > > > just so cloudstack itself knows whether or not there is space
> left
> > >> on
> > >> > > > the storage device, so cloudstack can go on allocating from a
> > >> > > > different primary storage as this one fills up. There are also
> > >> > > > notifications and things. It seems like there should be a call
> you
> > >> can
> > >> > > > handle for this, maybe Edison knows.
> > >> > > >
> > >> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <
> > >> shadowsor@gmail.com>
> > >> > > wrote:
> > >> > > >> You respond to more than attach and detach, right? Don't you
> > create
> > >> > > luns as
> > >> > > >> well? Or are you just referring to the hypervisor stuff?
> > >> > > >>
> > >> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
> > >> > mike.tutkowski@solidfire.com
> > >> > > >
> > >> > > >> wrote:
> > >> > > >>>
> > >> > > >>> Hi Marcus,
> > >> > > >>>
> > >> > > >>> I never need to respond to a CreateStoragePool call for either
> > >> > > XenServer
> > >> > > >>> or
> > >> > > >>> VMware.
> > >> > > >>>
> > >> > > >>> What happens is I respond only to the Attach- and
> Detach-volume
> > >> > > commands.
> > >> > > >>>
> > >> > > >>> Let's say an attach comes in:
> > >> > > >>>
> > >> > > >>> In this case, I check to see if the storage is "managed."
> > Talking
> > >> > > >>> XenServer
> > >> > > >>> here, if it is, I log in to the LUN that is the disk we want
> to
> > >> > attach.
> > >> > > >>> After, if this is the first time attaching this disk, I create
> > an
> > >> SR
> > >> > > and a
> > >> > > >>> VDI within the SR. If it is not the first time attaching this
> > >> disk,
> > >> > the
> > >> > > >>> LUN
> > >> > > >>> already has the SR and VDI on it.
> > >> > > >>>
> > >> > > >>> Once this is done, I let the normal "attach" logic run because
> > >> this
> > >> > > logic
> > >> > > >>> expected an SR and a VDI and now it has it.
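> > >> > > >>>
> > >> > > >>> In rough pseudocode (a simplification, and the helper names here are
> > >> > > >>> made up rather than the actual plug-in code):
> > >> > > >>>
> > >> > > >>>     // attach handling for "managed" storage on the XenServer side
> > >> > > >>>     if (isManagedStorage) {
> > >> > > >>>         loginToIscsiTarget(storageHost, targetIqn);      // discover + log in
> > >> > > >>>         SR sr = findSrForLun(targetIqn);                 // re-attach case
> > >> > > >>>         if (sr == null) {
> > >> > > >>>             sr = createSrOnLun(targetIqn);               // first attach
> > >> > > >>>             createVdiInSr(sr, requestedVolumeSize);      // one VDI per LUN
> > >> > > >>>         }
> > >> > > >>>     }
> > >> > > >>>     // ...then fall through to the normal attach logic, which expects
> > >> > > >>>     // the SR and VDI to exist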
> > >> > > >>>
> > >> > > >>> It's the same thing for VMware: Just substitute datastore for
> SR
> > >> and
> > >> > > VMDK
> > >> > > >>> for VDI.
> > >> > > >>>
> > >> > > >>> Does that make sense?
> > >> > > >>>
> > >> > > >>> Thanks!
> > >> > > >>>
> > >> > > >>>
> > >> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
> > >> > > >>> <sh...@gmail.com>wrote:
> > >> > > >>>
> > >> > > >>> > What do you do with Xen? I imagine the user enters the SAN details
> > >> > > >>> > when registering the pool? And the pool details are basically just
> > >> > > >>> > instructions on how to log into a target, correct?
> > >> > > >>> >
> > >> > > >>> > You can choose to log in a KVM host to the target during
> > >> > > >>> > createStoragePool
> > >> > > >>> > and save the pool in a map, or just save the pool info in a
> > map
> > >> for
> > >> > > >>> > future
> > >> > > >>> > reference by uuid, for when you do need to log in. The
> > >> > > createStoragePool
> > >> > > >>> > then just becomes a way to save the pool info to the agent.
> > >> > > Personally,
> > >> > > >>> > I'd
> > >> > > >>> > log in on the pool create and look/scan for specific luns
> when
> > >> > > they're
> > >> > > >>> > needed, but I haven't thought it through thoroughly. I just
> > say
> > >> > that
> > >> > > >>> > mainly
> > >> > > >>> > because login only happens once, the first time the pool is
> > >> used,
> > >> > and
> > >> > > >>> > every
> > >> > > >>> > other storage command is about discovering new luns or maybe
> > >> > > >>> > deleting/disconnecting luns no longer needed. On the other
> > hand,
> > >> > you
> > >> > > >>> > could
> > >> > > >>> > do all of the above: log in on pool create, then also check
> if
> > >> > you're
> > >> > > >>> > logged in on other commands and log in if you've lost
> > >> connection.
> > >> > > >>> >
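> > >> > > >>> > Something along these lines, just to sketch the idea (simplified
> > >> > > >>> > signatures, not the real StorageAdaptor interface):
> > >> > > >>> >
> > >> > > >>> >     import java.util.HashMap;
> > >> > > >>> >     import java.util.Map;
> > >> > > >>> >
> > >> > > >>> >     // Remember which pools the agent has already seen, since
> > >> > > >>> >     // createStoragePool can be called repeatedly for the same pool.
> > >> > > >>> >     public class SanStorageAdaptorSketch {
> > >> > > >>> >         private final Map<String, String> poolsByUuid =
> > >> > > >>> >                 new HashMap<String, String>();
> > >> > > >>> >
> > >> > > >>> >         public void createStoragePool(String uuid, String host, int port) {
> > >> > > >>> >             if (!poolsByUuid.containsKey(uuid)) {
> > >> > > >>> >                 // optionally log the host in to the target here, or
> > >> > > >>> >                 // defer the login until a LUN is actually needed
> > >> > > >>> >                 poolsByUuid.put(uuid, host + ":" + port);
> > >> > > >>> >             }
> > >> > > >>> >         }
> > >> > > >>> >     }
> > >> > > >>> >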
> > >> > > >>> > With Xen, what does your registered pool show in the UI
> for
> > >> > > avail/used
> > >> > > >>> > capacity, and how does it get that info? I assume there is
> > some
> > >> > sort
> > >> > > of
> > >> > > >>> > disk pool that the luns are carved from, and that your
> plugin
> > is
> > >> > > called
> > >> > > >>> > to
> > >> > > >>> > talk to the SAN and expose to the user how much of that pool
> > has
> > >> > been
> > >> > > >>> > allocated. Knowing how you already solve these problems
> with
> > >> Xen
> > >> > > will
> > >> > > >>> > help
> > >> > > >>> > figure out what to do with KVM.
> > >> > > >>> >
> > >> > > >>> > If this is the case, I think the plugin can continue to
> handle
> > >> it
> > >> > > rather
> > >> > > >>> > than getting details from the agent. I'm not sure if that
> > means
> > >> > nulls
> > >> > > >>> > are
> > >> > > >>> > OK for these on the agent side or what, I need to look at
> the
> > >> > storage
> > >> > > >>> > plugin arch more closely.
> > >> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
> > >> > > mike.tutkowski@solidfire.com>
> > >> > > >>> > wrote:
> > >> > > >>> >
> > >> > > >>> > > Hey Marcus,
> > >> > > >>> > >
> > >> > > >>> > > I'm reviewing your e-mails as I implement the necessary
> > >> methods
> > >> > in
> > >> > > new
> > >> > > >>> > > classes.
> > >> > > >>> > >
> > >> > > >>> > > "So, referencing StorageAdaptor.java, createStoragePool
> > >> accepts
> > >> > > all of
> > >> > > >>> > > the pool data (host, port, name, path) which would be used
> > to
> > >> log
> > >> > > the
> > >> > > >>> > > host into the initiator."
> > >> > > >>> > >
> > >> > > >>> > > Can you tell me, in my case, since a storage pool (primary
> > >> > > storage) is
> > >> > > >>> > > actually the SAN, I wouldn't really be logging into
> anything
> > >> at
> > >> > > this
> > >> > > >>> > point,
> > >> > > >>> > > correct?
> > >> > > >>> > >
> > >> > > >>> > > Also, what kind of capacity, available, and used bytes
> make
> > >> sense
> > >> > > to
> > >> > > >>> > report
> > >> > > >>> > > for KVMStoragePool (since KVMStoragePool represents the
> SAN
> > >> in my
> > >> > > case
> > >> > > >>> > and
> > >> > > >>> > > not an individual LUN)?
> > >> > > >>> > >
> > >> > > >>> > > Thanks!
> > >> > > >>> > >
> > >> > > >>> > >
> > >> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> > >> > > shadowsor@gmail.com
> > >> > > >>> > > >wrote:
> > >> > > >>> > >
> > >> > > >>> > > > Ok, KVM will be close to that, of course, because only
> the
> > >> > > >>> > > > hypervisor
> > >> > > >>> > > > classes differ, the rest is all mgmt server. Creating a
> > >> volume
> > >> > is
> > >> > > >>> > > > just
> > >> > > >>> > > > a db entry until it's deployed for the first time.
> > >> > > >>> > > > AttachVolumeCommand
> > >> > > >>> > > > on the agent side (LibvirtStorageAdaptor.java is
> analogous
> > >> to
> > >> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm commands
> > (via
> > >> a
> > >> > KVM
> > >> > > >>> > > > StorageAdaptor) to log in the host to the target and
> then
> > >> you
> > >> > > have a
> > >> > > >>> > > > block device.  Maybe libvirt will do that for you, but
> my
> > >> quick
> > >> > > read
> > >> > > >>> > > > made it sound like the iscsi libvirt pool type is
> > actually a
> > >> > > pool,
> > >> > > >>> > > > not
> > >> > > >>> > > > a lun or volume, so you'll need to figure out if that
> > works
> > >> or
> > >> > if
> > >> > > >>> > > > you'll have to use iscsiadm commands.
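> > >> > > >>> > > >
> > >> > > >>> > > > For reference, the login itself boils down to iscsiadm calls like
> > >> > > >>> > > > these (the IP and IQN below are made up), which a custom adaptor
> > >> > > >>> > > > could shell out, e.g. via ProcessBuilder (exception handling omitted):
> > >> > > >>> > > >
> > >> > > >>> > > >     // discover targets on the portal, then log in to the one we want:
> > >> > > >>> > > >     //   iscsiadm -m discovery -t sendtargets -p 10.1.1.5:3260
> > >> > > >>> > > >     //   iscsiadm -m node -T iqn.2013-09.com.example:vol1 -p 10.1.1.5:3260 --login
> > >> > > >>> > > >     Process login = new ProcessBuilder("iscsiadm", "-m", "node",
> > >> > > >>> > > >             "-T", "iqn.2013-09.com.example:vol1",
> > >> > > >>> > > >             "-p", "10.1.1.5:3260", "--login").inheritIO().start();
> > >> > > >>> > > >     login.waitFor();
> > >> > > >>> > > >     // afterwards the LUN shows up as a block device, e.g.
> > >> > > >>> > > >     // /dev/disk/by-path/ip-10.1.1.5:3260-iscsi-iqn.2013-09.com.example:vol1-lun-0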
> > >> > > >>> > > >
> > >> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor
> (because
> > >> > Libvirt
> > >> > > >>> > > > doesn't really manage your pool the way you want),
> you're
> > >> going
> > >> > > to
> > >> > > >>> > > > have to create a version of KVMStoragePool class and a
> > >> > > >>> > > > StorageAdaptor
> > >> > > >>> > > > class (see LibvirtStoragePool.java and
> > >> > > LibvirtStorageAdaptor.java),
> > >> > > >>> > > > implementing all of the methods, then in
> > >> KVMStorageManager.java
> > >> > > >>> > > > there's a "_storageMapper" map. This is used to select
> the
> > >> > > correct
> > >> > > >>> > > > adaptor, you can see in this file that every call first
> > >> pulls
> > >> > the
> > >> > > >>> > > > correct adaptor out of this map via getStorageAdaptor.
> So
> > >> you
> > >> > can
> > >> > > >>> > > > see
> > >> > > >>> > > > a comment in this file that says "add other storage
> > adaptors
> > >> > > here",
> > >> > > >>> > > > where it puts to this map, this is where you'd register
> > your
> > >> > > >>> > > > adaptor.
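> > >> > > >>> > > >
> > >> > > >>> > > > That is, roughly (illustrative names only; SolidFireStorageAdaptor is
> > >> > > >>> > > > hypothetical, and the actual key type of the map should be checked in
> > >> > > >>> > > > the code):
> > >> > > >>> > > >
> > >> > > >>> > > >     // next to the existing "add other storage adaptors here" comment
> > >> > > >>> > > >     _storageMapper.put("solidfire", new SolidFireStorageAdaptor());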
> > >> > > >>> > > >
> > >> > > >>> > > > So, referencing StorageAdaptor.java, createStoragePool
> > >> accepts
> > >> > > all
> > >> > > >>> > > > of
> > >> > > >>> > > > the pool data (host, port, name, path) which would be
> used
> > >> to
> > >> > log
> > >> > > >>> > > > the
> > >> > > >>> > > > host into the initiator. I *believe* the method
> > >> getPhysicalDisk
> > >> > > will
> > >> > > >>> > > > need to do the work of attaching the lun.
> > >>  AttachVolumeCommand
> > >> > > calls
> > >> > > >>> > > > this and then creates the XML diskdef and attaches it to
> > the
> > >> > VM.
> > >> > > >>> > > > Now,
> > >> > > >>> > > > one thing you need to know is that createStoragePool is
> > >> called
> > >> > > >>> > > > often,
> > >> > > >>> > > > sometimes just to make sure the pool is there. You may
> > want
> > >> to
> > >> > > >>> > > > create
> > >> > > >>> > > > a map in your adaptor class and keep track of pools that
> > >> have
> > >> > > been
> > >> > > >>> > > > created, LibvirtStorageAdaptor doesn't have to do this
> > >> because
> > >> > it
> > >> > > >>> > > > asks
> > >> > > >>> > > > libvirt about which storage pools exist. There are also
> > >> calls
> > >> > to
> > >> > > >>> > > > refresh the pool stats, and all of the other calls can
> be
> > >> seen
> > >> > in
> > >> > > >>> > > > the
> > >> > > >>> > > > StorageAdaptor as well. There's a createPhysical disk,
> > >> clone,
> > >> > > etc,
> > >> > > >>> > > > but
> > >> > > >>> > > > it's probably a hold-over from 4.1, as I have the vague
> > idea
> > >> > that
> > >> > > >>> > > > volumes are created on the mgmt server via the plugin
> now,
> > >> so
> > >> > > >>> > > > whatever
> > >> > > >>> > > > doesn't apply can just be stubbed out (or optionally
> > >> > > >>> > > > extended/reimplemented here, if you don't mind the hosts
> > >> > talking
> > >> > > to
> > >> > > >>> > > > the san api).
> > >> > > >>> > > >
> > >> > > >>> > > > There is a difference between attaching new volumes and
> > >> > > launching a
> > >> > > >>> > > > VM
> > >> > > >>> > > > with existing volumes.  In the latter case, the VM
> > >> definition
> > >> > > that
> > >> > > >>> > > > was
> > >> > > >>> > > > passed to the KVM agent includes the disks
> > (StartCommand).
> > >> > > >>> > > >
> > >> > > >>> > > > I'd be interested in how your pool is defined for Xen, I
> > >> > imagine
> > >> > > it
> > >> > > >>> > > > would need to be kept the same. Is it just a definition
> to
> > >> the
> > >> > > SAN
> > >> > > >>> > > > (ip address or some such, port number) and perhaps a
> > volume
> > >> > pool
> > >> > > >>> > > > name?
> > >> > > >>> > > >
> > >> > > >>> > > > > If there is a way for me to update the ACL list on the
> > >> SAN to
> > >> > > have
> > >> > > >>> > > only a
> > >> > > >>> > > > > single KVM host have access to the volume, that would
> be
> > >> > ideal.
> > >> > > >>> > > >
> > >> > > >>> > > > That depends on your SAN API.  I was under the
> impression
> > >> that
> > >> > > the
> > >> > > >>> > > > storage plugin framework allowed for acls, or for you to
> > do
> > >> > > whatever
> > >> > > >>> > > > you want for create/attach/delete/snapshot, etc. You'd
> > just
> > >> > call
> > >> > > >>> > > > your
> > >> > > >>> > > > SAN API with the host info for the ACLs prior to when
> the
> > >> disk
> > >> > is
> > >> > > >>> > > > attached (or the VM is started).  I'd have to look more
> at
> > >> the
> > >> > > >>> > > > framework to know the details, in 4.1 I would do this in
> > >> > > >>> > > > getPhysicalDisk just prior to connecting up the LUN.
> > >> > > >>> > > >
> > >> > > >>> > > >
> > >> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > >> > > >>> > > > <mi...@solidfire.com> wrote:
> > >> > > >>> > > > > OK, yeah, the ACL part will be interesting. That is a
> > bit
> > >> > > >>> > > > > different
> > >> > > >>> > > from
> > >> > > >>> > > > how
> > >> > > >>> > > > > it works with XenServer and VMware.
> > >> > > >>> > > > >
> > >> > > >>> > > > > Just to give you an idea how it works in 4.2 with
> > >> XenServer:
> > >> > > >>> > > > >
> > >> > > >>> > > > > * The user creates a CS volume (this is just recorded
> in
> > >> the
> > >> > > >>> > > > cloud.volumes
> > >> > > >>> > > > > table).
> > >> > > >>> > > > >
> > >> > > >>> > > > > * The user attaches the volume as a disk to a VM for
> the
> > >> > first
> > >> > > >>> > > > > time
> > >> > > >>> > (if
> > >> > > >>> > > > the
> > >> > > >>> > > > > storage allocator picks the SolidFire plug-in, the
> > storage
> > >> > > >>> > > > > framework
> > >> > > >>> > > > invokes
> > >> > > >>> > > > > a method on the plug-in that creates a volume on the
> > >> > SAN...info
> > >> > > >>> > > > > like
> > >> > > >>> > > the
> > >> > > >>> > > > IQN
> > >> > > >>> > > > > of the SAN volume is recorded in the DB).
> > >> > > >>> > > > >
> > >> > > >>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
> > >> > > executed.
> > >> > > >>> > > > > It
> > >> > > >>> > > > > determines based on a flag passed in that the storage
> in
> > >> > > question
> > >> > > >>> > > > > is
> > >> > > >>> > > > > "CloudStack-managed" storage (as opposed to
> > "traditional"
> > >> > > >>> > preallocated
> > >> > > >>> > > > > storage). This tells it to discover the iSCSI target.
> > Once
> > >> > > >>> > > > > discovered
> > >> > > >>> > > it
> > >> > > >>> > > > > determines if the iSCSI target already contains a
> > storage
> > >> > > >>> > > > > repository
> > >> > > >>> > > (it
> > >> > > >>> > > > > would if this were a re-attach situation). If it does
> > >> contain
> > >> > > an
> > >> > > >>> > > > > SR
> > >> > > >>> > > > already,
> > >> > > >>> > > > > then there should already be one VDI, as well. If
> there
> > >> is no
> > >> > > SR,
> > >> > > >>> > > > > an
> > >> > > >>> > SR
> > >> > > >>> > > > is
> > >> > > >>> > > > > created and a single VDI is created within it (that
> > takes
> > >> up
> > >> > > about
> > >> > > >>> > > > > as
> > >> > > >>> > > > much
> > >> > > >>> > > > > space as was requested for the CloudStack volume).
> > >> > > >>> > > > >
> > >> > > >>> > > > > * The normal attach-volume logic continues (it depends
> > on
> > >> the
> > >> > > >>> > existence
> > >> > > >>> > > > of
> > >> > > >>> > > > > an SR and a VDI).
> > >> > > >>> > > > >
> > >> > > >>> > > > > The VMware case is essentially the same (mainly just
> > >> > substitute
> > >> > > >>> > > datastore
> > >> > > >>> > > > > for SR and VMDK for VDI).
> > >> > > >>> > > > >
> > >> > > >>> > > > > In both cases, all hosts in the cluster have
> discovered
> > >> the
> > >> > > iSCSI
> > >> > > >>> > > target,
> > >> > > >>> > > > > but only the host that is currently running the VM
> that
> > is
> > >> > > using
> > >> > > >>> > > > > the
> > >> > > >>> > > VDI
> > >> > > >>> > > > (or
> > >> > > >>> > > > > VMKD) is actually using the disk.
> > >> > > >>> > > > >
> > >> > > >>> > > > > Live Migration should be OK because the hypervisors
> > >> > communicate
> > >> > > >>> > > > > with
> > >> > > >>> > > > > whatever metadata they have on the SR (or datastore).
> > >> > > >>> > > > >
> > >> > > >>> > > > > I see what you're saying with KVM, though.
> > >> > > >>> > > > >
> > >> > > >>> > > > > In that case, the hosts are clustered only in
> > CloudStack's
> > >> > > eyes.
> > >> > > >>> > > > > CS
> > >> > > >>> > > > controls
> > >> > > >>> > > > > Live Migration. You don't really need a clustered
> > >> filesystem
> > >> > on
> > >> > > >>> > > > > the
> > >> > > >>> > > LUN.
> > >> > > >>> > > > The
> > >> > > >>> > > > > LUN could be handed over raw to the VM using it.
> > >> > > >>> > > > >
> > >> > > >>> > > > > If there is a way for me to update the ACL list on the
> > >> SAN to
> > >> > > have
> > >> > > >>> > > only a
> > >> > > >>> > > > > single KVM host have access to the volume, that would
> be
> > >> > ideal.
> > >> > > >>> > > > >
> > >> > > >>> > > > > Also, I agree I'll need to use iscsiadm to discover
> and
> > >> log
> > >> > in
> > >> > > to
> > >> > > >>> > > > > the
> > >> > > >>> > > > iSCSI
> > >> > > >>> > > > > target. I'll also need to take the resultant new
> device
> > >> and
> > >> > > pass
> > >> > > >>> > > > > it
> > >> > > >>> > > into
> > >> > > >>> > > > the
> > >> > > >>> > > > > VM.
> > >> > > >>> > > > >
> > >> > > >>> > > > > Does this sound reasonable? Please call me out on
> > >> anything I
> > >> > > seem
> > >> > > >>> > > > incorrect
> > >> > > >>> > > > > about. :)
> > >> > > >>> > > > >
> > >> > > >>> > > > > Thanks for all the thought on this, Marcus!
> > >> > > >>> > > > >
> > >> > > >>> > > > >
> > >> > > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> > >> > > >>> > shadowsor@gmail.com>
> > >> > > >>> > > > > wrote:
> > >> > > >>> > > > >>
> > >> > > >>> > > > >> Perfect. You'll have a domain def ( the VM), a disk
> > def,
> > >> and
> > >> > > the
> > >> > > >>> > > attach
> > >> > > >>> > > > >> the disk def to the vm. You may need to do your own
> > >> > > >>> > > > >> StorageAdaptor
> > >> > > >>> > and
> > >> > > >>> > > > run
> > >> > > >>> > > > >> iscsiadm commands to accomplish that, depending on
> how
> > >> the
> > >> > > >>> > > > >> libvirt
> > >> > > >>> > > iscsi
> > >> > > >>> > > > >> works. My impression is that a 1:1:1 pool/lun/volume
> > >> isn't
> > >> > > how it
> > >> > > >>> > > works
> > >> > > >>> > > > on
> > >> > > > >> xen at the moment, nor is it ideal.
> > >> > > >>> > > > >>
> > >> > > >>> > > > >> Your plugin will handle acls as far as which host can
> > see
> > >> > > which
> > >> > > >>> > > > >> luns
> > >> > > >>> > > as
> > >> > > >>> > > > >> well, I remember discussing that months ago, so that
> a
> > >> disk
> > >> > > won't
> > >> > > >>> > > > >> be
> > >> > > >>> > > > >> connected until the hypervisor has exclusive access,
> so
> > >> it
> > >> > > will
> > >> > > >>> > > > >> be
> > >> > > >>> > > safe
> > >> > > >>> > > > and
> > >> > > >>> > > > >> fence the disk from rogue nodes that cloudstack loses
> > >> > > >>> > > > >> connectivity
> > >> > > >>> > > > with. It
> > >> > > >>> > > > >> should revoke access to everything but the target
> > host...
> > >> > > Except
> > >> > > >>> > > > >> for
> > >> > > >>> > > > during
> > >> > > >>> > > > >> migration but we can discuss that later, there's a
> > >> migration
> > >> > > prep
> > >> > > >>> > > > process
> > >> > > >>> > > > >> where the new host can be added to the acls, and the
> > old
> > >> > host
> > >> > > can
> > >> > > >>> > > > >> be
> > >> > > >>> > > > removed
> > >> > > >>> > > > >> post migration.
> > >> > > >>> > > > >>
> > >> > > >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > >> > > >>> > > mike.tutkowski@solidfire.com
> > >> > > >>> > > > >
> > >> > > >>> > > > >> wrote:
> > >> > > >>> > > > >>>
> > >> > > >>> > > > >>> Yeah, that would be ideal.
> > >> > > >>> > > > >>>
> > >> > > >>> > > > >>> So, I would still need to discover the iSCSI target,
> > >> log in
> > >> > > to
> > >> > > >>> > > > >>> it,
> > >> > > >>> > > then
> > >> > > >>> > > > >>> figure out what /dev/sdX was created as a result
> (and
> > >> leave
> > >> > > it
> > >> > > >>> > > > >>> as
> > >> > > >>> > is
> > >> > > >>> > > -
> > >> > > >>> > > > do
> > >> > > >>> > > > >>> not format it with any file system...clustered or
> > not).
> > >> I
> > >> > > would
> > >> > > >>> > pass
> > >> > > >>> > > > that
> > >> > > >>> > > > >>> device into the VM.
> > >> > > >>> > > > >>>
> > >> > > >>> > > > >>> Kind of accurate?
> > >> > > >>> > > > >>>
> > >> > > >>> > > > >>>
> > >> > > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > >> > > >>> > > shadowsor@gmail.com>
> > >> > > >>> > > > >>> wrote:
> > >> > > >>> > > > >>>>
> > >> > > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
> > >> > > definitions.
> > >> > > >>> > There
> > >> > > >>> > > > are
> > >> > > >>> > > > >>>> ones that work for block devices rather than files.
> > You
> > >> > can
> > >> > > >>> > > > >>>> piggy
> > >> > > >>> > > > back off
> > >> > > >>> > > > >>>> of the existing disk definitions and attach it to
> the
> > >> vm
> > >> > as
> > >> > > a
> > >> > > >>> > block
> > >> > > >>> > > > device.
> > >> > > >>> > > > >>>> The definition is an XML string per libvirt XML
> > format.
> > >> > You
> > >> > > may
> > >> > > >>> > want
> > >> > > >>> > > > to use
> > >> > > >>> > > > >>>> an alternate path to the disk rather than just
> > /dev/sdx
> > >> > > like I
> > >> > > >>> > > > mentioned,
> > >> > > >>> > > > >>>> there are by-id paths to the block devices, as well
> > as
> > >> > other
> > >> > > >>> > > > >>>> ones
> > >> > > >>> > > > that will
> > >> > > >>> > > > >>>> be consistent and easier for management, not sure
> how
> > >> > > familiar
> > >> > > >>> > > > >>>> you
> > >> > > >>> > > > are with
> > >> > > >>> > > > >>>> device naming on Linux.
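> > >> > > >>> > > > >>>>
> > >> > > >>> > > > >>>> For example, a raw block-device disk def ends up looking roughly
> > >> > > >>> > > > >>>> like this (the by-path name below is made up):
> > >> > > >>> > > > >>>>
> > >> > > >>> > > > >>>>     <disk type='block' device='disk'>
> > >> > > >>> > > > >>>>       <driver name='qemu' type='raw' cache='none'/>
> > >> > > >>> > > > >>>>       <source dev='/dev/disk/by-path/ip-10.1.1.5:3260-iscsi-iqn.2013-09.com.example:vol1-lun-0'/>
> > >> > > >>> > > > >>>>       <target dev='vdb' bus='virtio'/>
> > >> > > >>> > > > >>>>     </disk>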
> > >> > > >>> > > > >>>>
> > >> > > >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> > >> > > >>> > > > >>>> <sh...@gmail.com>
> > >> > > >>> > > > wrote:
> > >> > > >>> > > > >>>>>
> > >> > > >>> > > > >>>>> No, as that would rely on virtualized
> network/iscsi
> > >> > > initiator
> > >> > > >>> > > inside
> > >> > > >>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx
> > (your
> > >> > lun
> > >> > > on
> > >> > > >>> > > > hypervisor) as
> > >> > > >>> > > > >>>>> a disk to the VM, rather than attaching some image
> > >> file
> > >> > > that
> > >> > > >>> > > resides
> > >> > > >>> > > > on a
> > >> > > >>> > > > >>>>> filesystem, mounted on the host, living on a
> target.
> > >> > > >>> > > > >>>>>
> > >> > > >>> > > > >>>>> Actually, if you plan on the storage supporting
> live
> > >> > > migration
> > >> > > >>> > > > >>>>> I
> > >> > > >>> > > > think
> > >> > > >>> > > > >>>>> this is the only way. You can't put a filesystem
> on
> > it
> > >> > and
> > >> > > >>> > > > >>>>> mount
> > >> > > >>> > it
> > >> > > >>> > > > in two
> > >> > > >>> > > > >>>>> places to facilitate migration unless its a
> > clustered
> > >> > > >>> > > > >>>>> filesystem,
> > >> > > >>> > > in
> > >> > > >>> > > > which
> > >> > > >>> > > > >>>>> case you're back to shared mount point.
> > >> > > >>> > > > >>>>>
> > >> > > >>> > > > >>>>> As far as I'm aware, the xenserver SR style is
> > >> basically
> > >> > > LVM
> > >> > > >>> > with a
> > >> > > >>> > > > xen
> > >> > > >>> > > > >>>>> specific cluster management, a custom CLVM. They
> > don't
> > >> > use
> > >> > > a
> > >> > > >>> > > > filesystem
> > >> > > >>> > > > >>>>> either.
> > >> > > >>> > > > >>>>>
> > >> > > >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > >> > > >>> > > > >>>>> <mi...@solidfire.com> wrote:
> > >> > > >>> > > > >>>>>>
> > >> > > >>> > > > >>>>>> When you say, "wire up the lun directly to the
> vm,"
> > >> do
> > >> > you
> > >> > > >>> > > > >>>>>> mean
> > >> > > >>> > > > >>>>>> circumventing the hypervisor? I didn't think we
> > >> could do
> > >> > > that
> > >> > > >>> > > > >>>>>> in
> > >> > > >>> > > CS.
> > >> > > >>> > > > >>>>>> OpenStack, on the other hand, always circumvents
> > the
> > >> > > >>> > > > >>>>>> hypervisor,
> > >> > > >>> > > as
> > >> > > >>> > > > far as I
> > >> > > >>> > > > >>>>>> know.
> > >> > > >>> > > > >>>>>>
> > >> > > >>> > > > >>>>>>
> > >> > > >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
> <
> > >> > > >>> > > > shadowsor@gmail.com>
> > >> > > >>> > > > >>>>>> wrote:
> > >> > > >>> > > > >>>>>>>
> > >> > > >>> > > > >>>>>>> Better to wire up the lun directly to the vm
> > unless
> > >> > > there is
> > >> > > >>> > > > >>>>>>> a
> > >> > > >>> > > good
> > >> > > >>> > > > >>>>>>> reason not to.
> > >> > > >>> > > > >>>>>>>
> > >> > > >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> > >> > > >>> > shadowsor@gmail.com>
> > >> > > >>> > > > >>>>>>> wrote:
> > >> > > >>> > > > >>>>>>>>
> > >> > > >>> > > > >>>>>>>> You could do that, but as mentioned I think
> its a
> > >> > > mistake
> > >> > > >>> > > > >>>>>>>> to
> > >> > > >>> > go
> > >> > > >>> > > to
> > >> > > >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS
> > >> volumes to
> > >> > > luns
> > >> > > >>> > and
> > >> > > >>> > > > then putting
> > >> > > >>> > > > >>>>>>>> a filesystem on it, mounting it, and then
> > putting a
> > >> > > QCOW2
> > >> > > >>> > > > >>>>>>>> or
> > >> > > >>> > > even
> > >> > > >>> > > > RAW disk
> > >> > > >>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of
> > iops
> > >> > > along
> > >> > > >>> > > > >>>>>>>> the
> > >> > > >>> > > > way, and have
> > >> > > >>> > > > >>>>>>>> more overhead with the filesystem and its
> > >> journaling,
> > >> > > etc.
> > >> > > >>> > > > >>>>>>>>
> > >> > > >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > >> > > >>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
> > >> > > >>> > > > >>>>>>>>>
> > >> > > >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground
> > in
> > >> KVM
> > >> > > with
> > >> > > >>> > CS.
> > >> > > >>> > > > >>>>>>>>>
> > >> > > >>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS
> > >> today
> > >> > > is by
> > >> > > >>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the
> > >> > location
> > >> > > of
> > >> > > >>> > > > >>>>>>>>> the
> > >> > > >>> > > > share.
> > >> > > >>> > > > >>>>>>>>>
> > >> > > >>> > > > >>>>>>>>> They can set up their share using Open iSCSI
> by
> > >> > > >>> > > > >>>>>>>>> discovering
> > >> > > >>> > > their
> > >> > > >>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting
> it
> > >> > > somewhere
> > >> > > >>> > > > >>>>>>>>> on
> > >> > > >>> > > > their file
> > >> > > >>> > > > >>>>>>>>> system.
> > >> > > >>> > > > >>>>>>>>>
> > >> > > >>> > > > >>>>>>>>> Would it make sense for me to just do that
> > >> discovery,
> > >> > > >>> > > > >>>>>>>>> logging
> > >> > > >>> > > in,
> > >> > > >>> > > > >>>>>>>>> and mounting behind the scenes for them and
> > >> letting
> > >> > the
> > >> > > >>> > current
> > >> > > >>> > > > code manage
> > >> > > >>> > > > >>>>>>>>> the rest as it currently does?
> > >> > > >>> > > > >>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>
> > >> > > >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus
> Sorensen
> > >> > > >>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
> > >> > > >>> > > > >>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit
> different. I
> > >> need
> > >> > > to
> > >> > > >>> > catch
> > >> > > >>> > > up
> > >> > > >>> > > > >>>>>>>>>> on the work done in KVM, but this is
> basically
> > >> just
> > >> > > disk
> > >> > > >>> > > > snapshots + memory
> > >> > > >>> > > > >>>>>>>>>> dump. I still think disk snapshots would
> > >> preferably
> > >> > be
> > >> > > >>> > handled
> > >> > > >>> > > > by the SAN,
> > >> > > >>> > > > >>>>>>>>>> and then memory dumps can go to secondary
> > >> storage or
> > >> > > >>> > something
> > >> > > >>> > > > else. This is
> > >> > > >>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we
> > will
> > >> > > want to
> > >> > > >>> > see
> > >> > > >>> > > > how others are
> > >> > > >>> > > > >>>>>>>>>> planning theirs.
> > >> > > >>> > > > >>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > >> > > >>> > > shadowsor@gmail.com
> > >> > > >>> > > > >
> > >> > > >>> > > > >>>>>>>>>> wrote:
> > >> > > >>> > > > >>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd
> > use a
> > >> > vdi
> > >> > > >>> > > > >>>>>>>>>>> style
> > >> > > >>> > on
> > >> > > >>> > > > an
> > >> > > >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it
> as a
> > >> RAW
> > >> > > >>> > > > >>>>>>>>>>> format.
> > >> > > >>> > > > Otherwise you're
> > >> > > >>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting
> it,
> > >> > > creating
> > >> > > >>> > > > >>>>>>>>>>> a
> > >> > > >>> > > > QCOW2 disk image,
> > >> > > >>> > > > >>>>>>>>>>> and that seems unnecessary and a performance
> > >> > killer.
> > >> > > >>> > > > >>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a
> > >> disk
> > >> > to
> > >> > > the
> > >> > > >>> > VM,
> > >> > > >>> > > > and
> > >> > > >>> > > > >>>>>>>>>>> handling snapshots on the SAN side via the
> > >> storage
> > >> > > >>> > > > >>>>>>>>>>> plugin
> > >> > > >>> > is
> > >> > > >>> > > > best. My
> > >> > > >>> > > > >>>>>>>>>>> impression from the storage plugin refactor
> > was
> > >> > that
> > >> > > >>> > > > >>>>>>>>>>> there
> > >> > > >>> > > was
> > >> > > >>> > > > a snapshot
> > >> > > >>> > > > >>>>>>>>>>> service that would allow the SAN to handle
> > >> > snapshots.
> > >> > > >>> > > > >>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > >> > > >>> > > > shadowsor@gmail.com>
> > >> > > >>> > > > >>>>>>>>>>> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by
> > the
> > >> SAN
> > >> > > back
> > >> > > >>> > end,
> > >> > > >>> > > > if
> > >> > > >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt
> > server
> > >> > > could
> > >> > > >>> > > > >>>>>>>>>>>> call
> > >> > > >>> > > > your plugin for
> > >> > > >>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor
> > >> > > agnostic. As
> > >> > > >>> > far
> > >> > > >>> > > > as space, that
> > >> > > >>> > > > >>>>>>>>>>>> would depend on how your SAN handles it.
> With
> > >> > ours,
> > >> > > we
> > >> > > >>> > carve
> > >> > > >>> > > > out luns from a
> > >> > > >>> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the
> > >> pool
> > >> > > and is
> > >> > > >>> > > > independent of the
> > >> > > >>> > > > >>>>>>>>>>>> LUN size the host sees.
> > >> > > >>> > > > >>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > >> > > >>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>> Hey Marcus,
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type
> for
> > >> > libvirt
> > >> > > >>> > > > >>>>>>>>>>>>> won't
> > >> > > >>> > > > work
> > >> > > >>> > > > >>>>>>>>>>>>> when you take into consideration
> hypervisor
> > >> > > snapshots?
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor
> > >> > snapshot,
> > >> > > the
> > >> > > >>> > VDI
> > >> > > >>> > > > for
> > >> > > >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
> > >> > > repository
> > >> > > >>> > > > >>>>>>>>>>>>> as
> > >> > > >>> > > the
> > >> > > >>> > > > volume is on.
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's
> say
> > >> for
> > >> > > >>> > > > >>>>>>>>>>>>> XenServer
> > >> > > >>> > > and
> > >> > > >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support
> > >> hypervisor
> > >> > > >>> > snapshots
> > >> > > >>> > > > in 4.2) is I'd
> > >> > > >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than
> > what
> > >> the
> > >> > > user
> > >> > > >>> > > > requested for the
> > >> > > >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because
> our
> > >> SAN
> > >> > > >>> > > > >>>>>>>>>>>>> thinly
> > >> > > >>> > > > provisions volumes,
> > >> > > >>> > > > >>>>>>>>>>>>> so the space is not actually used unless
> it
> > >> needs
> > >> > > to
> > >> > > >>> > > > >>>>>>>>>>>>> be).
> > >> > > >>> > > > The CloudStack
> > >> > > >>> > > > >>>>>>>>>>>>> volume would be the only "object" on the
> SAN
> > >> > volume
> > >> > > >>> > until a
> > >> > > >>> > > > hypervisor
> > >> > > >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would
> also
> > >> > reside
> > >> > > on
> > >> > > >>> > > > >>>>>>>>>>>>> the
> > >> > > >>> > > > SAN volume.
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there
> is
> > >> no
> > >> > > >>> > > > >>>>>>>>>>>>> creation
> > >> > > >>> > of
> > >> > > >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt
> > >> (which,
> > >> > > even
> > >> > > >>> > > > >>>>>>>>>>>>> if
> > >> > > >>> > > > there were support
> > >> > > >>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows
> one
> > >> LUN
> > >> > per
> > >> > > >>> > > > >>>>>>>>>>>>> iSCSI
> > >> > > >>> > > > target), then I
> > >> > > >>> > > > >>>>>>>>>>>>> don't see how using this model will work.
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the
> > current
> > >> way
> > >> > > this
> > >> > > >>> > > works
> > >> > > >>> > > > >>>>>>>>>>>>> with DIR?
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>> What do you think?
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>> Thanks
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike
> > >> Tutkowski
> > >> > > >>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for
> > >> iSCSI
> > >> > > access
> > >> > > >>> > > today.
> > >> > > >>> > > > >>>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too,
> but I
> > >> > might
> > >> > > as
> > >> > > >>> > well
> > >> > > >>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI
> > instead.
> > >> > > >>> > > > >>>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus
> > >> Sorensen
> > >> > > >>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> To your question about
> SharedMountPoint, I
> > >> > > believe
> > >> > > >>> > > > >>>>>>>>>>>>>>> it
> > >> > > >>> > > just
> > >> > > >>> > > > >>>>>>>>>>>>>>> acts like a
> > >> > > >>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar
> to
> > >> > that.
> > >> > > The
> > >> > > >>> > > > end-user
> > >> > > >>> > > > >>>>>>>>>>>>>>> is
> > >> > > >>> > > > >>>>>>>>>>>>>>> responsible for mounting a file system
> > that
> > >> all
> > >> > > KVM
> > >> > > >>> > hosts
> > >> > > >>> > > > can
> > >> > > >>> > > > >>>>>>>>>>>>>>> access,
> > >> > > >>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is
> > >> > providing
> > >> > > the
> > >> > > >>> > > > storage.
> > >> > > >>> > > > >>>>>>>>>>>>>>> It could
> > >> > > >>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other
> clustered
> > >> > > >>> > > > >>>>>>>>>>>>>>> filesystem,
> > >> > > >>> > > > >>>>>>>>>>>>>>> cloudstack just
> > >> > > >>> > > > >>>>>>>>>>>>>>> knows that the provided directory path
> has
> > >> VM
> > >> > > >>> > > > >>>>>>>>>>>>>>> images.
> > >> > > >>> > > > >>>>>>>>>>>>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
> > >> > Sorensen
> > >> > > >>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and
> iSCSI
> > >> all
> > >> > at
> > >> > > the
> > >> > > >>> > same
> > >> > > >>> > > > >>>>>>>>>>>>>>> > time.
> > >> > > >>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >
> > >> > > >>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
> > >> > Tutkowski
> > >> > > >>> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple
> > storage
> > >> > > pools:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> > >> > > >>> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> > >> > > >>> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
> > >> > > >>> > > > >>>>>>>>>>>>>>> >> default              active     yes
> > >> > > >>> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
> > >> > > Tutkowski
> > >> > > >>> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com>
> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed
> > >> out.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt)
> > >> storage
> > >> > > pool
> > >> > > >>> > based
> > >> > > >>> > > on
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would
> > only
> > >> > have
> > >> > > one
> > >> > > >>> > LUN,
> > >> > > >>> > > > so
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> there would only
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage
> volume
> > in
> > >> > the
> > >> > > >>> > > (libvirt)
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pool.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and
> > >> destroys
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> iSCSI
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a
> problem
> > >> that
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> libvirt
> > >> > > >>> > > does
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> not support
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI
> targets/LUNs.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a
> > bit
> > >> to
> > >> > > see
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> if
> > >> > > >>> > > > libvirt
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> supports
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
> > >> > > mentioned,
> > >> > > >>> > since
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> each one of its
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my
> > >> iSCSI
> > >> > > >>> > > > targets/LUNs).
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
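> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> For reference, a libvirt iscsi pool definition looks roughly like
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> this (host and IQN are made up), and one pool maps to one target:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>     <pool type='iscsi'>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>       <name>solidfire-vol1</name>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>       <source>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>         <host name='10.1.1.5'/>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>         <device path='iqn.2013-09.com.example:vol1'/>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>       </source>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>       <target>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>         <path>/dev/disk/by-path</path>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>       </target>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>     </pool>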
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM,
> Mike
> > >> > > Tutkowski
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com>
> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this
> type:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"),
> > NETFS("netfs"),
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         }
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         @Override
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         }
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>     }
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type
> > is
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> currently
> > >> > > >>> > > being
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were
> > >> getting
> > >> > at.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say,
> 4.2),
> > >> when
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> someone
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> selects the
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it
> > >> with
> > >> > > iSCSI,
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> is
> > >> > > >>> > > > that
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Thanks!
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM,
> > Marcus
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Sorensen
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > >>> > > http://libvirt.org/storage.html#StorageBackendISCSI
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on
> > the
> > >> > iSCSI
> > >> > > >>> > server,
> > >> > > >>> > > > and
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> cannot be
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.",
> > which
> > >> I
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> believe
> > >> > > >>> > > your
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the
> work
> > of
> > >> > > logging
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in
> > >> > > >>> > > and
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does
> > >> that
> > >> > > work
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in
> > >> > > >>> > the
> > >> > > >>> > > > Xen
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> stuff).
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether
> > >> this
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> provides
> > >> > > >>> > a
> > >> > > >>> > > > 1:1
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1
> > iscsi
> > >> > > device
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> as
> > >> > > >>> > a
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read
> up a
> > >> bit
> > >> > > more
> > >> > > >>> > about
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have
> to
> > >> write
> > >> > > your
> > >> > > >>> > own
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
> > >> > > >>> > >  We
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can cross that
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with
> libvirt,
> > >> see
> > >> > the
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > http://libvirt.org/sources/java/javadoc/ Normally,
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then
> > calls
> > >> > made
> > >> > > to
> > >> > > >>> > that
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can look at the
> > LibvirtStorageAdaptor
> > >> to
> > >> > > see
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> how
> > >> > > >>> > > that
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> is done for
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write
> > some
> > >> > test
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java
> > >> > > >>> > > code
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to see if you
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and
> > >> register
> > >> > > iscsi
> > >> > > >>> > > storage
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pools before you
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> get started.
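> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> e.g., a minimal test along these lines (error handling omitted):
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>     import org.libvirt.Connect;
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>     import org.libvirt.StoragePool;
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>     public class PoolProbe {
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>         public static void main(String[] args) throws Exception {
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>             Connect conn = new Connect("qemu:///system");
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>             // list the pools libvirt already knows about, with capacity
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>             for (String name : conn.listStoragePools()) {
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>                 StoragePool pool = conn.storagePoolLookupByName(name);
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>                 System.out.println(name + ": " + pool.getInfo().capacity + " bytes");
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>             }
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>             conn.close();
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>         }
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>     }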
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM,
> > Mike
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com>
> > wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to
> investigate
> > >> > libvirt
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > more,
> > >> > > >>> > > but
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > supports
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from
> > >> iSCSI
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > targets,
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > right?
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM,
> > >> Mike
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com>
> > >> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through
> > >> some of
> > >> > > the
> > >> > > >>> > > classes
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> last
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26
> PM,
> > >> > Marcus
> > >> > > >>> > Sorensen
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will
> > >> need
> > >> > the
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be
> > >> standard
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
> > >> > > >>> > > for
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do
> > the
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
> > >> > > >>> > > > login.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
> > >> > > >>> > and
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
> > >> > > Tutkowski"
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> <mike.tutkowski@solidfire.com
> >
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during
> the
> > >> 4.2
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
> > >> > > >>> > I
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for
> > CloudStack.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by
> the
> > >> > > storage
> > >> > > >>> > > > framework
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically
> > >> create
> > >> > and
> > >> > > >>> > delete
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can
> > >> > establish a
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
> > >> > > >>> > > > mapping
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume
> > for
> > >> > QoS.
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack
> always
> > >> > > expected
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > >> > > >>> > > > admin
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> > >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and
> those
> > >> > > volumes
> > >> > > >>> > would



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Yeah, that's why I thought it was specific to your implementation. Perhaps
that's true, then?
On Sep 18, 2013 12:04 AM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> I totally get where you're coming from with the tiered-pool approach,
> though.
>
> Prior to SolidFire, I worked at HP and the product I worked on allowed a
> single, clustered SAN to host multiple pools of storage. One pool might be
> made up of all-SSD storage nodes while another pool might be made up of
> slower HDDs.
>
> That kind of tiering is not what SolidFire QoS is about, though, as that
> kind of tiering does not guarantee QoS.
>
> In the SolidFire SAN, QoS was designed in from the beginning and is
> extremely granular. Each volume has its own performance and capacity. You
> do not have to worry about Noisy Neighbors.
>
> The idea is to encourage businesses to trust the cloud with their most
> critical business applications at a price point on par with traditional
> SANs.
>
>
> On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
> > Ah, I think I see the miscommunication.
> >
> > I should have gone into a bit more detail about the SolidFire SAN.
> >
> > It is built from the ground up to support QoS on a LUN-by-LUN basis.
> Every
> > LUN is assigned a Min, Max, and Burst number of IOPS.
> >
> > The Min IOPS are a guaranteed number (as long as the SAN itself is not
> > overprovisioned). Capacity and IOPS are provisioned independently.
> > Multiple volumes and multiple tenants using the same SAN do not suffer
> from
> > the Noisy Neighbor effect.
> >
> > When you create a Disk Offering in CS that is storage tagged to use
> > SolidFire primary storage, you specify a Min, Max, and Burst number of
> IOPS
> > to provision from the SAN for volumes created from that Disk Offering.
> >
> > There is no notion of RAID groups that you see in more traditional SANs.
> > The SAN is built from clusters of storage nodes and data is replicated
> > amongst all SSDs in all storage nodes (this is an SSD-only SAN) in the
> > cluster to avoid hot spots and protect the data should drives and/or
> > nodes fail. You then scale the SAN by adding new storage nodes.
> >
> > Data is compressed and de-duplicated inline across the cluster and all
> > volumes are thinly provisioned.
> >
> >
> > On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
> >
> >> I'm surprised there's no mention of pool on the SAN in your description
> of
> >> the framework. I had assumed this was specific to your implementation,
> >> because normally SANs host multiple disk pools, maybe multiple RAID 50s
> >> and
> >> 10s, or however the SAN admin wants to split it up. Maybe a pool
> intended
> >> for root disks and a separate one for data disks. Or one pool for
> >> cloudstack and one dedicated to some other internal db application. But
> it
> >> sounds as though there's no place to specify which disks or pool on the
> >> SAN
> >> to use.
> >>
> >> We implemented our own internal storage SAN plugin based on 4.1. We used
> >> the 'path' attribute of the primary storage pool object to specify which
> >> pool name on the back end SAN to use, so we could create all-ssd pools
> and
> >> slower spindle pools, then differentiate between them based on storage
> >> tags. Normally the path attribute would be the mount point for NFS, but
> >> it's
> >> just a string. So when registering ours we enter the SAN DNS host name, the
> >> SAN's REST API port, and the pool name. Then LUNs created from that
> >> primary
> >> storage come from the matching disk pool on the SAN. We can create and
> >> register multiple pools of different types and purposes on the same SAN.
> >> We
> >> haven't yet gotten to porting it to the 4.2 framework, so it will be
> >> interesting to see what we can come up with to make it work similarly.
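
For example, if the registration path were packed as "sanHost:restApiPort:poolName"
(that exact format is only an assumption for illustration, not necessarily what the
plugin above uses), the agent-side unpacking could be as small as:

    // Hypothetical sketch: unpack SAN details carried in the primary storage
    // 'path' attribute. The "host:port:pool" layout is assumed for illustration.
    public class SanPoolPath {
        public final String sanHost;
        public final int restApiPort;
        public final String poolName;

        private SanPoolPath(String sanHost, int restApiPort, String poolName) {
            this.sanHost = sanHost;
            this.restApiPort = restApiPort;
            this.poolName = poolName;
        }

        public static SanPoolPath parse(String path) {
            String[] parts = path.split(":");   // e.g. "san.example.com:443:ssd-pool"
            if (parts.length != 3) {
                throw new IllegalArgumentException("Unexpected SAN path format: " + path);
            }
            return new SanPoolPath(parts[0], Integer.parseInt(parts[1]), parts[2]);
        }
    }
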
> >>  On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com
> >> >
> >> wrote:
> >>
> >> > What you're saying here is definitely something we should talk about.
> >> >
> >> > Hopefully my previous e-mail has clarified how this works a bit.
> >> >
> >> > It mainly comes down to this:
> >> >
> >> > For the first time in CS history, primary storage is no longer
> required
> >> to
> >> > be preallocated by the admin and then handed to CS. CS volumes don't
> >> have
> >> > to share a preallocated volume anymore.
> >> >
> >> > As of 4.2, primary storage can be based on a SAN (or some other
> storage
> >> > device). You can tell CS how many bytes and IOPS to use from this
> >> storage
> >> > device and CS invokes the appropriate plug-in to carve out LUNs
> >> > dynamically.
> >> >
> >> > Each LUN is home to one and only one data disk. Data disks - in this
> >> model
> >> > - never share a LUN.
> >> >
> >> > The main use case for this is so a CS volume can deliver guaranteed
> >> IOPS if
> >> > the storage device (ex. SolidFire SAN) delivers guaranteed IOPS on a
> >> > LUN-by-LUN basis.
> >> >
> >> >
> >> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <
> shadowsor@gmail.com
> >> > >wrote:
> >> >
> >> > > I guess whether or not a solidfire device is capable of hosting
> >> > > multiple disk pools is irrelevant, we'd hope that we could get the
> >> > > stats (maybe 30TB available, and 15TB allocated in LUNs). But if
> these
> >> > > stats aren't collected, I can't as an admin define multiple pools
> and
> >> > > expect cloudstack to allocate evenly from them or fill one up and
> move
> >> > > to the next, because it doesn't know how big it is.
> >> > >
> >> > > Ultimately this discussion has nothing to do with the KVM stuff
> >> > > itself, just a tangent, but something to think about.
> >> > >
> >> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <
> >> shadowsor@gmail.com>
> >> > > wrote:
> >> > > > Ok, on most storage pools it shows how many GB free/used when
> >> listing
> >> > > > the pool both via API and in the UI. I'm guessing those are empty
> >> then
> >> > > > for the solid fire storage, but it seems like the user should have
> >> to
> >> > > > define some sort of pool that the luns get carved out of, and you
> >> > > > should be able to get the stats for that, right? Or is a solid
> fire
> >> > > > appliance only one pool per appliance? This isn't about billing,
> but
> >> > > > just so cloudstack itself knows whether or not there is space left
> >> on
> >> > > > the storage device, so cloudstack can go on allocating from a
> >> > > > different primary storage as this one fills up. There are also
> >> > > > notifications and things. It seems like there should be a call you
> >> can
> >> > > > handle for this, maybe Edison knows.
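
As a sketch of the idea (the stats holder and the 30TB/15TB numbers below are
placeholders for whatever the SAN's API actually reports), the plugin-side
capacity answer could be as simple as:

    // Sketch: report pool capacity from cluster/pool-level numbers fetched from
    // the SAN so CloudStack can tell when this primary storage is filling up.
    public class SanCapacitySketch {
        static class SanStats {
            long totalBytes;      // usable capacity of the backing pool (placeholder)
            long allocatedBytes;  // bytes already promised to LUNs (placeholder)
        }

        private SanStats fetchStatsFromSan() {
            SanStats s = new SanStats();
            s.totalBytes = 30L * 1024 * 1024 * 1024 * 1024;      // "30TB available"
            s.allocatedBytes = 15L * 1024 * 1024 * 1024 * 1024;  // "15TB allocated in LUNs"
            return s;
        }

        public long getCapacityBytes()  { return fetchStatsFromSan().totalBytes; }
        public long getUsedBytes()      { return fetchStatsFromSan().allocatedBytes; }
        public long getAvailableBytes() {
            SanStats s = fetchStatsFromSan();
            return s.totalBytes - s.allocatedBytes;
        }
    }
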
> >> > > >
> >> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <
> >> shadowsor@gmail.com>
> >> > > wrote:
> >> > > >> You respond to more than attach and detach, right? Don't you
> create
> >> > > luns as
> >> > > >> well? Or are you just referring to the hypervisor stuff?
> >> > > >>
> >> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
> >> > mike.tutkowski@solidfire.com
> >> > > >
> >> > > >> wrote:
> >> > > >>>
> >> > > >>> Hi Marcus,
> >> > > >>>
> >> > > >>> I never need to respond to a CreateStoragePool call for either
> >> > > XenServer
> >> > > >>> or
> >> > > >>> VMware.
> >> > > >>>
> >> > > >>> What happens is I respond only to the Attach- and Detach-volume
> >> > > commands.
> >> > > >>>
> >> > > >>> Let's say an attach comes in:
> >> > > >>>
> >> > > >>> In this case, I check to see if the storage is "managed."
> Talking
> >> > > >>> XenServer
> >> > > >>> here, if it is, I log in to the LUN that is the disk we want to
> >> > attach.
> >> > > >>> After, if this is the first time attaching this disk, I create
> an
> >> SR
> >> > > and a
> >> > > >>> VDI within the SR. If it is not the first time attaching this
> >> disk,
> >> > the
> >> > > >>> LUN
> >> > > >>> already has the SR and VDI on it.
> >> > > >>>
> >> > > >>> Once this is done, I let the normal "attach" logic run because
> >> this
> >> > > logic
> >> > > >>> expects an SR and a VDI and now it has them.
> >> > > >>>
> >> > > >>> It's the same thing for VMware: Just substitute datastore for SR
> >> and
> >> > > VMDK
> >> > > >>> for VDI.
> >> > > >>>
> >> > > >>> Does that make sense?
> >> > > >>>
> >> > > >>> Thanks!
> >> > > >>>
> >> > > >>>
> >> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
> >> > > >>> <sh...@gmail.com>wrote:
> >> > > >>>
> >> > > >>> > What do you do with Xen? I imagine the user enters the SAN details
> >> > > >>> > when registering the pool? And the pool details are basically just
> >> > > >>> > instructions on how to log into a target, correct?
> >> > > >>> >
> >> > > >>> > You can choose to log in a KVM host to the target during
> >> > > >>> > createStoragePool
> >> > > >>> > and save the pool in a map, or just save the pool info in a
> map
> >> for
> >> > > >>> > future
> >> > > >>> > reference by uuid, for when you do need to log in. The
> >> > > createStoragePool
> >> > > >>> > then just becomes a way to save the pool info to the agent.
> >> > > Personally,
> >> > > >>> > I'd
> >> > > >>> > log in on the pool create and look/scan for specific luns when
> >> > > they're
> >> > > >>> > needed, but I haven't thought it through thoroughly. I just
> say
> >> > that
> >> > > >>> > mainly
> >> > > >>> > because login only happens once, the first time the pool is
> >> used,
> >> > and
> >> > > >>> > every
> >> > > >>> > other storage command is about discovering new luns or maybe
> >> > > >>> > deleting/disconnecting luns no longer needed. On the other
> hand,
> >> > you
> >> > > >>> > could
> >> > > >>> > do all of the above: log in on pool create, then also check if
> >> > you're
> >> > > >>> > logged in on other commands and log in if you've lost
> >> connection.
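
A rough sketch of that bookkeeping, with made-up class and field names rather
than CloudStack's actual types:

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch: createStoragePool just records the target details keyed
    // by uuid; the actual iscsiadm login can happen later, when a LUN is needed.
    public class IscsiPoolCacheSketch {
        public static class PoolInfo {
            public final String uuid;
            public final String host;   // SAN/iSCSI portal address
            public final int port;      // typically 3260
            public PoolInfo(String uuid, String host, int port) {
                this.uuid = uuid;
                this.host = host;
                this.port = port;
            }
        }

        private final Map<String, PoolInfo> pools = new HashMap<>();

        // called from createStoragePool: remember the pool, do not log in yet
        public void rememberPool(String uuid, String host, int port) {
            pools.put(uuid, new PoolInfo(uuid, host, port));
        }

        // called from later storage commands that actually need the target
        public PoolInfo getPool(String uuid) {
            return pools.get(uuid);
        }
    }
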
> >> > > >>> >
> >> > > >>> > With Xen, what does your registered pool show in the UI for
> >> > > avail/used
> >> > > >>> > capacity, and how does it get that info? I assume there is
> some
> >> > sort
> >> > > of
> >> > > >>> > disk pool that the luns are carved from, and that your plugin
> is
> >> > > called
> >> > > >>> > to
> >> > > >>> > talk to the SAN and expose to the user how much of that pool
> has
> >> > been
> >> > > >>> > allocated. Knowing how you already solve these problems with
> >> Xen
> >> > > will
> >> > > >>> > help
> >> > > >>> > figure out what to do with KVM.
> >> > > >>> >
> >> > > >>> > If this is the case, I think the plugin can continue to handle
> >> it
> >> > > rather
> >> > > >>> > than getting details from the agent. I'm not sure if that
> means
> >> > nulls
> >> > > >>> > are
> >> > > >>> > OK for these on the agent side or what, I need to look at the
> >> > storage
> >> > > >>> > plugin arch more closely.
> >> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
> >> > > mike.tutkowski@solidfire.com>
> >> > > >>> > wrote:
> >> > > >>> >
> >> > > >>> > > Hey Marcus,
> >> > > >>> > >
> >> > > >>> > > I'm reviewing your e-mails as I implement the necessary
> >> methods
> >> > in
> >> > > new
> >> > > >>> > > classes.
> >> > > >>> > >
> >> > > >>> > > "So, referencing StorageAdaptor.java, createStoragePool
> >> accepts
> >> > > all of
> >> > > >>> > > the pool data (host, port, name, path) which would be used
> to
> >> log
> >> > > the
> >> > > >>> > > host into the initiator."
> >> > > >>> > >
> >> > > >>> > > Can you tell me, in my case, since a storage pool (primary
> >> > > storage) is
> >> > > >>> > > actually the SAN, I wouldn't really be logging into anything
> >> at
> >> > > this
> >> > > >>> > point,
> >> > > >>> > > correct?
> >> > > >>> > >
> >> > > >>> > > Also, what kind of capacity, available, and used bytes make
> >> sense
> >> > > to
> >> > > >>> > report
> >> > > >>> > > for KVMStoragePool (since KVMStoragePool represents the SAN
> >> in my
> >> > > case
> >> > > >>> > and
> >> > > >>> > > not an individual LUN)?
> >> > > >>> > >
> >> > > >>> > > Thanks!
> >> > > >>> > >
> >> > > >>> > >
> >> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> >> > > shadowsor@gmail.com
> >> > > >>> > > >wrote:
> >> > > >>> > >
> >> > > >>> > > > Ok, KVM will be close to that, of course, because only the
> >> > > >>> > > > hypervisor
> >> > > >>> > > > classes differ, the rest is all mgmt server. Creating a
> >> volume
> >> > is
> >> > > >>> > > > just
> >> > > >>> > > > a db entry until it's deployed for the first time.
> >> > > >>> > > > AttachVolumeCommand
> >> > > >>> > > > on the agent side (LibvirtStorageAdaptor.java is analogous
> >> to
> >> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm commands
> (via
> >> a
> >> > KVM
> >> > > >>> > > > StorageAdaptor) to log in the host to the target and then
> >> you
> >> > > have a
> >> > > >>> > > > block device.  Maybe libvirt will do that for you, but my
> >> quick
> >> > > read
> >> > > >>> > > > made it sound like the iscsi libvirt pool type is
> actually a
> >> > > pool,
> >> > > >>> > > > not
> >> > > >>> > > > a lun or volume, so you'll need to figure out if that
> works
> >> or
> >> > if
> >> > > >>> > > > you'll have to use iscsiadm commands.
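
For reference, the discover/login sequence being described could be driven from
the agent roughly like this (plain ProcessBuilder is used here only for
illustration, and the portal address and IQN are placeholders):

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    // Rough sketch of the iscsiadm discover + login steps; error handling and
    // output parsing are omitted.
    public class IscsiLoginSketch {
        static void run(List<String> cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("Command failed: " + cmd);
            }
        }

        public static void main(String[] args) throws Exception {
            String portal = "10.0.0.5:3260";                    // SAN portal (placeholder)
            String iqn = "iqn.2013-09.com.example:volume-1";    // target IQN (placeholder)

            // discover targets on the portal
            run(Arrays.asList("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal));
            // log in to the target; a /dev/sdX (and /dev/disk/by-path entry) appears
            run(Arrays.asList("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"));
        }
    }
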
> >> > > >>> > > >
> >> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor (because
> >> > Libvirt
> >> > > >>> > > > doesn't really manage your pool the way you want), you're
> >> going
> >> > > to
> >> > > >>> > > > have to create a version of KVMStoragePool class and a
> >> > > >>> > > > StorageAdaptor
> >> > > >>> > > > class (see LibvirtStoragePool.java and
> >> > > LibvirtStorageAdaptor.java),
> >> > > >>> > > > implementing all of the methods, then in
> >> KVMStorageManager.java
> >> > > >>> > > > there's a "_storageMapper" map. This is used to select the
> >> > > correct
> >> > > >>> > > > adaptor, you can see in this file that every call first
> >> pulls
> >> > the
> >> > > >>> > > > correct adaptor out of this map via getStorageAdaptor. So
> >> you
> >> > can
> >> > > >>> > > > see
> >> > > >>> > > > a comment in this file that says "add other storage
> adaptors
> >> > > here",
> >> > > >>> > > > where it puts to this map, this is where you'd register
> your
> >> > > >>> > > > adaptor.
> >> > > >>> > > >
> >> > > >>> > > > So, referencing StorageAdaptor.java, createStoragePool
> >> accepts
> >> > > all
> >> > > >>> > > > of
> >> > > >>> > > > the pool data (host, port, name, path) which would be used
> >> to
> >> > log
> >> > > >>> > > > the
> >> > > >>> > > > host into the initiator. I *believe* the method
> >> getPhysicalDisk
> >> > > will
> >> > > >>> > > > need to do the work of attaching the lun.
> >>  AttachVolumeCommand
> >> > > calls
> >> > > >>> > > > this and then creates the XML diskdef and attaches it to
> the
> >> > VM.
> >> > > >>> > > > Now,
> >> > > >>> > > > one thing you need to know is that createStoragePool is
> >> called
> >> > > >>> > > > often,
> >> > > >>> > > > sometimes just to make sure the pool is there. You may
> want
> >> to
> >> > > >>> > > > create
> >> > > >>> > > > a map in your adaptor class and keep track of pools that
> >> have
> >> > > been
> >> > > >>> > > > created, LibvirtStorageAdaptor doesn't have to do this
> >> because
> >> > it
> >> > > >>> > > > asks
> >> > > >>> > > > libvirt about which storage pools exist. There are also
> >> calls
> >> > to
> >> > > >>> > > > refresh the pool stats, and all of the other calls can be
> >> seen
> >> > in
> >> > > >>> > > > the
> >> > > >>> > > > StorageAdaptor as well. There's a createPhysicalDisk,
> >> clone,
> >> > > etc,
> >> > > >>> > > > but
> >> > > >>> > > > it's probably a hold-over from 4.1, as I have the vague
> idea
> >> > that
> >> > > >>> > > > volumes are created on the mgmt server via the plugin now,
> >> so
> >> > > >>> > > > whatever
> >> > > >>> > > > doesn't apply can just be stubbed out (or optionally
> >> > > >>> > > > extended/reimplemented here, if you don't mind the hosts
> >> > talking
> >> > > to
> >> > > >>> > > > the san api).
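
Pulling that together, a skeleton of such an adaptor might start out like the
sketch below; the method names follow the description above, but the real method
set and signatures should be taken from StorageAdaptor.java:

    import java.util.HashMap;
    import java.util.Map;

    // Skeleton only: simplified stand-ins to show the shape of a SAN-backed
    // adaptor that tracks its own pools instead of asking libvirt about them.
    public class ManagedIscsiAdaptorSketch {

        private final Map<String, String> portalByPoolUuid = new HashMap<>();

        // createStoragePool gets called often, so just record the pool if it is new
        public void createStoragePool(String uuid, String host, int port, String path) {
            portalByPoolUuid.putIfAbsent(uuid, host + ":" + port);
        }

        // getPhysicalDisk is where the LUN actually gets attached: log in to the
        // target (iscsiadm, as above) and hand back the resulting block device path
        public String getPhysicalDisk(String volumeIqn, String poolUuid) {
            String portal = portalByPoolUuid.get(poolUuid);
            // ... iscsiadm login against 'portal' for 'volumeIqn' goes here ...
            return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + volumeIqn + "-lun-0";
        }
    }

    // In KVMStorageManager.java, next to the "add other storage adaptors here"
    // comment, an instance of the new adaptor would be put into the
    // _storageMapper map so getStorageAdaptor can return it for this pool type.
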
> >> > > >>> > > >
> >> > > >>> > > > There is a difference between attaching new volumes and
> >> > > launching a
> >> > > >>> > > > VM
> >> > > >>> > > > with existing volumes.  In the latter case, the VM
> >> definition
> >> > > that
> >> > > >>> > > > was
> >> > > >>> > > > passed to the KVM agent includes the disks,
> (StartCommand).
> >> > > >>> > > >
> >> > > >>> > > > I'd be interested in how your pool is defined for Xen, I
> >> > imagine
> >> > > it
> >> > > >>> > > > would need to be kept the same. Is it just a definition to
> >> the
> >> > > SAN
> >> > > >>> > > > (ip address or some such, port number) and perhaps a
> volume
> >> > pool
> >> > > >>> > > > name?
> >> > > >>> > > >
> >> > > >>> > > > > If there is a way for me to update the ACL list on the
> >> SAN to
> >> > > have
> >> > > >>> > > only a
> >> > > >>> > > > > single KVM host have access to the volume, that would be
> >> > ideal.
> >> > > >>> > > >
> >> > > >>> > > > That depends on your SAN API.  I was under the impression
> >> that
> >> > > the
> >> > > >>> > > > storage plugin framework allowed for acls, or for you to
> do
> >> > > whatever
> >> > > >>> > > > you want for create/attach/delete/snapshot, etc. You'd
> just
> >> > call
> >> > > >>> > > > your
> >> > > >>> > > > SAN API with the host info for the ACLs prior to when the
> >> disk
> >> > is
> >> > > >>> > > > attached (or the VM is started).  I'd have to look more at
> >> the
> >> > > >>> > > > framework to know the details, in 4.1 I would do this in
> >> > > >>> > > > getPhysicalDisk just prior to connecting up the LUN.
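
In other words, the ordering inside the adaptor could look roughly like this;
SanApiClient and grantAccess are placeholders for whatever the plugin's SAN API
really exposes:

    // Sketch of the ordering described above: restrict the LUN's ACL to this
    // host before the initiator logs in. The host IQN would normally come from
    // /etc/iscsi/initiatorname.iscsi on the KVM host.
    public class AclBeforeAttachSketch {
        interface SanApiClient {
            void grantAccess(String volumeIqn, String hostIqn);
        }

        public void attach(SanApiClient san, String volumeIqn, String hostIqn) {
            san.grantAccess(volumeIqn, hostIqn);  // 1) ACL the LUN to just this host
            loginToTarget(volumeIqn);             // 2) then do the iscsiadm login
        }

        private void loginToTarget(String volumeIqn) {
            // iscsiadm -m node -T <volumeIqn> -p <portal> --login (see earlier sketch)
        }
    }
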
> >> > > >>> > > >
> >> > > >>> > > >
> >> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> >> > > >>> > > > <mi...@solidfire.com> wrote:
> >> > > >>> > > > > OK, yeah, the ACL part will be interesting. That is a
> bit
> >> > > >>> > > > > different
> >> > > >>> > > from
> >> > > >>> > > > how
> >> > > >>> > > > > it works with XenServer and VMware.
> >> > > >>> > > > >
> >> > > >>> > > > > Just to give you an idea how it works in 4.2 with
> >> XenServer:
> >> > > >>> > > > >
> >> > > >>> > > > > * The user creates a CS volume (this is just recorded in
> >> the
> >> > > >>> > > > cloud.volumes
> >> > > >>> > > > > table).
> >> > > >>> > > > >
> >> > > >>> > > > > * The user attaches the volume as a disk to a VM for the
> >> > first
> >> > > >>> > > > > time
> >> > > >>> > (if
> >> > > >>> > > > the
> >> > > >>> > > > > storage allocator picks the SolidFire plug-in, the
> storage
> >> > > >>> > > > > framework
> >> > > >>> > > > invokes
> >> > > >>> > > > > a method on the plug-in that creates a volume on the
> >> > SAN...info
> >> > > >>> > > > > like
> >> > > >>> > > the
> >> > > >>> > > > IQN
> >> > > >>> > > > > of the SAN volume is recorded in the DB).
> >> > > >>> > > > >
> >> > > >>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
> >> > > executed.
> >> > > >>> > > > > It
> >> > > >>> > > > > determines based on a flag passed in that the storage in
> >> > > question
> >> > > >>> > > > > is
> >> > > >>> > > > > "CloudStack-managed" storage (as opposed to
> "traditional"
> >> > > >>> > preallocated
> >> > > >>> > > > > storage). This tells it to discover the iSCSI target.
> Once
> >> > > >>> > > > > discovered
> >> > > >>> > > it
> >> > > >>> > > > > determines if the iSCSI target already contains a
> storage
> >> > > >>> > > > > repository
> >> > > >>> > > (it
> >> > > >>> > > > > would if this were a re-attach situation). If it does
> >> contain
> >> > > an
> >> > > >>> > > > > SR
> >> > > >>> > > > already,
> >> > > >>> > > > > then there should already be one VDI, as well. If there
> >> is no
> >> > > SR,
> >> > > >>> > > > > an
> >> > > >>> > SR
> >> > > >>> > > > is
> >> > > >>> > > > > created and a single VDI is created within it (that
> takes
> >> up
> >> > > about
> >> > > >>> > > > > as
> >> > > >>> > > > much
> >> > > >>> > > > > space as was requested for the CloudStack volume).
> >> > > >>> > > > >
> >> > > >>> > > > > * The normal attach-volume logic continues (it depends
> on
> >> the
> >> > > >>> > existence
> >> > > >>> > > > of
> >> > > >>> > > > > an SR and a VDI).
> >> > > >>> > > > >
> >> > > >>> > > > > The VMware case is essentially the same (mainly just
> >> > substitute
> >> > > >>> > > datastore
> >> > > >>> > > > > for SR and VMDK for VDI).
> >> > > >>> > > > >
> >> > > >>> > > > > In both cases, all hosts in the cluster have discovered
> >> the
> >> > > iSCSI
> >> > > >>> > > target,
> >> > > >>> > > > > but only the host that is currently running the VM that
> is
> >> > > using
> >> > > >>> > > > > the
> >> > > >>> > > VDI
> >> > > >>> > > > (or
> >> > > >>> > > > > VMKD) is actually using the disk.
> >> > > >>> > > > >
> >> > > >>> > > > > Live Migration should be OK because the hypervisors
> >> > communicate
> >> > > >>> > > > > with
> >> > > >>> > > > > whatever metadata they have on the SR (or datastore).
> >> > > >>> > > > >
> >> > > >>> > > > > I see what you're saying with KVM, though.
> >> > > >>> > > > >
> >> > > >>> > > > > In that case, the hosts are clustered only in
> CloudStack's
> >> > > eyes.
> >> > > >>> > > > > CS
> >> > > >>> > > > controls
> >> > > >>> > > > > Live Migration. You don't really need a clustered
> >> filesystem
> >> > on
> >> > > >>> > > > > the
> >> > > >>> > > LUN.
> >> > > >>> > > > The
> >> > > >>> > > > > LUN could be handed over raw to the VM using it.
> >> > > >>> > > > >
> >> > > >>> > > > > If there is a way for me to update the ACL list on the
> >> SAN to
> >> > > have
> >> > > >>> > > only a
> >> > > >>> > > > > single KVM host have access to the volume, that would be
> >> > ideal.
> >> > > >>> > > > >
> >> > > >>> > > > > Also, I agree I'll need to use iscsiadm to discover and
> >> log
> >> > in
> >> > > to
> >> > > >>> > > > > the
> >> > > >>> > > > iSCSI
> >> > > >>> > > > > target. I'll also need to take the resultant new device
> >> and
> >> > > pass
> >> > > >>> > > > > it
> >> > > >>> > > into
> >> > > >>> > > > the
> >> > > >>> > > > > VM.
> >> > > >>> > > > >
> >> > > >>> > > > > Does this sound reasonable? Please call me out on
> >> anything I
> >> > > seem
> >> > > >>> > > > incorrect
> >> > > >>> > > > > about. :)
> >> > > >>> > > > >
> >> > > >>> > > > > Thanks for all the thought on this, Marcus!
> >> > > >>> > > > >
> >> > > >>> > > > >
> >> > > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> >> > > >>> > shadowsor@gmail.com>
> >> > > >>> > > > > wrote:
> >> > > >>> > > > >>
> >> > > >>> > > > >> Perfect. You'll have a domain def (the VM), a disk def, and
> >> > > >>> > > > >> then attach
> >> > > >>> > > > >> the disk def to the vm. You may need to do your own
> >> > > >>> > > > >> StorageAdaptor
> >> > > >>> > and
> >> > > >>> > > > run
> >> > > >>> > > > >> iscsiadm commands to accomplish that, depending on how
> >> the
> >> > > >>> > > > >> libvirt
> >> > > >>> > > iscsi
> >> > > >>> > > > >> works. My impression is that a 1:1:1 pool/lun/volume
> >> isn't
> >> > > how it
> >> > > >>> > > works
> >> > > >>> > > > on
> >> > > >>> > > > >> xen at the moment, nor is it ideal.
> >> > > >>> > > > >>
> >> > > >>> > > > >> Your plugin will handle acls as far as which host can
> see
> >> > > which
> >> > > >>> > > > >> luns
> >> > > >>> > > as
> >> > > >>> > > > >> well, I remember discussing that months ago, so that a
> >> disk
> >> > > won't
> >> > > >>> > > > >> be
> >> > > >>> > > > >> connected until the hypervisor has exclusive access, so
> >> it
> >> > > will
> >> > > >>> > > > >> be
> >> > > >>> > > safe
> >> > > >>> > > > and
> >> > > >>> > > > >> fence the disk from rogue nodes that cloudstack loses
> >> > > >>> > > > >> connectivity
> >> > > >>> > > > with. It
> >> > > >>> > > > >> should revoke access to everything but the target
> host...
> >> > > Except
> >> > > >>> > > > >> for
> >> > > >>> > > > during
> >> > > >>> > > > >> migration but we can discuss that later, there's a
> >> migration
> >> > > prep
> >> > > >>> > > > process
> >> > > >>> > > > >> where the new host can be added to the acls, and the
> old
> >> > host
> >> > > can
> >> > > >>> > > > >> be
> >> > > >>> > > > removed
> >> > > >>> > > > >> post migration.
> >> > > >>> > > > >>
> >> > > >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> >> > > >>> > > mike.tutkowski@solidfire.com
> >> > > >>> > > > >
> >> > > >>> > > > >> wrote:
> >> > > >>> > > > >>>
> >> > > >>> > > > >>> Yeah, that would be ideal.
> >> > > >>> > > > >>>
> >> > > >>> > > > >>> So, I would still need to discover the iSCSI target,
> >> log in
> >> > > to
> >> > > >>> > > > >>> it,
> >> > > >>> > > then
> >> > > >>> > > > >>> figure out what /dev/sdX was created as a result (and
> >> leave
> >> > > it
> >> > > >>> > > > >>> as
> >> > > >>> > is
> >> > > >>> > > -
> >> > > >>> > > > do
> >> > > >>> > > > >>> not format it with any file system...clustered or
> not).
> >> I
> >> > > would
> >> > > >>> > pass
> >> > > >>> > > > that
> >> > > >>> > > > >>> device into the VM.
> >> > > >>> > > > >>>
> >> > > >>> > > > >>> Kind of accurate?
> >> > > >>> > > > >>>
> >> > > >>> > > > >>>
> >> > > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> >> > > >>> > > shadowsor@gmail.com>
> >> > > >>> > > > >>> wrote:
> >> > > >>> > > > >>>>
> >> > > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
> >> > > definitions.
> >> > > >>> > There
> >> > > >>> > > > are
> >> > > >>> > > > >>>> ones that work for block devices rather than files.
> You
> >> > can
> >> > > >>> > > > >>>> piggy
> >> > > >>> > > > back off
> >> > > >>> > > > >>>> of the existing disk definitions and attach it to the
> >> vm
> >> > as
> >> > > a
> >> > > >>> > block
> >> > > >>> > > > device.
> >> > > >>> > > > >>>> The definition is an XML string per libvirt XML
> format.
> >> > You
> >> > > may
> >> > > >>> > want
> >> > > >>> > > > to use
> >> > > >>> > > > >>>> an alternate path to the disk rather than just
> /dev/sdx
> >> > > like I
> >> > > >>> > > > mentioned,
> >> > > >>> > > > >>>> there are by-id paths to the block devices, as well
> as
> >> > other
> >> > > >>> > > > >>>> ones
> >> > > >>> > > > that will
> >> > > >>> > > > >>>> be consistent and easier for management, not sure how
> >> > > familiar
> >> > > >>> > > > >>>> you
> >> > > >>> > > > are with
> >> > > >>> > > > >>>> device naming on Linux.
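
For illustration, the kind of disk definition being described (a raw block
device handed straight to the guest) has roughly this shape; the by-id path and
target dev below are placeholders:

    // Rough shape of the libvirt disk XML for passing the logged-in LUN to the
    // guest as a raw block device.
    public class BlockDiskDefSketch {
        public static String diskXml(String devicePath, String targetDev) {
            return "<disk type='block' device='disk'>\n"
                 + "  <driver name='qemu' type='raw' cache='none'/>\n"
                 + "  <source dev='" + devicePath + "'/>\n"
                 + "  <target dev='" + targetDev + "' bus='virtio'/>\n"
                 + "</disk>";
        }

        public static void main(String[] args) {
            System.out.println(diskXml(
                "/dev/disk/by-id/scsi-360000000000000000e00000000010001", "vdb"));
        }
    }
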
> >> > > >>> > > > >>>>
> >> > > >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> >> > > >>> > > > >>>> <sh...@gmail.com>
> >> > > >>> > > > wrote:
> >> > > >>> > > > >>>>>
> >> > > >>> > > > >>>>> No, as that would rely on virtualized network/iscsi
> >> > > initiator
> >> > > >>> > > inside
> >> > > >>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx
> (your
> >> > lun
> >> > > on
> >> > > >>> > > > hypervisor) as
> >> > > >>> > > > >>>>> a disk to the VM, rather than attaching some image
> >> file
> >> > > that
> >> > > >>> > > resides
> >> > > >>> > > > on a
> >> > > >>> > > > >>>>> filesystem, mounted on the host, living on a target.
> >> > > >>> > > > >>>>>
> >> > > >>> > > > >>>>> Actually, if you plan on the storage supporting live
> >> > > migration
> >> > > >>> > > > >>>>> I
> >> > > >>> > > > think
> >> > > >>> > > > >>>>> this is the only way. You can't put a filesystem on
> it
> >> > and
> >> > > >>> > > > >>>>> mount
> >> > > >>> > it
> >> > > >>> > > > in two
> >> > > >>> > > > >>>>> places to facilitate migration unless it's a
> clustered
> >> > > >>> > > > >>>>> filesystem,
> >> > > >>> > > in
> >> > > >>> > > > which
> >> > > >>> > > > >>>>> case you're back to shared mount point.
> >> > > >>> > > > >>>>>
> >> > > >>> > > > >>>>> As far as I'm aware, the xenserver SR style is
> >> basically
> >> > > LVM
> >> > > >>> > with a
> >> > > >>> > > > xen
> >> > > >>> > > > >>>>> specific cluster management, a custom CLVM. They
> don't
> >> > use
> >> > > a
> >> > > >>> > > > filesystem
> >> > > >>> > > > >>>>> either.
> >> > > >>> > > > >>>>>
> >> > > >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> >> > > >>> > > > >>>>> <mi...@solidfire.com> wrote:
> >> > > >>> > > > >>>>>>
> >> > > >>> > > > >>>>>> When you say, "wire up the lun directly to the vm,"
> >> do
> >> > you
> >> > > >>> > > > >>>>>> mean
> >> > > >>> > > > >>>>>> circumventing the hypervisor? I didn't think we
> >> could do
> >> > > that
> >> > > >>> > > > >>>>>> in
> >> > > >>> > > CS.
> >> > > >>> > > > >>>>>> OpenStack, on the other hand, always circumvents
> the
> >> > > >>> > > > >>>>>> hypervisor,
> >> > > >>> > > as
> >> > > >>> > > > far as I
> >> > > >>> > > > >>>>>> know.
> >> > > >>> > > > >>>>>>
> >> > > >>> > > > >>>>>>
> >> > > >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> >> > > >>> > > > shadowsor@gmail.com>
> >> > > >>> > > > >>>>>> wrote:
> >> > > >>> > > > >>>>>>>
> >> > > >>> > > > >>>>>>> Better to wire up the lun directly to the vm
> unless
> >> > > there is
> >> > > >>> > > > >>>>>>> a
> >> > > >>> > > good
> >> > > >>> > > > >>>>>>> reason not to.
> >> > > >>> > > > >>>>>>>
> >> > > >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> >> > > >>> > shadowsor@gmail.com>
> >> > > >>> > > > >>>>>>> wrote:
> >> > > >>> > > > >>>>>>>>
> >> > > >>> > > > >>>>>>>> You could do that, but as mentioned I think it's a
> >> > > mistake
> >> > > >>> > > > >>>>>>>> to
> >> > > >>> > go
> >> > > >>> > > to
> >> > > >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS
> >> volumes to
> >> > > luns
> >> > > >>> > and
> >> > > >>> > > > then putting
> >> > > >>> > > > >>>>>>>> a filesystem on it, mounting it, and then
> putting a
> >> > > QCOW2
> >> > > >>> > > > >>>>>>>> or
> >> > > >>> > > even
> >> > > >>> > > > RAW disk
> >> > > >>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of
> iops
> >> > > along
> >> > > >>> > > > >>>>>>>> the
> >> > > >>> > > > way, and have
> >> > > >>> > > > >>>>>>>> more overhead with the filesystem and its
> >> journaling,
> >> > > etc.
> >> > > >>> > > > >>>>>>>>
> >> > > >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> >> > > >>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
> >> > > >>> > > > >>>>>>>>>
> >> > > >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground
> in
> >> KVM
> >> > > with
> >> > > >>> > CS.
> >> > > >>> > > > >>>>>>>>>
> >> > > >>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS
> >> today
> >> > > is by
> >> > > >>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the
> >> > location
> >> > > of
> >> > > >>> > > > >>>>>>>>> the
> >> > > >>> > > > share.
> >> > > >>> > > > >>>>>>>>>
> >> > > >>> > > > >>>>>>>>> They can set up their share using Open iSCSI by
> >> > > >>> > > > >>>>>>>>> discovering
> >> > > >>> > > their
> >> > > >>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
> >> > > somewhere
> >> > > >>> > > > >>>>>>>>> on
> >> > > >>> > > > their file
> >> > > >>> > > > >>>>>>>>> system.
> >> > > >>> > > > >>>>>>>>>
> >> > > >>> > > > >>>>>>>>> Would it make sense for me to just do that
> >> discovery,
> >> > > >>> > > > >>>>>>>>> logging
> >> > > >>> > > in,
> >> > > >>> > > > >>>>>>>>> and mounting behind the scenes for them and
> >> letting
> >> > the
> >> > > >>> > current
> >> > > >>> > > > code manage
> >> > > >>> > > > >>>>>>>>> the rest as it currently does?
> >> > > >>> > > > >>>>>>>>>
> >> > > >>> > > > >>>>>>>>>
> >> > > >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> >> > > >>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
> >> > > >>> > > > >>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I
> >> need
> >> > > to
> >> > > >>> > catch
> >> > > >>> > > up
> >> > > >>> > > > >>>>>>>>>> on the work done in KVM, but this is basically
> >> just
> >> > > disk
> >> > > >>> > > > snapshots + memory
> >> > > >>> > > > >>>>>>>>>> dump. I still think disk snapshots would
> >> preferably
> >> > be
> >> > > >>> > handled
> >> > > >>> > > > by the SAN,
> >> > > >>> > > > >>>>>>>>>> and then memory dumps can go to secondary
> >> storage or
> >> > > >>> > something
> >> > > >>> > > > else. This is
> >> > > >>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we
> will
> >> > > want to
> >> > > >>> > see
> >> > > >>> > > > how others are
> >> > > >>> > > > >>>>>>>>>> planning theirs.
> >> > > >>> > > > >>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> >> > > >>> > > shadowsor@gmail.com
> >> > > >>> > > > >
> >> > > >>> > > > >>>>>>>>>> wrote:
> >> > > >>> > > > >>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd
> use a
> >> > vdi
> >> > > >>> > > > >>>>>>>>>>> style
> >> > > >>> > on
> >> > > >>> > > > an
> >> > > >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a
> >> RAW
> >> > > >>> > > > >>>>>>>>>>> format.
> >> > > >>> > > > Otherwise you're
> >> > > >>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
> >> > > creating
> >> > > >>> > > > >>>>>>>>>>> a
> >> > > >>> > > > QCOW2 disk image,
> >> > > >>> > > > >>>>>>>>>>> and that seems unnecessary and a performance
> >> > killer.
> >> > > >>> > > > >>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a
> >> disk
> >> > to
> >> > > the
> >> > > >>> > VM,
> >> > > >>> > > > and
> >> > > >>> > > > >>>>>>>>>>> handling snapshots on the San side via the
> >> storage
> >> > > >>> > > > >>>>>>>>>>> plugin
> >> > > >>> > is
> >> > > >>> > > > best. My
> >> > > >>> > > > >>>>>>>>>>> impression from the storage plugin refactor
> was
> >> > that
> >> > > >>> > > > >>>>>>>>>>> there
> >> > > >>> > > was
> >> > > >>> > > > a snapshot
> >> > > >>> > > > >>>>>>>>>>> service that would allow the San to handle
> >> > snapshots.
> >> > > >>> > > > >>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> >> > > >>> > > > shadowsor@gmail.com>
> >> > > >>> > > > >>>>>>>>>>> wrote:
> >> > > >>> > > > >>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by
> the
> >> SAN
> >> > > back
> >> > > >>> > end,
> >> > > >>> > > > if
> >> > > >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt
> server
> >> > > could
> >> > > >>> > > > >>>>>>>>>>>> call
> >> > > >>> > > > your plugin for
> >> > > >>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor
> >> > > agnostic. As
> >> > > >>> > far
> >> > > >>> > > > as space, that
> >> > > >>> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With
> >> > ours,
> >> > > we
> >> > > >>> > carve
> >> > > >>> > > > out luns from a
> >> > > >>> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the
> >> pool
> >> > > and is
> >> > > >>> > > > independent of the
> >> > > >>> > > > >>>>>>>>>>>> LUN size the host sees.
> >> > > >>> > > > >>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> >> > > >>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>> Hey Marcus,
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
> >> > libvirt
> >> > > >>> > > > >>>>>>>>>>>>> won't
> >> > > >>> > > > work
> >> > > >>> > > > >>>>>>>>>>>>> when you take into consideration hypervisor
> >> > > snapshots?
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor
> >> > snapshot,
> >> > > the
> >> > > >>> > VDI
> >> > > >>> > > > for
> >> > > >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
> >> > > repository
> >> > > >>> > > > >>>>>>>>>>>>> as
> >> > > >>> > > the
> >> > > >>> > > > volume is on.
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say
> >> for
> >> > > >>> > > > >>>>>>>>>>>>> XenServer
> >> > > >>> > > and
> >> > > >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support
> >> hypervisor
> >> > > >>> > snapshots
> >> > > >>> > > > in 4.2) is I'd
> >> > > >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than
> what
> >> the
> >> > > user
> >> > > >>> > > > requested for the
> >> > > >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our
> >> SAN
> >> > > >>> > > > >>>>>>>>>>>>> thinly
> >> > > >>> > > > provisions volumes,
> >> > > >>> > > > >>>>>>>>>>>>> so the space is not actually used unless it
> >> needs
> >> > > to
> >> > > >>> > > > >>>>>>>>>>>>> be).
> >> > > >>> > > > The CloudStack
> >> > > >>> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN
> >> > volume
> >> > > >>> > until a
> >> > > >>> > > > hypervisor
> >> > > >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also
> >> > reside
> >> > > on
> >> > > >>> > > > >>>>>>>>>>>>> the
> >> > > >>> > > > SAN volume.
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is
> >> no
> >> > > >>> > > > >>>>>>>>>>>>> creation
> >> > > >>> > of
> >> > > >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt
> >> (which,
> >> > > even
> >> > > >>> > > > >>>>>>>>>>>>> if
> >> > > >>> > > > there were support
> >> > > >>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one
> >> LUN
> >> > per
> >> > > >>> > > > >>>>>>>>>>>>> iSCSI
> >> > > >>> > > > target), then I
> >> > > >>> > > > >>>>>>>>>>>>> don't see how using this model will work.
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the
> current
> >> way
> >> > > this
> >> > > >>> > > works
> >> > > >>> > > > >>>>>>>>>>>>> with DIR?
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>> What do you think?
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>> Thanks
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike
> >> Tutkowski
> >> > > >>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for
> >> iSCSI
> >> > > access
> >> > > >>> > > today.
> >> > > >>> > > > >>>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I
> >> > might
> >> > > as
> >> > > >>> > well
> >> > > >>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI
> instead.
> >> > > >>> > > > >>>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus
> >> Sorensen
> >> > > >>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I
> >> > > believe
> >> > > >>> > > > >>>>>>>>>>>>>>> it
> >> > > >>> > > just
> >> > > >>> > > > >>>>>>>>>>>>>>> acts like a
> >> > > >>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to
> >> > that.
> >> > > The
> >> > > >>> > > > end-user
> >> > > >>> > > > >>>>>>>>>>>>>>> is
> >> > > >>> > > > >>>>>>>>>>>>>>> responsible for mounting a file system
> that
> >> all
> >> > > KVM
> >> > > >>> > hosts
> >> > > >>> > > > can
> >> > > >>> > > > >>>>>>>>>>>>>>> access,
> >> > > >>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is
> >> > providing
> >> > > the
> >> > > >>> > > > storage.
> >> > > >>> > > > >>>>>>>>>>>>>>> It could
> >> > > >>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
> >> > > >>> > > > >>>>>>>>>>>>>>> filesystem,
> >> > > >>> > > > >>>>>>>>>>>>>>> cloudstack just
> >> > > >>> > > > >>>>>>>>>>>>>>> knows that the provided directory path has
> >> VM
> >> > > >>> > > > >>>>>>>>>>>>>>> images.
> >> > > >>> > > > >>>>>>>>>>>>>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
> >> > Sorensen
> >> > > >>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI
> >> all
> >> > at
> >> > > the
> >> > > >>> > same
> >> > > >>> > > > >>>>>>>>>>>>>>> > time.
> >> > > >>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> >> > > >>> > > > >>>>>>>>>>>>>>> >
> >> > > >>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
> >> > Tutkowski
> >> > > >>> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple
> storage
> >> > > pools:
> >> > > >>> > > > >>>>>>>>>>>>>>> >>
> >> > > >>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> >> > > >>> > > > >>>>>>>>>>>>>>> >> Name                 State
>  Autostart
> >> > > >>> > > > >>>>>>>>>>>>>>> >>
> -----------------------------------------
> >> > > >>> > > > >>>>>>>>>>>>>>> >> default              active     yes
> >> > > >>> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> >> > > >>> > > > >>>>>>>>>>>>>>> >>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>
> >> > > >>> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
> >> > > Tutkowski
> >> > > >>> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed
> >> out.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt)
> >> storage
> >> > > pool
> >> > > >>> > based
> >> > > >>> > > on
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would
> only
> >> > have
> >> > > one
> >> > > >>> > LUN,
> >> > > >>> > > > so
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> there would only
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume
> in
> >> > the
> >> > > >>> > > (libvirt)
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pool.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and
> >> destroys
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> iSCSI
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem
> >> that
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> libvirt
> >> > > >>> > > does
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> not support
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a
> bit
> >> to
> >> > > see
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> if
> >> > > >>> > > > libvirt
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> supports
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
> >> > > mentioned,
> >> > > >>> > since
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> each one of its
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my
> >> iSCSI
> >> > > >>> > > > targets/LUNs).
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
> >> > > Tutkowski
> >> > > >>> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"),
> NETFS("netfs"),
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         }
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         @Override
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>         }
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>     }
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type
> is
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> currently
> >> > > >>> > > being
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were
> >> getting
> >> > at.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2),
> >> when
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> someone
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> selects the
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it
> >> with
> >> > > iSCSI,
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> is
> >> > > >>> > > > that
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Thanks!
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM,
> Marcus
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> Sorensen
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > >>> > > http://libvirt.org/storage.html#StorageBackendISCSI
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on
> the
> >> > iSCSI
> >> > > >>> > server,
> >> > > >>> > > > and
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> cannot be
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.",
> which
> >> I
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> believe
> >> > > >>> > > your
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work
> of
> >> > > logging
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in
> >> > > >>> > > and
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does
> >> that
> >> > > work
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> in
> >> > > >>> > the
> >> > > >>> > > > Xen
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> stuff).
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether
> >> this
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> provides
> >> > > >>> > a
> >> > > >>> > > > 1:1
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1
> iscsi
> >> > > device
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> as
> >> > > >>> > a
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a
> >> bit
> >> > > more
> >> > > >>> > about
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to
> >> write
> >> > > your
> >> > > >>> > own
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
> >> > > >>> > >  We
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can cross that
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt,
> >> see
> >> > the
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > http://libvirt.org/sources/java/javadoc/ Normally,
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then
> calls
> >> > made
> >> > > to
> >> > > >>> > that
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can look at the
> LibvirtStorageAdaptor
> >> to
> >> > > see
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> how
> >> > > >>> > > that
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> is done for
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write
> some
> >> > test
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> java
> >> > > >>> > > code
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> to see if you
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and
> >> register
> >> > > iscsi
> >> > > >>> > > storage
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> pools before you
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> get started.
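
A starting point for that test code might be something like the sketch below,
assuming the org.libvirt Java bindings; the pool XML follows the iscsi example
in the libvirt storage docs, with a placeholder portal and IQN:

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    // Test sketch: register an iscsi-type storage pool against one target and
    // list what libvirt sees in it.
    public class LibvirtIscsiPoolTest {
        public static void main(String[] args) throws Exception {
            String poolXml =
                  "<pool type='iscsi'>"
                + "  <name>iscsi-test</name>"
                + "  <source>"
                + "    <host name='10.0.0.5'/>"
                + "    <device path='iqn.2013-09.com.example:volume-1'/>"
                + "  </source>"
                + "  <target><path>/dev/disk/by-path</path></target>"
                + "</pool>";

            Connect conn = new Connect("qemu:///system");
            StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);
            // each LUN behind the target should show up as a volume in this pool
            for (String vol : pool.listVolumes()) {
                System.out.println("volume: " + vol);
            }
            pool.destroy();   // stop the pool (logs out of the target)
            conn.close();
        }
    }
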
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM,
> Mike
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com>
> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate
> >> > libvirt
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > more,
> >> > > >>> > > but
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > supports
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from
> >> iSCSI
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > targets,
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > right?
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM,
> >> Mike
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com>
> >> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through
> >> some of
> >> > > the
> >> > > >>> > > classes
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> last
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM,
> >> > Marcus
> >> > > >>> > Sorensen
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will
> >> need
> >> > the
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be
> >> standard
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
> >> > > >>> > > for
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do
> the
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
> >> > > >>> > > > login.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
> >> > > >>> > and
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> >> > > >>> > > > >>>>>>>>>>>>>>> >>>>> >>>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
I totally get where you're coming from with the tiered-pool approach,
though.

Prior to SolidFire, I worked at HP and the product I worked on allowed a
single, clustered SAN to host multiple pools of storage. One pool might be
made up of all-SSD storage nodes while another pool might be made up of
slower HDDs.

That kind of tiering is not what SolidFire QoS is about, though, since
tiering alone does not guarantee QoS.

In the SolidFire SAN, QoS was designed in from the beginning and is
extremely granular. Each volume is provisioned with its own performance
(IOPS) and capacity, so you do not have to worry about Noisy Neighbors.

The idea is to encourage businesses to trust the cloud with their most
critical business applications at a price point on par with traditional
SANs.


On Tue, Sep 17, 2013 at 11:52 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Ah, I think I see the miscommunication.
>
> I should have gone into a bit more detail about the SolidFire SAN.
>
> It is built from the ground up to support QoS on a LUN-by-LUN basis. Every
> LUN is assigned a Min, Max, and Burst number of IOPS.
>
> The Min IOPS are a guaranteed number (as long as the SAN itself is not
> over provisioned). Capacity and IOPS are provisioned independently.
> Multiple volumes and multiple tenants using the same SAN do not suffer from
> the Noisy Neighbor effect.
>
> When you create a Disk Offering in CS that is storage tagged to use
> SolidFire primary storage, you specify a Min, Max, and Burst number of IOPS
> to provision from the SAN for volumes created from that Disk Offering.
>
> There is no notion of RAID groups that you see in more traditional SANs.
> The SAN is built from clusters of storage nodes, and data is replicated
> amongst all SSDs in all storage nodes in the cluster (this is an SSD-only
> SAN) to avoid hot spots and to protect the data should drives and/or
> nodes fail. You then scale the SAN by adding new storage nodes.
>
> Data is compressed and de-duplicated inline across the cluster and all
> volumes are thinly provisioned.
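(As a made-up example of how those numbers flow through: a Disk Offering
tagged for SolidFire might specify Min=1,000, Max=2,000, Burst=4,000 IOPS,
and the plug-in would pass those values, along with the requested size,
straight to the SAN when it carves out the LUN for a volume. A rough
sketch; none of these names come from the real plug-in or SAN API:)

    // Illustrative only: the per-volume QoS settings from a Disk Offering
    // travel with the request the plug-in sends to the SAN for each LUN.
    public class QosLunRequestSketch {
        static class LunRequest {
            final String name;
            final long sizeBytes, minIops, maxIops, burstIops;
            LunRequest(String name, long sizeBytes, long minIops, long maxIops, long burstIops) {
                this.name = name;
                this.sizeBytes = sizeBytes;
                this.minIops = minIops;
                this.maxIops = maxIops;
                this.burstIops = burstIops;
            }
        }

        public static void main(String[] args) {
            // values a user might have picked on the Disk Offering
            LunRequest req = new LunRequest("vol-42", 100L * 1024 * 1024 * 1024, 1000, 2000, 4000);
            System.out.println(req.name + ": min/max/burst IOPS = "
                    + req.minIops + "/" + req.maxIops + "/" + req.burstIops);
        }
    }
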
>
>
> On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> I'm surprised there's no mention of pool on the SAN in your description of
>> the framework. I had assumed this was specific to your implementation,
>> because normally SANs host multiple disk pools, maybe multiple RAID 50s
>> and
>> 10s, or however the SAN admin wants to split it up. Maybe a pool intended
>> for root disks and a separate one for data disks. Or one pool for
>> cloudstack and one dedicated to some other internal db application. But it
>> sounds as though there's no place to specify which disks or pool on the
>> SAN
>> to use.
>>
>> We implemented our own internal storage SAN plugin based on 4.1. We used
>> the 'path' attribute of the primary storage pool object to specify which
>> pool name on the back-end SAN to use, so we could create all-SSD pools and
>> slower spindle pools, then differentiate between them based on storage
>> tags. Normally the path attribute would be the mount point for NFS, but
>> it's just a string. So when registering ours we enter the SAN DNS host
>> name, the SAN's REST API port, and the pool name. Then LUNs created from
>> that primary storage come from the matching disk pool on the SAN. We can
>> create and register multiple pools of different types and purposes on the
>> same SAN. We haven't yet gotten to porting it to the 4.2 framework, so it
>> will be interesting to see what we can come up with to make it work
>> similarly.
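(To illustrate that approach - stuffing the SAN details into the 'path'
field and parsing them back out - something along these lines; the names
are made up for the example, not the actual plugin code:)

    // Hypothetical sketch: primary storage registered with
    // path = "san.example.com:443/ssd-pool" instead of an NFS mount point.
    public class SanPathSketch {
        final String host;
        final int apiPort;
        final String poolName;

        SanPathSketch(String host, int apiPort, String poolName) {
            this.host = host;
            this.apiPort = apiPort;
            this.poolName = poolName;
        }

        static SanPathSketch parse(String path) {
            String[] endpointAndPool = path.split("/", 2);   // "host:port" + "poolName"
            String[] hostAndPort = endpointAndPool[0].split(":");
            return new SanPathSketch(hostAndPort[0],
                    Integer.parseInt(hostAndPort[1]), endpointAndPool[1]);
        }
    }
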
>>  On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com
>> >
>> wrote:
>>
>> > What you're saying here is definitely something we should talk about.
>> >
>> > Hopefully my previous e-mail has clarified how this works a bit.
>> >
>> > It mainly comes down to this:
>> >
>> > For the first time in CS history, primary storage is no longer required
>> to
>> > be preallocated by the admin and then handed to CS. CS volumes don't
>> have
>> > to share a preallocated volume anymore.
>> >
>> > As of 4.2, primary storage can be based on a SAN (or some other storage
>> > device). You can tell CS how many bytes and IOPS to use from this
>> storage
>> > device and CS invokes the appropriate plug-in to carve out LUNs
>> > dynamically.
>> >
>> > Each LUN is home to one and only one data disk. Data disks - in this
>> model
>> > - never share a LUN.
>> >
>> > The main use case for this is so a CS volume can deliver guaranteed
>> IOPS if
>> > the storage device (ex. SolidFire SAN) delivers guaranteed IOPS on a
>> > LUN-by-LUN basis.
>> >
>> >
>> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <shadowsor@gmail.com
>> > >wrote:
>> >
>> > > I guess whether or not a solidfire device is capable of hosting
>> > > multiple disk pools is irrelevant, we'd hope that we could get the
>> > > stats (maybe 30TB available, and 15TB allocated in LUNs). But if these
>> > > stats aren't collected, I can't as an admin define multiple pools and
>> > > expect cloudstack to allocate evenly from them or fill one up and move
>> > > to the next, because it doesn't know how big it is.
>> > >
>> > > Ultimately this discussion has nothing to do with the KVM stuff
>> > > itself, just a tangent, but something to think about.
>> > >
>> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <
>> shadowsor@gmail.com>
>> > > wrote:
>> > > > Ok, on most storage pools it shows how many GB free/used when
>> listing
>> > > > the pool both via API and in the UI. I'm guessing those are empty
>> then
>> > > > for the solid fire storage, but it seems like the user should have
>> to
>> > > > define some sort of pool that the luns get carved out of, and you
>> > > > should be able to get the stats for that, right? Or is a solid fire
>> > > > appliance only one pool per appliance? This isn't about billing, but
>> > > > just so cloudstack itself knows whether or not there is space left
>> on
>> > > > the storage device, so cloudstack can go on allocating from a
>> > > > different primary storage as this one fills up. There are also
>> > > > notifications and things. It seems like there should be a call you
>> can
>> > > > handle for this, maybe Edison knows.
>> > > >
>> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <
>> shadowsor@gmail.com>
>> > > wrote:
>> > > >> You respond to more than attach and detach, right? Don't you create
>> > > luns as
>> > > >> well? Or are you just referring to the hypervisor stuff?
>> > > >>
>> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
>> > mike.tutkowski@solidfire.com
>> > > >
>> > > >> wrote:
>> > > >>>
>> > > >>> Hi Marcus,
>> > > >>>
>> > > >>> I never need to respond to a CreateStoragePool call for either
>> > > XenServer
>> > > >>> or
>> > > >>> VMware.
>> > > >>>
>> > > >>> What happens is I respond only to the Attach- and Detach-volume
>> > > commands.
>> > > >>>
>> > > >>> Let's say an attach comes in:
>> > > >>>
>> > > >>> In this case, I check to see if the storage is "managed." Talking
>> > > >>> XenServer
>> > > >>> here, if it is, I log in to the LUN that is the disk we want to
>> > attach.
>> > > >>> After, if this is the first time attaching this disk, I create an
>> SR
>> > > and a
>> > > >>> VDI within the SR. If it is not the first time attaching this
>> disk,
>> > the
>> > > >>> LUN
>> > > >>> already has the SR and VDI on it.
>> > > >>>
>> > > >>> Once this is done, I let the normal "attach" logic run because
>> this
>> > > logic
>> > > >>> expected an SR and a VDI and now it has it.
>> > > >>>
>> > > >>> It's the same thing for VMware: Just substitute datastore for SR
>> and
>> > > VMDK
>> > > >>> for VDI.
>> > > >>>
>> > > >>> Does that make sense?
>> > > >>>
>> > > >>> Thanks!
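(Sketching those steps, purely for illustration - none of these names are
the real CitrixResourceBase code:)

    // Self-contained sketch of the "managed" attach branch described above.
    public class ManagedAttachSketch {
        // LUNs we've already put an SR/VDI on (stands in for checking the SR)
        private final java.util.Set<String> lunsWithSr = new java.util.HashSet<String>();

        public void attachVolume(String iqn, boolean managed) {
            if (managed) {
                System.out.println("discover + log in to iSCSI target " + iqn);
                if (lunsWithSr.add(iqn)) {
                    System.out.println("first attach: create SR and a single VDI on the LUN");
                }   // on re-attach the SR and VDI are already on the LUN
            }
            System.out.println("run the normal attach logic (it expects an SR and a VDI)");
        }
    }
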
>> > > >>>
>> > > >>>
>> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
>> > > >>> <sh...@gmail.com>wrote:
>> > > >>>
>> > > >>> > What do you do with Xen? I imagine the user enters the SAN
>> > > >>> > details when registering the pool? And the pool details are
>> > > >>> > basically just instructions on how to log into a target, correct?
>> > > >>> >
>> > > >>> > You can choose to log in a KVM host to the target during
>> > > >>> > createStoragePool
>> > > >>> > and save the pool in a map, or just save the pool info in a map
>> for
>> > > >>> > future
>> > > >>> > reference by uuid, for when you do need to log in. The
>> > > createStoragePool
>> > > >>> > then just becomes a way to save the pool info to the agent.
>> > > Personally,
>> > > >>> > I'd
>> > > >>> > log in on the pool create and look/scan for specific luns when
>> > > they're
>> > > >>> > needed, but I haven't thought it through thoroughly. I just say
>> > that
>> > > >>> > mainly
>> > > >>> > because login only happens once, the first time the pool is
>> used,
>> > and
>> > > >>> > every
>> > > >>> > other storage command is about discovering new luns or maybe
>> > > >>> > deleting/disconnecting luns no longer needed. On the other hand,
>> > you
>> > > >>> > could
>> > > >>> > do all of the above: log in on pool create, then also check if
>> > you're
>> > > >>> > logged in on other commands and log in if you've lost
>> connection.
>> > > >>> >
>> > > >>> > With Xen, what does your registered pool   show in the UI for
>> > > avail/used
>> > > >>> > capacity, and how does it get that info? I assume there is some
>> > sort
>> > > of
>> > > >>> > disk pool that the luns are carved from, and that your plugin is
>> > > called
>> > > >>> > to
>> > > >>> > talk to the SAN and expose to the user how much of that pool has
>> > been
>> > > >>> > allocated. Knowing how you already solves these problems with
>> Xen
>> > > will
>> > > >>> > help
>> > > >>> > figure out what to do with KVM.
>> > > >>> >
>> > > >>> > If this is the case, I think the plugin can continue to handle
>> it
>> > > rather
>> > > >>> > than getting details from the agent. I'm not sure if that means
>> > nulls
>> > > >>> > are
>> > > >>> > OK for these on the agent side or what, I need to look at the
>> > storage
>> > > >>> > plugin arch more closely.
>> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
>> > > mike.tutkowski@solidfire.com>
>> > > >>> > wrote:
>> > > >>> >
>> > > >>> > > Hey Marcus,
>> > > >>> > >
>> > > >>> > > I'm reviewing your e-mails as I implement the necessary
>> methods
>> > in
>> > > new
>> > > >>> > > classes.
>> > > >>> > >
>> > > >>> > > "So, referencing StorageAdaptor.java, createStoragePool
>> accepts
>> > > all of
>> > > >>> > > the pool data (host, port, name, path) which would be used to
>> log
>> > > the
>> > > >>> > > host into the initiator."
>> > > >>> > >
>> > > >>> > > Can you tell me, in my case, since a storage pool (primary
>> > > storage) is
>> > > >>> > > actually the SAN, I wouldn't really be logging into anything
>> at
>> > > this
>> > > >>> > point,
>> > > >>> > > correct?
>> > > >>> > >
>> > > >>> > > Also, what kind of capacity, available, and used bytes make
>> sense
>> > > to
>> > > >>> > report
>> > > >>> > > for KVMStoragePool (since KVMStoragePool represents the SAN
>> in my
>> > > case
>> > > >>> > and
>> > > >>> > > not an individual LUN)?
>> > > >>> > >
>> > > >>> > > Thanks!
>> > > >>> > >
>> > > >>> > >
>> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
>> > > shadowsor@gmail.com
>> > > >>> > > >wrote:
>> > > >>> > >
>> > > >>> > > > Ok, KVM will be close to that, of course, because only the hypervisor
>> > > >>> > > > classes differ, the rest is all mgmt server. Creating a volume is just
>> > > >>> > > > a db entry until it's deployed for the first time. AttachVolumeCommand
>> > > >>> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
>> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
>> > > >>> > > > StorageAdaptor) to log in the host to the target, and then you have a
>> > > >>> > > > block device. Maybe libvirt will do that for you, but my quick read
>> > > >>> > > > made it sound like the iscsi libvirt pool type is actually a pool, not
>> > > >>> > > > a lun or volume, so you'll need to figure out if that works or if
>> > > >>> > > > you'll have to use iscsiadm commands.
>> > > >>> > > >
>> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
>> > > >>> > > > doesn't really manage your pool the way you want), you're going to
>> > > >>> > > > have to create a version of the KVMStoragePool class and a
>> > > >>> > > > StorageAdaptor class (see LibvirtStoragePool.java and
>> > > >>> > > > LibvirtStorageAdaptor.java), implementing all of the methods, then in
>> > > >>> > > > KVMStorageManager.java there's a "_storageMapper" map. This is used to
>> > > >>> > > > select the correct adaptor; you can see in this file that every call
>> > > >>> > > > first pulls the correct adaptor out of this map via getStorageAdaptor.
>> > > >>> > > > So you can see a comment in this file that says "add other storage
>> > > >>> > > > adaptors here", where it puts to this map; this is where you'd
>> > > >>> > > > register your adaptor.
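(A sketch of what that registration might look like - the key names and how
the adaptor instance gets constructed are placeholders, and the real map in
KVMStorageManager may differ:)

    // Sketch only: register a custom adaptor next to the libvirt one in
    // KVMStorageManager's "_storageMapper", then always look adaptors up
    // through getStorageAdaptor(). Key names here are hypothetical.
    private final java.util.Map<String, StorageAdaptor> _storageMapper =
            new java.util.HashMap<String, StorageAdaptor>();

    private void registerAdaptor(String key, StorageAdaptor adaptor) {
        _storageMapper.put(key, adaptor);       // e.g. "libvirt", "solidfire"
    }

    private StorageAdaptor getStorageAdaptor(String key) {
        StorageAdaptor adaptor = _storageMapper.get(key);
        // fall back to the default libvirt adaptor if nothing matches
        return (adaptor != null) ? adaptor : _storageMapper.get("libvirt");
    }
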
>> > > >>> > > >
>> > > >>> > > > So, referencing StorageAdaptor.java, createStoragePool accepts all of
>> > > >>> > > > the pool data (host, port, name, path) which would be used to log the
>> > > >>> > > > host into the initiator. I *believe* the method getPhysicalDisk will
>> > > >>> > > > need to do the work of attaching the LUN. AttachVolumeCommand calls
>> > > >>> > > > this and then creates the XML diskdef and attaches it to the VM. Now,
>> > > >>> > > > one thing you need to know is that createStoragePool is called often,
>> > > >>> > > > sometimes just to make sure the pool is there. You may want to create
>> > > >>> > > > a map in your adaptor class and keep track of pools that have been
>> > > >>> > > > created; LibvirtStorageAdaptor doesn't have to do this because it asks
>> > > >>> > > > libvirt about which storage pools exist. There are also calls to
>> > > >>> > > > refresh the pool stats, and all of the other calls can be seen in the
>> > > >>> > > > StorageAdaptor as well. There's a createPhysicalDisk, clone, etc., but
>> > > >>> > > > it's probably a hold-over from 4.1, as I have the vague idea that
>> > > >>> > > > volumes are created on the mgmt server via the plugin now, so whatever
>> > > >>> > > > doesn't apply can just be stubbed out (or optionally
>> > > >>> > > > extended/reimplemented here, if you don't mind the hosts talking to
>> > > >>> > > > the SAN API).
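(And a very rough skeleton of the adaptor side of that - createStoragePool
just caching the pool details, getPhysicalDisk doing the per-LUN work later.
Class and method shapes are illustrative only; the real StorageAdaptor
interface has its own signatures:)

    // Rough sketch, not the real interface: createStoragePool() can be
    // called repeatedly, so it only records the SAN details; the login /
    // device work happens when a specific LUN is actually needed.
    public class CustomStorageAdaptorSketch {
        static class PoolInfo {
            final String uuid, host;
            final int port;
            PoolInfo(String uuid, String host, int port) {
                this.uuid = uuid;
                this.host = host;
                this.port = port;
            }
        }

        private final java.util.Map<String, PoolInfo> pools =
                new java.util.HashMap<String, PoolInfo>();

        public PoolInfo createStoragePool(String uuid, String host, int port) {
            PoolInfo pool = pools.get(uuid);
            if (pool == null) {
                pool = new PoolInfo(uuid, host, port);
                pools.put(uuid, pool);          // no iSCSI login needed yet
            }
            return pool;
        }

        public String getPhysicalDisk(String poolUuid, String iqn) {
            PoolInfo pool = pools.get(poolUuid);
            // here is where the iscsiadm discovery/login would run, then we
            // return a stable block-device path for the disk definition
            return "/dev/disk/by-path/ip-" + pool.host + ":3260-iscsi-" + iqn + "-lun-0";
        }
    }
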
>> > > >>> > > >
>> > > >>> > > > There is a difference between attaching new volumes and
>> > > launching a
>> > > >>> > > > VM
>> > > >>> > > > with existing volumes.  In the latter case, the VM
>> definition
>> > > that
>> > > >>> > > > was
>> > > >>> > > > passed to the KVM agent includes the disks, (StartCommand).
>> > > >>> > > >
>> > > >>> > > > I'd be interested in how your pool is defined for Xen, I
>> > imagine
>> > > it
>> > > >>> > > > would need to be kept the same. Is it just a definition to
>> the
>> > > SAN
>> > > >>> > > > (ip address or some such, port number) and perhaps a volume
>> > pool
>> > > >>> > > > name?
>> > > >>> > > >
>> > > >>> > > > > If there is a way for me to update the ACL list on the
>> SAN to
>> > > have
>> > > >>> > > only a
>> > > >>> > > > > single KVM host have access to the volume, that would be
>> > ideal.
>> > > >>> > > >
>> > > >>> > > > That depends on your SAN API.  I was under the impression
>> that
>> > > the
>> > > >>> > > > storage plugin framework allowed for acls, or for you to do
>> > > whatever
>> > > >>> > > > you want for create/attach/delete/snapshot, etc. You'd just
>> > call
>> > > >>> > > > your
>> > > >>> > > > SAN API with the host info for the ACLs prior to when the
>> disk
>> > is
>> > > >>> > > > attached (or the VM is started).  I'd have to look more at
>> the
>> > > >>> > > > framework to know the details, in 4.1 I would do this in
>> > > >>> > > > getPhysicalDisk just prior to connecting up the LUN.
>> > > >>> > > >
>> > > >>> > > >
>> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> > > >>> > > > <mi...@solidfire.com> wrote:
>> > > >>> > > > > OK, yeah, the ACL part will be interesting. That is a bit
>> > > >>> > > > > different
>> > > >>> > > from
>> > > >>> > > > how
>> > > >>> > > > > it works with XenServer and VMware.
>> > > >>> > > > >
>> > > >>> > > > > Just to give you an idea how it works in 4.2 with
>> XenServer:
>> > > >>> > > > >
>> > > >>> > > > > * The user creates a CS volume (this is just recorded in
>> the
>> > > >>> > > > cloud.volumes
>> > > >>> > > > > table).
>> > > >>> > > > >
>> > > >>> > > > > * The user attaches the volume as a disk to a VM for the
>> > first
>> > > >>> > > > > time
>> > > >>> > (if
>> > > >>> > > > the
>> > > >>> > > > > storage allocator picks the SolidFire plug-in, the storage
>> > > >>> > > > > framework
>> > > >>> > > > invokes
>> > > >>> > > > > a method on the plug-in that creates a volume on the
>> > SAN...info
>> > > >>> > > > > like
>> > > >>> > > the
>> > > >>> > > > IQN
>> > > >>> > > > > of the SAN volume is recorded in the DB).
>> > > >>> > > > >
>> > > >>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
>> > > executed.
>> > > >>> > > > > It
>> > > >>> > > > > determines based on a flag passed in that the storage in
>> > > question
>> > > >>> > > > > is
>> > > >>> > > > > "CloudStack-managed" storage (as opposed to "traditional"
>> > > >>> > preallocated
>> > > >>> > > > > storage). This tells it to discover the iSCSI target. Once
>> > > >>> > > > > discovered
>> > > >>> > > it
>> > > >>> > > > > determines if the iSCSI target already contains a storage
>> > > >>> > > > > repository
>> > > >>> > > (it
>> > > >>> > > > > would if this were a re-attach situation). If it does
>> contain
>> > > an
>> > > >>> > > > > SR
>> > > >>> > > > already,
>> > > >>> > > > > then there should already be one VDI, as well. If there
>> is no
>> > > SR,
>> > > >>> > > > > an
>> > > >>> > SR
>> > > >>> > > > is
>> > > >>> > > > > created and a single VDI is created within it (that takes
>> up
>> > > about
>> > > >>> > > > > as
>> > > >>> > > > much
>> > > >>> > > > > space as was requested for the CloudStack volume).
>> > > >>> > > > >
>> > > >>> > > > > * The normal attach-volume logic continues (it depends on
>> the
>> > > >>> > existence
>> > > >>> > > > of
>> > > >>> > > > > an SR and a VDI).
>> > > >>> > > > >
>> > > >>> > > > > The VMware case is essentially the same (mainly just
>> > substitute
>> > > >>> > > datastore
>> > > >>> > > > > for SR and VMDK for VDI).
>> > > >>> > > > >
>> > > >>> > > > > In both cases, all hosts in the cluster have discovered
>> the
>> > > iSCSI
>> > > >>> > > target,
>> > > >>> > > > > but only the host that is currently running the VM that is
>> > > using
>> > > >>> > > > > the
>> > > >>> > > VDI
>> > > >>> > > > (or
>> > > >>> > > > > VMDK) is actually using the disk.
>> > > >>> > > > >
>> > > >>> > > > > Live Migration should be OK because the hypervisors
>> > communicate
>> > > >>> > > > > with
>> > > >>> > > > > whatever metadata they have on the SR (or datastore).
>> > > >>> > > > >
>> > > >>> > > > > I see what you're saying with KVM, though.
>> > > >>> > > > >
>> > > >>> > > > > In that case, the hosts are clustered only in CloudStack's
>> > > eyes.
>> > > >>> > > > > CS
>> > > >>> > > > controls
>> > > >>> > > > > Live Migration. You don't really need a clustered
>> filesystem
>> > on
>> > > >>> > > > > the
>> > > >>> > > LUN.
>> > > >>> > > > The
>> > > >>> > > > > LUN could be handed over raw to the VM using it.
>> > > >>> > > > >
>> > > >>> > > > > If there is a way for me to update the ACL list on the
>> SAN to
>> > > have
>> > > >>> > > only a
>> > > >>> > > > > single KVM host have access to the volume, that would be
>> > ideal.
>> > > >>> > > > >
>> > > >>> > > > > Also, I agree I'll need to use iscsiadm to discover and
>> log
>> > in
>> > > to
>> > > >>> > > > > the
>> > > >>> > > > iSCSI
>> > > >>> > > > > target. I'll also need to take the resultant new device
>> and
>> > > pass
>> > > >>> > > > > it
>> > > >>> > > into
>> > > >>> > > > the
>> > > >>> > > > > VM.
>> > > >>> > > > >
>> > > >>> > > > > Does this sound reasonable? Please call me out on
>> anything I
>> > > seem
>> > > >>> > > > incorrect
>> > > >>> > > > > about. :)
>> > > >>> > > > >
>> > > >>> > > > > Thanks for all the thought on this, Marcus!
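(On the host, the iscsiadm discovery/login and hand-off to the VM mentioned
above would boil down to something like this - a sketch with placeholder
portal/IQN values, not actual agent code:)

    import java.io.IOException;

    // Sketch of the host-side sequence: discover + log in with iscsiadm,
    // then hand the resulting block device to the VM as a block-type disk.
    // Error handling is omitted and the values are placeholders.
    public class IscsiAttachSketch {
        static void run(String... cmd) throws IOException, InterruptedException {
            new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }

        public static void main(String[] args) throws Exception {
            String portal = "10.0.0.5:3260";
            String iqn = "iqn.2010-01.com.solidfire:abcd.vol-42.1";

            // discover the target and log in (Open-iSCSI must be installed)
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

            // a stable path to the new block device, rather than /dev/sdX
            String dev = "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";

            // the kind of disk definition the agent would attach to the domain
            String diskXml =
                      "<disk type='block' device='disk'>\n"
                    + "  <driver name='qemu' type='raw'/>\n"
                    + "  <source dev='" + dev + "'/>\n"
                    + "  <target dev='vdb' bus='virtio'/>\n"
                    + "</disk>";
            System.out.println(diskXml);
        }
    }
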
>> > > >>> > > > >
>> > > >>> > > > >
>> > > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
>> > > >>> > shadowsor@gmail.com>
>> > > >>> > > > > wrote:
>> > > >>> > > > >>
>> > > >>> > > > >> Perfect. You'll have a domain def (the VM), a disk def, and then
>> > > >>> > > > >> attach the disk def to the VM. You may need to do your own
>> > > >>> > > > >> StorageAdaptor and run iscsiadm commands to accomplish that,
>> > > >>> > > > >> depending on how the libvirt iscsi works. My impression is that a
>> > > >>> > > > >> 1:1:1 pool/lun/volume isn't how it works on xen at the moment, nor
>> > > >>> > > > >> is it ideal.
>> > > >>> > > > >>
>> > > >>> > > > >> Your plugin will handle acls as far as which host can see
>> > > which
>> > > >>> > > > >> luns
>> > > >>> > > as
>> > > >>> > > > >> well, I remember discussing that months ago, so that a
>> disk
>> > > won't
>> > > >>> > > > >> be
>> > > >>> > > > >> connected until the hypervisor has exclusive access, so
>> it
>> > > will
>> > > >>> > > > >> be
>> > > >>> > > safe
>> > > >>> > > > and
>> > > >>> > > > >> fence the disk from rogue nodes that cloudstack loses
>> > > >>> > > > >> connectivity
>> > > >>> > > > with. It
>> > > >>> > > > >> should revoke access to everything but the target host...
>> > > Except
>> > > >>> > > > >> for
>> > > >>> > > > during
>> > > >>> > > > >> migration but we can discuss that later, there's a
>> migration
>> > > prep
>> > > >>> > > > process
>> > > >>> > > > >> where the new host can be added to the acls, and the old
>> > host
>> > > can
>> > > >>> > > > >> be
>> > > >>> > > > removed
>> > > >>> > > > >> post migration.
>> > > >>> > > > >>
>> > > >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>> > > >>> > > mike.tutkowski@solidfire.com
>> > > >>> > > > >
>> > > >>> > > > >> wrote:
>> > > >>> > > > >>>
>> > > >>> > > > >>> Yeah, that would be ideal.
>> > > >>> > > > >>>
>> > > >>> > > > >>> So, I would still need to discover the iSCSI target,
>> log in
>> > > to
>> > > >>> > > > >>> it,
>> > > >>> > > then
>> > > >>> > > > >>> figure out what /dev/sdX was created as a result (and
>> leave
>> > > it
>> > > >>> > > > >>> as
>> > > >>> > is
>> > > >>> > > -
>> > > >>> > > > do
>> > > >>> > > > >>> not format it with any file system...clustered or not).
>> I
>> > > would
>> > > >>> > pass
>> > > >>> > > > that
>> > > >>> > > > >>> device into the VM.
>> > > >>> > > > >>>
>> > > >>> > > > >>> Kind of accurate?
>> > > >>> > > > >>>
>> > > >>> > > > >>>
>> > > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
>> > > >>> > > shadowsor@gmail.com>
>> > > >>> > > > >>> wrote:
>> > > >>> > > > >>>>
>> > > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
>> > > definitions.
>> > > >>> > There
>> > > >>> > > > are
>> > > >>> > > > >>>> ones that work for block devices rather than files. You
>> > can
>> > > >>> > > > >>>> piggy
>> > > >>> > > > back off
>> > > >>> > > > >>>> of the existing disk definitions and attach it to the
>> vm
>> > as
>> > > a
>> > > >>> > block
>> > > >>> > > > device.
>> > > >>> > > > >>>> The definition is an XML string per libvirt XML format.
>> > You
>> > > may
>> > > >>> > want
>> > > >>> > > > to use
>> > > >>> > > > >>>> an alternate path to the disk rather than just /dev/sdx
>> > > like I
>> > > >>> > > > mentioned,
>> > > >>> > > > >>>> there are by-id paths to the block devices, as well as
>> > other
>> > > >>> > > > >>>> ones
>> > > >>> > > > that will
>> > > >>> > > > >>>> be consistent and easier for management, not sure how
>> > > familiar
>> > > >>> > > > >>>> you
>> > > >>> > > > are with
>> > > >>> > > > >>>> device naming on Linux.
>> > > >>> > > > >>>>
>> > > >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>> > > >>> > > > >>>> <sh...@gmail.com>
>> > > >>> > > > wrote:
>> > > >>> > > > >>>>>
>> > > >>> > > > >>>>> No, as that would rely on virtualized network/iscsi
>> > > initiator
>> > > >>> > > inside
>> > > >>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your
>> > lun
>> > > on
>> > > >>> > > > hypervisor) as
>> > > >>> > > > >>>>> a disk to the VM, rather than attaching some image
>> file
>> > > that
>> > > >>> > > resides
>> > > >>> > > > on a
>> > > >>> > > > >>>>> filesystem, mounted on the host, living on a target.
>> > > >>> > > > >>>>>
>> > > >>> > > > >>>>> Actually, if you plan on the storage supporting live
>> > > migration
>> > > >>> > > > >>>>> I
>> > > >>> > > > think
>> > > >>> > > > >>>>> this is the only way. You can't put a filesystem on it
>> > and
>> > > >>> > > > >>>>> mount
>> > > >>> > it
>> > > >>> > > > in two
>> > > >>> > > > >>>>> places to facilitate migration unless its a clustered
>> > > >>> > > > >>>>> filesystem,
>> > > >>> > > in
>> > > >>> > > > which
>> > > >>> > > > >>>>> case you're back to shared mount point.
>> > > >>> > > > >>>>>
>> > > >>> > > > >>>>> As far as I'm aware, the xenserver SR style is
>> basically
>> > > LVM
>> > > >>> > with a
>> > > >>> > > > xen
>> > > >>> > > > >>>>> specific cluster management, a custom CLVM. They don't
>> > use
>> > > a
>> > > >>> > > > filesystem
>> > > >>> > > > >>>>> either.
>> > > >>> > > > >>>>>
>> > > >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>> > > >>> > > > >>>>> <mi...@solidfire.com> wrote:
>> > > >>> > > > >>>>>>
>> > > >>> > > > >>>>>> When you say, "wire up the lun directly to the vm,"
>> do
>> > you
>> > > >>> > > > >>>>>> mean
>> > > >>> > > > >>>>>> circumventing the hypervisor? I didn't think we
>> could do
>> > > that
>> > > >>> > > > >>>>>> in
>> > > >>> > > CS.
>> > > >>> > > > >>>>>> OpenStack, on the other hand, always circumvents the
>> > > >>> > > > >>>>>> hypervisor,
>> > > >>> > > as
>> > > >>> > > > far as I
>> > > >>> > > > >>>>>> know.
>> > > >>> > > > >>>>>>
>> > > >>> > > > >>>>>>
>> > > >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
>> > > >>> > > > shadowsor@gmail.com>
>> > > >>> > > > >>>>>> wrote:
>> > > >>> > > > >>>>>>>
>> > > >>> > > > >>>>>>> Better to wire up the lun directly to the vm unless
>> > > there is
>> > > >>> > > > >>>>>>> a
>> > > >>> > > good
>> > > >>> > > > >>>>>>> reason not to.
>> > > >>> > > > >>>>>>>
>> > > >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
>> > > >>> > shadowsor@gmail.com>
>> > > >>> > > > >>>>>>> wrote:
>> > > >>> > > > >>>>>>>>
>> > > >>> > > > >>>>>>>> You could do that, but as mentioned I think its a
>> > > mistake
>> > > >>> > > > >>>>>>>> to
>> > > >>> > go
>> > > >>> > > to
>> > > >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS
>> volumes to
>> > > luns
>> > > >>> > and
>> > > >>> > > > then putting
>> > > >>> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a
>> > > QCOW2
>> > > >>> > > > >>>>>>>> or
>> > > >>> > > even
>> > > >>> > > > RAW disk
>> > > >>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops
>> > > along
>> > > >>> > > > >>>>>>>> the
>> > > >>> > > > way, and have
>> > > >>> > > > >>>>>>>> more overhead with the filesystem and its
>> journaling,
>> > > etc.
>> > > >>> > > > >>>>>>>>
>> > > >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>> > > >>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
>> > > >>> > > > >>>>>>>>>
>> > > >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in
>> KVM
>> > > with
>> > > >>> > CS.
>> > > >>> > > > >>>>>>>>>
>> > > >>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS
>> today
>> > > is by
>> > > >>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the
>> > location
>> > > of
>> > > >>> > > > >>>>>>>>> the
>> > > >>> > > > share.
>> > > >>> > > > >>>>>>>>>
>> > > >>> > > > >>>>>>>>> They can set up their share using Open iSCSI by
>> > > >>> > > > >>>>>>>>> discovering
>> > > >>> > > their
>> > > >>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
>> > > somewhere
>> > > >>> > > > >>>>>>>>> on
>> > > >>> > > > their file
>> > > >>> > > > >>>>>>>>> system.
>> > > >>> > > > >>>>>>>>>
>> > > >>> > > > >>>>>>>>> Would it make sense for me to just do that
>> discovery,
>> > > >>> > > > >>>>>>>>> logging
>> > > >>> > > in,
>> > > >>> > > > >>>>>>>>> and mounting behind the scenes for them and
>> letting
>> > the
>> > > >>> > current
>> > > >>> > > > code manage
>> > > >>> > > > >>>>>>>>> the rest as it currently does?
>> > > >>> > > > >>>>>>>>>
>> > > >>> > > > >>>>>>>>>
>> > > >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>> > > >>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
>> > > >>> > > > >>>>>>>>>>
>> > > >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I
>> need
>> > > to
>> > > >>> > catch
>> > > >>> > > up
>> > > >>> > > > >>>>>>>>>> on the work done in KVM, but this is basically
>> just
>> > > disk
>> > > >>> > > > snapshots + memory
>> > > >>> > > > >>>>>>>>>> dump. I still think disk snapshots would
>> preferably
>> > be
>> > > >>> > handled
>> > > >>> > > > by the SAN,
>> > > >>> > > > >>>>>>>>>> and then memory dumps can go to secondary
>> storage or
>> > > >>> > something
>> > > >>> > > > else. This is
>> > > >>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will
>> > > want to
>> > > >>> > see
>> > > >>> > > > how others are
>> > > >>> > > > >>>>>>>>>> planning theirs.
>> > > >>> > > > >>>>>>>>>>
>> > > >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
>> > > >>> > > shadowsor@gmail.com
>> > > >>> > > > >
>> > > >>> > > > >>>>>>>>>> wrote:
>> > > >>> > > > >>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a
>> > vdi
>> > > >>> > > > >>>>>>>>>>> style
>> > > >>> > on
>> > > >>> > > > an
>> > > >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a
>> RAW
>> > > >>> > > > >>>>>>>>>>> format.
>> > > >>> > > > Otherwise you're
>> > > >>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
>> > > creating
>> > > >>> > > > >>>>>>>>>>> a
>> > > >>> > > > QCOW2 disk image,
>> > > >>> > > > >>>>>>>>>>> and that seems unnecessary and a performance
>> > killer.
>> > > >>> > > > >>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a
>> disk
>> > to
>> > > the
>> > > >>> > VM,
>> > > >>> > > > and
>> > > >>> > > > >>>>>>>>>>> handling snapshots on the San side via the
>> storage
>> > > >>> > > > >>>>>>>>>>> plugin
>> > > >>> > is
>> > > >>> > > > best. My
>> > > >>> > > > >>>>>>>>>>> impression from the storage plugin refactor was
>> > that
>> > > >>> > > > >>>>>>>>>>> there
>> > > >>> > > was
>> > > >>> > > > a snapshot
>> > > >>> > > > >>>>>>>>>>> service that would allow the San to handle
>> > snapshots.
>> > > >>> > > > >>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
>> > > >>> > > > shadowsor@gmail.com>
>> > > >>> > > > >>>>>>>>>>> wrote:
>> > > >>> > > > >>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the
>> SAN
>> > > back
>> > > >>> > end,
>> > > >>> > > > if
>> > > >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
>> > > could
>> > > >>> > > > >>>>>>>>>>>> call
>> > > >>> > > > your plugin for
>> > > >>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor
>> > > agnostic. As
>> > > >>> > far
>> > > >>> > > > as space, that
>> > > >>> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With
>> > ours,
>> > > we
>> > > >>> > carve
>> > > >>> > > > out luns from a
>> > > >>> > > > >>>>>>>>>>>> pool, and the snapshot spave comes from the
>> pool
>> > > and is
>> > > >>> > > > independent of the
>> > > >>> > > > >>>>>>>>>>>> LUN size the host sees.
>> > > >>> > > > >>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>> > > >>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>> Hey Marcus,
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
>> > libvirt
>> > > >>> > > > >>>>>>>>>>>>> won't
>> > > >>> > > > work
>> > > >>> > > > >>>>>>>>>>>>> when you take into consideration hypervisor
>> > > snapshots?
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor
>> > snapshot,
>> > > the
>> > > >>> > VDI
>> > > >>> > > > for
>> > > >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
>> > > repository
>> > > >>> > > > >>>>>>>>>>>>> as
>> > > >>> > > the
>> > > >>> > > > volume is on.
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say
>> for
>> > > >>> > > > >>>>>>>>>>>>> XenServer
>> > > >>> > > and
>> > > >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support
>> hypervisor
>> > > >>> > snapshots
>> > > >>> > > > in 4.2) is I'd
>> > > >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what
>> the
>> > > user
>> > > >>> > > > requested for the
>> > > >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our
>> SAN
>> > > >>> > > > >>>>>>>>>>>>> thinly
>> > > >>> > > > provisions volumes,
>> > > >>> > > > >>>>>>>>>>>>> so the space is not actually used unless it
>> needs
>> > > to
>> > > >>> > > > >>>>>>>>>>>>> be).
>> > > >>> > > > The CloudStack
>> > > >>> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN
>> > volume
>> > > >>> > until a
>> > > >>> > > > hypervisor
>> > > >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also
>> > reside
>> > > on
>> > > >>> > > > >>>>>>>>>>>>> the
>> > > >>> > > > SAN volume.
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is
>> no
>> > > >>> > > > >>>>>>>>>>>>> creation
>> > > >>> > of
>> > > >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt
>> (which,
>> > > even
>> > > >>> > > > >>>>>>>>>>>>> if
>> > > >>> > > > there were support
>> > > >>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one
>> LUN
>> > per
>> > > >>> > > > >>>>>>>>>>>>> iSCSI
>> > > >>> > > > target), then I
>> > > >>> > > > >>>>>>>>>>>>> don't see how using this model will work.
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current
>> way
>> > > this
>> > > >>> > > works
>> > > >>> > > > >>>>>>>>>>>>> with DIR?
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>> What do you think?
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>> Thanks
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike
>> Tutkowski
>> > > >>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> > > >>> > > > >>>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for
>> iSCSI
>> > > access
>> > > >>> > > today.
>> > > >>> > > > >>>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I
>> > might
>> > > as
>> > > >>> > well
>> > > >>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>> > > >>> > > > >>>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus
>> Sorensen
>> > > >>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> > > >>> > > > >>>>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I
>> > > believe
>> > > >>> > > > >>>>>>>>>>>>>>> it
>> > > >>> > > just
>> > > >>> > > > >>>>>>>>>>>>>>> acts like a
>> > > >>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to
>> > that.
>> > > The
>> > > >>> > > > end-user
>> > > >>> > > > >>>>>>>>>>>>>>> is
>> > > >>> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that
>> all
>> > > KVM
>> > > >>> > hosts
>> > > >>> > > > can
>> > > >>> > > > >>>>>>>>>>>>>>> access,
>> > > >>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is
>> > providing
>> > > the
>> > > >>> > > > storage.
>> > > >>> > > > >>>>>>>>>>>>>>> It could
>> > > >>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>> > > >>> > > > >>>>>>>>>>>>>>> filesystem,
>> > > >>> > > > >>>>>>>>>>>>>>> cloudstack just
>> > > >>> > > > >>>>>>>>>>>>>>> knows that the provided directory path has
>> VM
>> > > >>> > > > >>>>>>>>>>>>>>> images.
>> > > >>> > > > >>>>>>>>>>>>>>>
>> > > >>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
>> > Sorensen
>> > > >>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> > > >>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI
>> all
>> > at
>> > > the
>> > > >>> > same
>> > > >>> > > > >>>>>>>>>>>>>>> > time.
>> > > >>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
>> > > >>> > > > >>>>>>>>>>>>>>> >
>> > > >>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
>> > Tutkowski
>> > > >>> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>> > > >>> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage
>> > > pools:
>> > > >>> > > > >>>>>>>>>>>>>>> >>
>> > > >>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> > > >>> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
>> > > >>> > > > >>>>>>>>>>>>>>> >> -----------------------------------------

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Ah, I think I see the miscommunication.

I should have gone into a bit more detail about the SolidFire SAN.

It is built from the ground up to support QoS on a LUN-by-LUN basis. Every
LUN is assigned a Min, Max, and Burst number of IOPS.

The Min IOPS are a guaranteed number (as long as the SAN itself is not over
provisioned). Capacity and IOPS are provisioned independently. Multiple
volumes and multiple tenants using the same SAN do not suffer from the
Noisy Neighbor effect.

When you create a Disk Offering in CS that is storage tagged to use
SolidFire primary storage, you specify a Min, Max, and Burst number of IOPS
to provision from the SAN for volumes created from that Disk Offering.

There is no notion of RAID groups that you see in more traditional SANs.
The SAN is built from clusters of storage nodes and data is replicated
amongst all SSDs in all storage nodes (this is an SSD-only SAN) in the
cluster to avoid hot spots and protect the data should drives and/or
nodes fail. You then scale the SAN by adding new storage nodes.

Data is compressed and de-duplicated inline across the cluster and all
volumes are thinly provisioned.
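
To make the provisioning side concrete, the request my plug-in sends to the
SAN for each CloudStack volume amounts to something like the sketch below.
The method and field names (CreateVolume, totalSize, minIOPS, etc.) are
approximations from memory, not the SAN's exact API, and the values would
come from the Disk Offering the end user selected:

    // Rough sketch of the per-volume QoS provisioning request.
    // One request per CloudStack volume, so the QoS applies to exactly one LUN.
    public class QosVolumeRequestSketch {

        public static String buildCreateVolumeJson(String volName, long sizeInBytes,
                long minIops, long maxIops, long burstIops) {
            return "{ \"method\": \"CreateVolume\", \"params\": {"
                + " \"name\": \"" + volName + "\","
                + " \"totalSize\": " + sizeInBytes + ","
                + " \"qos\": {"
                + " \"minIOPS\": " + minIops + ","
                + " \"maxIOPS\": " + maxIops + ","
                + " \"burstIOPS\": " + burstIops
                + " } } }";
        }

        public static void main(String[] args) {
            // Example values only; in practice they come from the Disk Offering.
            System.out.println(buildCreateVolumeJson("csvol-1234",
                100L * 1024 * 1024 * 1024, 1000, 2000, 4000));
        }
    }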


On Tue, Sep 17, 2013 at 11:27 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> I'm surprised there's no mention of pool on the SAN in your description of
> the framework. I had assumed this was specific to your implementation,
> because normally SANs host multiple disk pools, maybe multiple RAID 50s and
> 10s, or however the SAN admin wants to split it up. Maybe a pool intended
> for root disks and a separate one for data disks. Or one pool for
> cloudstack and one dedicated to some other internal db application. But it
> sounds as though there's no place to specify which disks or pool on the SAN
> to use.
>
> We implemented our own internal storage SAN plugin based on 4.1. We used
> the 'path' attribute of the primary storage pool object to specify which
> pool name on the back end SAN to use, so we could create all-ssd pools and
> slower spindle pools, then differentiate between them based on storage
> tags. Normally the path attribute would be the mount point for NFS, but it's
> just a string. So when registering ours we enter San dns host name, the
> san's rest api port, and the pool name. Then luns created from that primary
> storage come from the matching disk pool on the SAN. We can create and
> register multiple pools of different types and purposes on the same SAN. We
> haven't yet gotten to porting it to the 4.2 framework, so it will be
> interesting to see what we can come up with to make it work similarly.
>  On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > What you're saying here is definitely something we should talk about.
> >
> > Hopefully my previous e-mail has clarified how this works a bit.
> >
> > It mainly comes down to this:
> >
> > For the first time in CS history, primary storage is no longer required
> to
> > be preallocated by the admin and then handed to CS. CS volumes don't have
> > to share a preallocated volume anymore.
> >
> > As of 4.2, primary storage can be based on a SAN (or some other storage
> > device). You can tell CS how many bytes and IOPS to use from this storage
> > device and CS invokes the appropriate plug-in to carve out LUNs
> > dynamically.
> >
> > Each LUN is home to one and only one data disk. Data disks - in this
> model
> > - never share a LUN.
> >
> > The main use case for this is so a CS volume can deliver guaranteed IOPS
> if
> > the storage device (ex. SolidFire SAN) delivers guaranteed IOPS on a
> > LUN-by-LUN basis.
> >
> >
> > On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> >
> > > I guess whether or not a solidfire device is capable of hosting
> > > multiple disk pools is irrelevant, we'd hope that we could get the
> > > stats (maybe 30TB availabie, and 15TB allocated in LUNs). But if these
> > > stats aren't collected, I can't as an admin define multiple pools and
> > > expect cloudstack to allocate evenly from them or fill one up and move
> > > to the next, because it doesn't know how big it is.
> > >
> > > Ultimately this discussion has nothing to do with the KVM stuff
> > > itself, just a tangent, but something to think about.
> > >
> > > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <shadowsor@gmail.com
> >
> > > wrote:
> > > > Ok, on most storage pools it shows how many GB free/used when listing
> > > > the pool both via API and in the UI. I'm guessing those are empty
> then
> > > > for the solid fire storage, but it seems like the user should have to
> > > > define some sort of pool that the luns get carved out of, and you
> > > > should be able to get the stats for that, right? Or is a solid fire
> > > > appliance only one pool per appliance? This isn't about billing, but
> > > > just so cloudstack itself knows whether or not there is space left on
> > > > the storage device, so cloudstack can go on allocating from a
> > > > different primary storage as this one fills up. There are also
> > > > notifications and things. It seems like there should be a call you
> can
> > > > handle for this, maybe Edison knows.
> > > >
> > > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <
> shadowsor@gmail.com>
> > > wrote:
> > > >> You respond to more than attach and detach, right? Don't you create
> > > luns as
> > > >> well? Or are you just referring to the hypervisor stuff?
> > > >>
> > > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <
> > mike.tutkowski@solidfire.com
> > > >
> > > >> wrote:
> > > >>>
> > > >>> Hi Marcus,
> > > >>>
> > > >>> I never need to respond to a CreateStoragePool call for either
> > > XenServer
> > > >>> or
> > > >>> VMware.
> > > >>>
> > > >>> What happens is I respond only to the Attach- and Detach-volume
> > > commands.
> > > >>>
> > > >>> Let's say an attach comes in:
> > > >>>
> > > >>> In this case, I check to see if the storage is "managed." Talking
> > > >>> XenServer
> > > >>> here, if it is, I log in to the LUN that is the disk we want to
> > attach.
> > > >>> After, if this is the first time attaching this disk, I create an
> SR
> > > and a
> > > >>> VDI within the SR. If it is not the first time attaching this disk,
> > the
> > > >>> LUN
> > > >>> already has the SR and VDI on it.
> > > >>>
> > > >>> Once this is done, I let the normal "attach" logic run because this
> > > logic
> > > >>> expected an SR and a VDI and now it has it.
> > > >>>
> > > >>> It's the same thing for VMware: Just substitute datastore for SR
> and
> > > VMDK
> > > >>> for VDI.
> > > >>>
> > > >>> Does that make sense?
> > > >>>
> > > >>> Thanks!
> > > >>>
> > > >>>
> > > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
> > > >>> <sh...@gmail.com>wrote:
> > > >>>
> > > >>> > What do you do with Xen? I imagine the user enter the SAN details
> > > when
> > > >>> > registering the pool? A the pool details are basically just
> > > instructions
> > > >>> > on
> > > >>> > how to log into a target, correct?
> > > >>> >
> > > >>> > You can choose to log in a KVM host to the target during
> > > >>> > createStoragePool
> > > >>> > and save the pool in a map, or just save the pool info in a map
> for
> > > >>> > future
> > > >>> > reference by uuid, for when you do need to log in. The
> > > createStoragePool
> > > >>> > then just becomes a way to save the pool info to the agent.
> > > Personally,
> > > >>> > I'd
> > > >>> > log in on the pool create and look/scan for specific luns when
> > > they're
> > > >>> > needed, but I haven't thought it through thoroughly. I just say
> > that
> > > >>> > mainly
> > > >>> > because login only happens once, the first time the pool is used,
> > and
> > > >>> > every
> > > >>> > other storage command is about discovering new luns or maybe
> > > >>> > deleting/disconnecting luns no longer needed. On the other hand,
> > you
> > > >>> > could
> > > >>> > do all of the above: log in on pool create, then also check if
> > you're
> > > >>> > logged in on other commands and log in if you've lost connection.
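
That matches what I had in mind: on the agent side, createStoragePool would do
little more than remember the SAN/pool details keyed by UUID so later commands
can find them (and log in lazily when a LUN is actually needed). A minimal
sketch only, assuming a new adaptor-side class of my own -- the class and field
names below are placeholders, not existing CloudStack code:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch: remember pool details; createStoragePool is called often, so keep it idempotent.
    public class SanStoragePoolCache {

        public static class PoolInfo {
            final String uuid;
            final String host;  // SAN address
            final int port;
            final String path;  // whatever we decide to carry in 'path'

            PoolInfo(String uuid, String host, int port, String path) {
                this.uuid = uuid;
                this.host = host;
                this.port = port;
                this.path = path;
            }
        }

        private final Map<String, PoolInfo> pools = new ConcurrentHashMap<String, PoolInfo>();

        public PoolInfo createStoragePool(String uuid, String host, int port, String path) {
            PoolInfo pool = new PoolInfo(uuid, host, port, path);
            pools.put(uuid, pool);
            return pool;
        }

        public PoolInfo getStoragePool(String uuid) {
            return pools.get(uuid);
        }
    }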
> > > >>> >
> > > >>> > With Xen, what does your registered pool   show in the UI for
> > > avail/used
> > > >>> > capacity, and how does it get that info? I assume there is some
> > sort
> > > of
> > > >>> > disk pool that the luns are carved from, and that your plugin is
> > > called
> > > >>> > to
> > > >>> > talk to the SAN and expose to the user how much of that pool has
> > been
> > > >>> > allocated. Knowing how you already solves these problems with Xen
> > > will
> > > >>> > help
> > > >>> > figure out what to do with KVM.
> > > >>> >
> > > >>> > If this is the case, I think the plugin can continue to handle it
> > > rather
> > > >>> > than getting details from the agent. I'm not sure if that means
> > nulls
> > > >>> > are
> > > >>> > OK for these on the agent side or what, I need to look at the
> > storage
> > > >>> > plugin arch more closely.
> > > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
> > > mike.tutkowski@solidfire.com>
> > > >>> > wrote:
> > > >>> >
> > > >>> > > Hey Marcus,
> > > >>> > >
> > > >>> > > I'm reviewing your e-mails as I implement the necessary methods
> > in
> > > new
> > > >>> > > classes.
> > > >>> > >
> > > >>> > > "So, referencing StorageAdaptor.java, createStoragePool accepts
> > > all of
> > > >>> > > the pool data (host, port, name, path) which would be used to
> log
> > > the
> > > >>> > > host into the initiator."
> > > >>> > >
> > > >>> > > Can you tell me, in my case, since a storage pool (primary
> > > storage) is
> > > >>> > > actually the SAN, I wouldn't really be logging into anything at
> > > this
> > > >>> > point,
> > > >>> > > correct?
> > > >>> > >
> > > >>> > > Also, what kind of capacity, available, and used bytes make
> sense
> > > to
> > > >>> > report
> > > >>> > > for KVMStoragePool (since KVMStoragePool represents the SAN in
> my
> > > case
> > > >>> > and
> > > >>> > > not an individual LUN)?
> > > >>> > >
> > > >>> > > Thanks!
> > > >>> > >
> > > >>> > >
> > > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> > > shadowsor@gmail.com
> > > >>> > > >wrote:
> > > >>> > >
> > > >>> > > > Ok, KVM will be close to that, of course, because only the
> > > >>> > > > hypervisor
> > > >>> > > > classes differ, the rest is all mgmt server. Creating a
> volume
> > is
> > > >>> > > > just
> > > >>> > > > a db entry until it's deployed for the first time.
> > > >>> > > > AttachVolumeCommand
> > > >>> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > > >>> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a
> > KVM
> > > >>> > > > StorageAdaptor) to log in the host to the target and then you
> > > have a
> > > >>> > > > block device.  Maybe libvirt will do that for you, but my
> quick
> > > read
> > > >>> > > > made it sound like the iscsi libvirt pool type is actually a
> > > pool,
> > > >>> > > > not
> > > >>> > > > a lun or volume, so you'll need to figure out if that works
> or
> > if
> > > >>> > > > you'll have to use iscsiadm commands.
> > > >>> > > >
> > > >>> > > > If you're NOT going to use LibvirtStorageAdaptor (because
> > Libvirt
> > > >>> > > > doesn't really manage your pool the way you want), you're
> going
> > > to
> > > >>> > > > have to create a version of KVMStoragePool class and a
> > > >>> > > > StorageAdaptor
> > > >>> > > > class (see LibvirtStoragePool.java and
> > > LibvirtStorageAdaptor.java),
> > > >>> > > > implementing all of the methods, then in
> KVMStorageManager.java
> > > >>> > > > there's a "_storageMapper" map. This is used to select the
> > > correct
> > > >>> > > > adaptor, you can see in this file that every call first pulls
> > the
> > > >>> > > > correct adaptor out of this map via getStorageAdaptor. So you
> > can
> > > >>> > > > see
> > > >>> > > > a comment in this file that says "add other storage adaptors
> > > here",
> > > >>> > > > where it puts to this map, this is where you'd register your
> > > >>> > > > adaptor.
> > > >>> > > >
> > > >>> > > > So, referencing StorageAdaptor.java, createStoragePool
> accepts
> > > all
> > > >>> > > > of
> > > >>> > > > the pool data (host, port, name, path) which would be used to
> > log
> > > >>> > > > the
> > > >>> > > > host into the initiator. I *believe* the method
> getPhysicalDisk
> > > will
> > > >>> > > > need to do the work of attaching the lun.
>  AttachVolumeCommand
> > > calls
> > > >>> > > > this and then creates the XML diskdef and attaches it to the
> > VM.
> > > >>> > > > Now,
> > > >>> > > > one thing you need to know is that createStoragePool is
> called
> > > >>> > > > often,
> > > >>> > > > sometimes just to make sure the pool is there. You may want
> to
> > > >>> > > > create
> > > >>> > > > a map in your adaptor class and keep track of pools that have
> > > been
> > > >>> > > > created, LibvirtStorageAdaptor doesn't have to do this
> because
> > it
> > > >>> > > > asks
> > > >>> > > > libvirt about which storage pools exist. There are also calls
> > to
> > > >>> > > > refresh the pool stats, and all of the other calls can be
> seen
> > in
> > > >>> > > > the
> > > >>> > > > StorageAdaptor as well. There's a createPhysical disk, clone,
> > > etc,
> > > >>> > > > but
> > > >>> > > > it's probably a hold-over from 4.1, as I have the vague idea
> > that
> > > >>> > > > volumes are created on the mgmt server via the plugin now, so
> > > >>> > > > whatever
> > > >>> > > > doesn't apply can just be stubbed out (or optionally
> > > >>> > > > extended/reimplemented here, if you don't mind the hosts
> > talking
> > > to
> > > >>> > > > the san api).
> > > >>> > > >
> > > >>> > > > There is a difference between attaching new volumes and
> > > launching a
> > > >>> > > > VM
> > > >>> > > > with existing volumes.  In the latter case, the VM definition
> > > that
> > > >>> > > > was
> > > >>> > > > passed to the KVM agent includes the disks, (StartCommand).
> > > >>> > > >
> > > >>> > > > I'd be interested in how your pool is defined for Xen, I
> > imagine
> > > it
> > > >>> > > > would need to be kept the same. Is it just a definition to
> the
> > > SAN
> > > >>> > > > (ip address or some such, port number) and perhaps a volume
> > pool
> > > >>> > > > name?
> > > >>> > > >
> > > >>> > > > > If there is a way for me to update the ACL list on the SAN
> to
> > > have
> > > >>> > > only a
> > > >>> > > > > single KVM host have access to the volume, that would be
> > ideal.
> > > >>> > > >
> > > >>> > > > That depends on your SAN API.  I was under the impression
> that
> > > the
> > > >>> > > > storage plugin framework allowed for acls, or for you to do
> > > whatever
> > > >>> > > > you want for create/attach/delete/snapshot, etc. You'd just
> > call
> > > >>> > > > your
> > > >>> > > > SAN API with the host info for the ACLs prior to when the
> disk
> > is
> > > >>> > > > attached (or the VM is started).  I'd have to look more at
> the
> > > >>> > > > framework to know the details, in 4.1 I would do this in
> > > >>> > > > getPhysicalDisk just prior to connecting up the LUN.
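
For the getPhysicalDisk piece, I'm picturing the adaptor shelling out to
iscsiadm and handing back the resulting block device for the disk definition.
Just a sketch of the idea -- the helper class and the assumption that the
volume is LUN 0 of its target are mine, not existing code:

    import java.util.Arrays;

    // Sketch: log the host in to the volume's iSCSI target and return the block device path.
    public class IscsiConnectSketch {

        public static String connectLun(String sanIp, int sanPort, String iqn) throws Exception {
            String portal = sanIp + ":" + sanPort;
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
            // udev creates a stable by-path symlink once the session is up (LUN 0 assumed here).
            return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
        }

        private static void run(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
            if (p.waitFor() != 0) {
                throw new RuntimeException("Command failed: " + Arrays.toString(cmd));
            }
        }
    }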
> > > >>> > > >
> > > >>> > > >
> > > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > > >>> > > > <mi...@solidfire.com> wrote:
> > > >>> > > > > OK, yeah, the ACL part will be interesting. That is a bit
> > > >>> > > > > different
> > > >>> > > from
> > > >>> > > > how
> > > >>> > > > > it works with XenServer and VMware.
> > > >>> > > > >
> > > >>> > > > > Just to give you an idea how it works in 4.2 with
> XenServer:
> > > >>> > > > >
> > > >>> > > > > * The user creates a CS volume (this is just recorded in
> the
> > > >>> > > > cloud.volumes
> > > >>> > > > > table).
> > > >>> > > > >
> > > >>> > > > > * The user attaches the volume as a disk to a VM for the
> > first
> > > >>> > > > > time
> > > >>> > (if
> > > >>> > > > the
> > > >>> > > > > storage allocator picks the SolidFire plug-in, the storage
> > > >>> > > > > framework
> > > >>> > > > invokes
> > > >>> > > > > a method on the plug-in that creates a volume on the
> > SAN...info
> > > >>> > > > > like
> > > >>> > > the
> > > >>> > > > IQN
> > > >>> > > > > of the SAN volume is recorded in the DB).
> > > >>> > > > >
> > > >>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
> > > executed.
> > > >>> > > > > It
> > > >>> > > > > determines based on a flag passed in that the storage in
> > > question
> > > >>> > > > > is
> > > >>> > > > > "CloudStack-managed" storage (as opposed to "traditional"
> > > >>> > preallocated
> > > >>> > > > > storage). This tells it to discover the iSCSI target. Once
> > > >>> > > > > discovered
> > > >>> > > it
> > > >>> > > > > determines if the iSCSI target already contains a storage
> > > >>> > > > > repository
> > > >>> > > (it
> > > >>> > > > > would if this were a re-attach situation). If it does
> contain
> > > an
> > > >>> > > > > SR
> > > >>> > > > already,
> > > >>> > > > > then there should already be one VDI, as well. If there is
> no
> > > SR,
> > > >>> > > > > an
> > > >>> > SR
> > > >>> > > > is
> > > >>> > > > > created and a single VDI is created within it (that takes
> up
> > > about
> > > >>> > > > > as
> > > >>> > > > much
> > > >>> > > > > space as was requested for the CloudStack volume).
> > > >>> > > > >
> > > >>> > > > > * The normal attach-volume logic continues (it depends on
> the
> > > >>> > existence
> > > >>> > > > of
> > > >>> > > > > an SR and a VDI).
> > > >>> > > > >
> > > >>> > > > > The VMware case is essentially the same (mainly just
> > substitute
> > > >>> > > datastore
> > > >>> > > > > for SR and VMDK for VDI).
> > > >>> > > > >
> > > >>> > > > > In both cases, all hosts in the cluster have discovered the
> > > iSCSI
> > > >>> > > target,
> > > >>> > > > > but only the host that is currently running the VM that is
> > > using
> > > >>> > > > > the
> > > >>> > > VDI
> > > >>> > > > (or
> > > >>> > > > > VMKD) is actually using the disk.
> > > >>> > > > >
> > > >>> > > > > Live Migration should be OK because the hypervisors
> > communicate
> > > >>> > > > > with
> > > >>> > > > > whatever metadata they have on the SR (or datastore).
> > > >>> > > > >
> > > >>> > > > > I see what you're saying with KVM, though.
> > > >>> > > > >
> > > >>> > > > > In that case, the hosts are clustered only in CloudStack's
> > > eyes.
> > > >>> > > > > CS
> > > >>> > > > controls
> > > >>> > > > > Live Migration. You don't really need a clustered
> filesystem
> > on
> > > >>> > > > > the
> > > >>> > > LUN.
> > > >>> > > > The
> > > >>> > > > > LUN could be handed over raw to the VM using it.
> > > >>> > > > >
> > > >>> > > > > If there is a way for me to update the ACL list on the SAN
> to
> > > have
> > > >>> > > only a
> > > >>> > > > > single KVM host have access to the volume, that would be
> > ideal.
> > > >>> > > > >
> > > >>> > > > > Also, I agree I'll need to use iscsiadm to discover and log
> > in
> > > to
> > > >>> > > > > the
> > > >>> > > > iSCSI
> > > >>> > > > > target. I'll also need to take the resultant new device and
> > > pass
> > > >>> > > > > it
> > > >>> > > into
> > > >>> > > > the
> > > >>> > > > > VM.
> > > >>> > > > >
> > > >>> > > > > Does this sound reasonable? Please call me out on anything
> I
> > > seem
> > > >>> > > > incorrect
> > > >>> > > > > about. :)
> > > >>> > > > >
> > > >>> > > > > Thanks for all the thought on this, Marcus!
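
On the ACL point, the ordering I'm imagining is that the plug-in (or adaptor)
restricts the LUN to the one target host right before the attach and releases
it after detach or migration. This is a pure sketch -- the san* methods below
are placeholders for whatever our SAN API exposes, not real calls:

    // Sketch: fence a LUN to a single KVM host's initiator before attach.
    public class LunAclSketch {

        public void beforeAttach(String volumeIqn, String hostInitiatorIqn) {
            sanRemoveAllInitiators(volumeIqn);            // start from a clean ACL
            sanAddInitiator(volumeIqn, hostInitiatorIqn); // only this host can see the LUN
        }

        public void afterDetach(String volumeIqn, String hostInitiatorIqn) {
            sanRemoveInitiator(volumeIqn, hostInitiatorIqn);
        }

        // Placeholders only; real implementations would call the SAN's management API.
        private void sanRemoveAllInitiators(String volumeIqn) { }
        private void sanAddInitiator(String volumeIqn, String initiatorIqn) { }
        private void sanRemoveInitiator(String volumeIqn, String initiatorIqn) { }
    }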
> > > >>> > > > >
> > > >>> > > > >
> > > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> > > >>> > shadowsor@gmail.com>
> > > >>> > > > > wrote:
> > > >>> > > > >>
> > > >>> > > > >> Perfect. You'll have a domain def ( the VM), a disk def,
> and
> > > the
> > > >>> > > attach
> > > >>> > > > >> the disk def to the vm. You may need to do your own
> > > >>> > > > >> StorageAdaptor
> > > >>> > and
> > > >>> > > > run
> > > >>> > > > >> iscsiadm commands to accomplish that, depending on how the
> > > >>> > > > >> libvirt
> > > >>> > > iscsi
> > > >>> > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't
> > > how it
> > > >>> > > works
> > > >>> > > > on
> > > >>> > > > >> xen at the momen., nor is it ideal.
> > > >>> > > > >>
> > > >>> > > > >> Your plugin will handle acls as far as which host can see
> > > which
> > > >>> > > > >> luns
> > > >>> > > as
> > > >>> > > > >> well, I remember discussing that months ago, so that a
> disk
> > > won't
> > > >>> > > > >> be
> > > >>> > > > >> connected until the hypervisor has exclusive access, so it
> > > will
> > > >>> > > > >> be
> > > >>> > > safe
> > > >>> > > > and
> > > >>> > > > >> fence the disk from rogue nodes that cloudstack loses
> > > >>> > > > >> connectivity
> > > >>> > > > with. It
> > > >>> > > > >> should revoke access to everything but the target host...
> > > Except
> > > >>> > > > >> for
> > > >>> > > > during
> > > >>> > > > >> migration but we can discuss that later, there's a
> migration
> > > prep
> > > >>> > > > process
> > > >>> > > > >> where the new host can be added to the acls, and the old
> > host
> > > can
> > > >>> > > > >> be
> > > >>> > > > removed
> > > >>> > > > >> post migration.
> > > >>> > > > >>
> > > >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > > >>> > > mike.tutkowski@solidfire.com
> > > >>> > > > >
> > > >>> > > > >> wrote:
> > > >>> > > > >>>
> > > >>> > > > >>> Yeah, that would be ideal.
> > > >>> > > > >>>
> > > >>> > > > >>> So, I would still need to discover the iSCSI target, log
> in
> > > to
> > > >>> > > > >>> it,
> > > >>> > > then
> > > >>> > > > >>> figure out what /dev/sdX was created as a result (and
> leave
> > > it
> > > >>> > > > >>> as
> > > >>> > is
> > > >>> > > -
> > > >>> > > > do
> > > >>> > > > >>> not format it with any file system...clustered or not). I
> > > would
> > > >>> > pass
> > > >>> > > > that
> > > >>> > > > >>> device into the VM.
> > > >>> > > > >>>
> > > >>> > > > >>> Kind of accurate?
> > > >>> > > > >>>
> > > >>> > > > >>>
> > > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > > >>> > > shadowsor@gmail.com>
> > > >>> > > > >>> wrote:
> > > >>> > > > >>>>
> > > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
> > > definitions.
> > > >>> > There
> > > >>> > > > are
> > > >>> > > > >>>> ones that work for block devices rather than files. You
> > can
> > > >>> > > > >>>> piggy
> > > >>> > > > back off
> > > >>> > > > >>>> of the existing disk definitions and attach it to the vm
> > as
> > > a
> > > >>> > block
> > > >>> > > > device.
> > > >>> > > > >>>> The definition is an XML string per libvirt XML format.
> > You
> > > may
> > > >>> > want
> > > >>> > > > to use
> > > >>> > > > >>>> an alternate path to the disk rather than just /dev/sdx
> > > like I
> > > >>> > > > mentioned,
> > > >>> > > > >>>> there are by-id paths to the block devices, as well as
> > other
> > > >>> > > > >>>> ones
> > > >>> > > > that will
> > > >>> > > > >>>> be consistent and easier for management, not sure how
> > > familiar
> > > >>> > > > >>>> you
> > > >>> > > > are with
> > > >>> > > > >>>> device naming on Linux.
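
For reference, the definition I'd be generating for the raw LUN would be a
block-type disk along these lines (a sketch in the spirit of what LibvirtVMDef
builds; the by-path device name below is just an example):

    // Sketch: libvirt XML for attaching the LUN's block device to the guest.
    public class BlockDiskDefSketch {

        public static String diskXml(String blockDevicePath, String guestDev) {
            return "<disk type='block' device='disk'>\n"
                 + "  <driver name='qemu' type='raw' cache='none'/>\n"
                 + "  <source dev='" + blockDevicePath + "'/>\n"
                 + "  <target dev='" + guestDev + "' bus='virtio'/>\n"
                 + "</disk>";
        }

        public static void main(String[] args) {
            System.out.println(diskXml(
                "/dev/disk/by-path/ip-10.1.1.5:3260-iscsi-iqn.2013-09.com.example:vol1-lun-0",
                "vdb"));
        }
    }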
> > > >>> > > > >>>>
> > > >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> > > >>> > > > >>>> <sh...@gmail.com>
> > > >>> > > > wrote:
> > > >>> > > > >>>>>
> > > >>> > > > >>>>> No, as that would rely on virtualized network/iscsi
> > > initiator
> > > >>> > > inside
> > > >>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your
> > lun
> > > on
> > > >>> > > > hypervisor) as
> > > >>> > > > >>>>> a disk to the VM, rather than attaching some image file
> > > that
> > > >>> > > resides
> > > >>> > > > on a
> > > >>> > > > >>>>> filesystem, mounted on the host, living on a target.
> > > >>> > > > >>>>>
> > > >>> > > > >>>>> Actually, if you plan on the storage supporting live
> > > migration
> > > >>> > > > >>>>> I
> > > >>> > > > think
> > > >>> > > > >>>>> this is the only way. You can't put a filesystem on it
> > and
> > > >>> > > > >>>>> mount
> > > >>> > it
> > > >>> > > > in two
> > > >>> > > > >>>>> places to facilitate migration unless its a clustered
> > > >>> > > > >>>>> filesystem,
> > > >>> > > in
> > > >>> > > > which
> > > >>> > > > >>>>> case you're back to shared mount point.
> > > >>> > > > >>>>>
> > > >>> > > > >>>>> As far as I'm aware, the xenserver SR style is
> basically
> > > LVM
> > > >>> > with a
> > > >>> > > > xen
> > > >>> > > > >>>>> specific cluster management, a custom CLVM. They don't
> > use
> > > a
> > > >>> > > > filesystem
> > > >>> > > > >>>>> either.
> > > >>> > > > >>>>>
> > > >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > > >>> > > > >>>>> <mi...@solidfire.com> wrote:
> > > >>> > > > >>>>>>
> > > >>> > > > >>>>>> When you say, "wire up the lun directly to the vm," do
> > you
> > > >>> > > > >>>>>> mean
> > > >>> > > > >>>>>> circumventing the hypervisor? I didn't think we could
> do
> > > that
> > > >>> > > > >>>>>> in
> > > >>> > > CS.
> > > >>> > > > >>>>>> OpenStack, on the other hand, always circumvents the
> > > >>> > > > >>>>>> hypervisor,
> > > >>> > > as
> > > >>> > > > far as I
> > > >>> > > > >>>>>> know.
> > > >>> > > > >>>>>>
> > > >>> > > > >>>>>>
> > > >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > > >>> > > > shadowsor@gmail.com>
> > > >>> > > > >>>>>> wrote:
> > > >>> > > > >>>>>>>
> > > >>> > > > >>>>>>> Better to wire up the lun directly to the vm unless
> > > there is
> > > >>> > > > >>>>>>> a
> > > >>> > > good
> > > >>> > > > >>>>>>> reason not to.
> > > >>> > > > >>>>>>>
> > > >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> > > >>> > shadowsor@gmail.com>
> > > >>> > > > >>>>>>> wrote:
> > > >>> > > > >>>>>>>>
> > > >>> > > > >>>>>>>> You could do that, but as mentioned I think its a
> > > mistake
> > > >>> > > > >>>>>>>> to
> > > >>> > go
> > > >>> > > to
> > > >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes
> to
> > > luns
> > > >>> > and
> > > >>> > > > then putting
> > > >>> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a
> > > QCOW2
> > > >>> > > > >>>>>>>> or
> > > >>> > > even
> > > >>> > > > RAW disk
> > > >>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops
> > > along
> > > >>> > > > >>>>>>>> the
> > > >>> > > > way, and have
> > > >>> > > > >>>>>>>> more overhead with the filesystem and its
> journaling,
> > > etc.
> > > >>> > > > >>>>>>>>
> > > >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > > >>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
> > > >>> > > > >>>>>>>>>
> > > >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in
> KVM
> > > with
> > > >>> > CS.
> > > >>> > > > >>>>>>>>>
> > > >>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS
> today
> > > is by
> > > >>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the
> > location
> > > of
> > > >>> > > > >>>>>>>>> the
> > > >>> > > > share.
> > > >>> > > > >>>>>>>>>
> > > >>> > > > >>>>>>>>> They can set up their share using Open iSCSI by
> > > >>> > > > >>>>>>>>> discovering
> > > >>> > > their
> > > >>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
> > > somewhere
> > > >>> > > > >>>>>>>>> on
> > > >>> > > > their file
> > > >>> > > > >>>>>>>>> system.
> > > >>> > > > >>>>>>>>>
> > > >>> > > > >>>>>>>>> Would it make sense for me to just do that
> discovery,
> > > >>> > > > >>>>>>>>> logging
> > > >>> > > in,
> > > >>> > > > >>>>>>>>> and mounting behind the scenes for them and letting
> > the
> > > >>> > current
> > > >>> > > > code manage
> > > >>> > > > >>>>>>>>> the rest as it currently does?
> > > >>> > > > >>>>>>>>>
> > > >>> > > > >>>>>>>>>
> > > >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > > >>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
> > > >>> > > > >>>>>>>>>>
> > > >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I
> need
> > > to
> > > >>> > catch
> > > >>> > > up
> > > >>> > > > >>>>>>>>>> on the work done in KVM, but this is basically
> just
> > > disk
> > > >>> > > > snapshots + memory
> > > >>> > > > >>>>>>>>>> dump. I still think disk snapshots would
> preferably
> > be
> > > >>> > handled
> > > >>> > > > by the SAN,
> > > >>> > > > >>>>>>>>>> and then memory dumps can go to secondary storage
> or
> > > >>> > something
> > > >>> > > > else. This is
> > > >>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will
> > > want to
> > > >>> > see
> > > >>> > > > how others are
> > > >>> > > > >>>>>>>>>> planning theirs.
> > > >>> > > > >>>>>>>>>>
> > > >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > > >>> > > shadowsor@gmail.com
> > > >>> > > > >
> > > >>> > > > >>>>>>>>>> wrote:
> > > >>> > > > >>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a
> > vdi
> > > >>> > > > >>>>>>>>>>> style
> > > >>> > on
> > > >>> > > > an
> > > >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a
> RAW
> > > >>> > > > >>>>>>>>>>> format.
> > > >>> > > > Otherwise you're
> > > >>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
> > > creating
> > > >>> > > > >>>>>>>>>>> a
> > > >>> > > > QCOW2 disk image,
> > > >>> > > > >>>>>>>>>>> and that seems unnecessary and a performance
> > killer.
> > > >>> > > > >>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk
> > to
> > > the
> > > >>> > VM,
> > > >>> > > > and
> > > >>> > > > >>>>>>>>>>> handling snapshots on the San side via the
> storage
> > > >>> > > > >>>>>>>>>>> plugin
> > > >>> > is
> > > >>> > > > best. My
> > > >>> > > > >>>>>>>>>>> impression from the storage plugin refactor was
> > that
> > > >>> > > > >>>>>>>>>>> there
> > > >>> > > was
> > > >>> > > > a snapshot
> > > >>> > > > >>>>>>>>>>> service that would allow the San to handle
> > snapshots.
> > > >>> > > > >>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > > >>> > > > shadowsor@gmail.com>
> > > >>> > > > >>>>>>>>>>> wrote:
> > > >>> > > > >>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the
> SAN
> > > back
> > > >>> > end,
> > > >>> > > > if
> > > >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
> > > could
> > > >>> > > > >>>>>>>>>>>> call
> > > >>> > > > your plugin for
> > > >>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor
> > > agnostic. As
> > > >>> > far
> > > >>> > > > as space, that
> > > >>> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With
> > ours,
> > > we
> > > >>> > carve
> > > >>> > > > out luns from a
> > > >>> > > > >>>>>>>>>>>> pool, and the snapshot spave comes from the pool
> > > and is
> > > >>> > > > independent of the
> > > >>> > > > >>>>>>>>>>>> LUN size the host sees.
> > > >>> > > > >>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > > >>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>> Hey Marcus,
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
> > libvirt
> > > >>> > > > >>>>>>>>>>>>> won't
> > > >>> > > > work
> > > >>> > > > >>>>>>>>>>>>> when you take into consideration hypervisor
> > > snapshots?
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor
> > snapshot,
> > > the
> > > >>> > VDI
> > > >>> > > > for
> > > >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
> > > repository
> > > >>> > > > >>>>>>>>>>>>> as
> > > >>> > > the
> > > >>> > > > volume is on.
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> > > >>> > > > >>>>>>>>>>>>> XenServer
> > > >>> > > and
> > > >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support
> hypervisor
> > > >>> > snapshots
> > > >>> > > > in 4.2) is I'd
> > > >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what
> the
> > > user
> > > >>> > > > requested for the
> > > >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our
> SAN
> > > >>> > > > >>>>>>>>>>>>> thinly
> > > >>> > > > provisions volumes,
> > > >>> > > > >>>>>>>>>>>>> so the space is not actually used unless it
> needs
> > > to
> > > >>> > > > >>>>>>>>>>>>> be).
> > > >>> > > > The CloudStack
> > > >>> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN
> > volume
> > > >>> > until a
> > > >>> > > > hypervisor
> > > >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also
> > reside
> > > on
> > > >>> > > > >>>>>>>>>>>>> the
> > > >>> > > > SAN volume.
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> > > >>> > > > >>>>>>>>>>>>> creation
> > > >>> > of
> > > >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt
> (which,
> > > even
> > > >>> > > > >>>>>>>>>>>>> if
> > > >>> > > > there were support
> > > >>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN
> > per
> > > >>> > > > >>>>>>>>>>>>> iSCSI
> > > >>> > > > target), then I
> > > >>> > > > >>>>>>>>>>>>> don't see how using this model will work.
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current
> way
> > > this
> > > >>> > > works
> > > >>> > > > >>>>>>>>>>>>> with DIR?
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>> What do you think?
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>> Thanks
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > > >>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > >>> > > > >>>>>>>>>>>>>>
> > > >>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
> > > access
> > > >>> > > today.
> > > >>> > > > >>>>>>>>>>>>>>

-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
I'm surprised there's no mention of a pool on the SAN in your description
of the framework. I had assumed this was specific to your implementation,
because normally SANs host multiple disk pools, maybe multiple RAID 50s and
10s, or however the SAN admin wants to split it up. Maybe a pool intended
for root disks and a separate one for data disks. Or one pool for
CloudStack and one dedicated to some other internal DB application. But it
sounds as though there's no place to specify which disks or pool on the SAN
to use.

We implemented our own internal SAN storage plugin based on 4.1. We used
the 'path' attribute of the primary storage pool object to specify which
pool name on the back-end SAN to use, so we could create all-SSD pools and
slower spindle pools, then differentiate between them based on storage
tags. Normally the path attribute would be the mount point for NFS, but
it's just a string. So when registering ours we enter the SAN DNS host
name, the SAN's REST API port, and the pool name. Then LUNs created from
that primary storage come from the matching disk pool on the SAN. We can
create and register multiple pools of different types and purposes on the
same SAN. We haven't yet gotten to porting it to the 4.2 framework, so it
will be interesting to see what we can come up with to make it work
similarly.
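
As a rough illustration of that registration scheme (not the actual plugin
code), the 'path' string could be parsed on the plugin side along these
lines; the class and field names here are made up for the example:

    // Hypothetical sketch only: how a 'path' value such as
    // "san01.example.com:443/ssd-pool" might be split into SAN details.
    public class SanPoolInfo {
        private final String sanHost;    // SAN DNS host name
        private final int restApiPort;   // SAN REST API port
        private final String poolName;   // back-end disk pool to carve LUNs from

        public SanPoolInfo(String sanHost, int restApiPort, String poolName) {
            this.sanHost = sanHost;
            this.restApiPort = restApiPort;
            this.poolName = poolName;
        }

        public static SanPoolInfo fromPath(String path) {
            String[] hostAndPool = path.split("/", 2);
            String[] hostAndPort = hostAndPool[0].split(":", 2);
            return new SanPoolInfo(hostAndPort[0],
                                   Integer.parseInt(hostAndPort[1]),
                                   hostAndPool[1]);
        }

        public String getSanHost()   { return sanHost; }
        public int getRestApiPort()  { return restApiPort; }
        public String getPoolName()  { return poolName; }
    }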
 On Sep 17, 2013 10:43 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> What you're saying here is definitely something we should talk about.
>
> Hopefully my previous e-mail has clarified how this works a bit.
>
> It mainly comes down to this:
>
> For the first time in CS history, primary storage is no longer required to
> be preallocated by the admin and then handed to CS. CS volumes don't have
> to share a preallocated volume anymore.
>
> As of 4.2, primary storage can be based on a SAN (or some other storage
> device). You can tell CS how many bytes and IOPS to use from this storage
> device and CS invokes the appropriate plug-in to carve out LUNs
> dynamically.
>
> Each LUN is home to one and only one data disk. Data disks - in this model
> - never share a LUN.
>
> The main use case for this is so a CS volume can deliver guaranteed IOPS if
> the storage device (ex. SolidFire SAN) delivers guaranteed IOPS on a
> LUN-by-LUN basis.
>
>
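
In plug-in terms, the 1:1 mapping described above amounts to something like
the sketch below; the SanApiClient interface and its methods are
hypothetical stand-ins for whatever client talks to the SAN's API:

    // Illustrative only: one LUN per CloudStack volume, with per-LUN QoS.
    public class OneLunPerVolumeProvisioner {

        public interface SanApiClient {
            /** Creates a SAN volume/LUN and returns its IQN. */
            String createVolume(String name, long sizeBytes, long minIops, long maxIops);
            void deleteVolume(String iqn);
        }

        private final SanApiClient san;

        public OneLunPerVolumeProvisioner(SanApiClient san) {
            this.san = san;
        }

        // One CloudStack volume == one SAN LUN, so the LUN's QoS settings
        // apply directly to that one volume.
        public String createBackingLun(String volumeUuid, long sizeBytes,
                                       long minIops, long maxIops) {
            return san.createVolume("cs-" + volumeUuid, sizeBytes, minIops, maxIops);
        }

        public void deleteBackingLun(String iqn) {
            san.deleteVolume(iqn);
        }
    }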
> On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
>
> > I guess whether or not a SolidFire device is capable of hosting
> > multiple disk pools is irrelevant; we'd hope that we could get the
> > stats (maybe 30TB available, and 15TB allocated in LUNs). But if these
> > stats aren't collected, I can't as an admin define multiple pools and
> > expect CloudStack to allocate evenly from them or fill one up and move
> > to the next, because it doesn't know how big it is.
> >
> > Ultimately this discussion has nothing to do with the KVM stuff
> > itself, just a tangent, but something to think about.
> >
> > On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <sh...@gmail.com>
> > wrote:
> > > Ok, on most storage pools it shows how many GB free/used when listing
> > > the pool both via API and in the UI. I'm guessing those are empty then
> > > for the SolidFire storage, but it seems like the user should have to
> > > define some sort of pool that the LUNs get carved out of, and you
> > > should be able to get the stats for that, right? Or does a SolidFire
> > > appliance have only one pool per appliance? This isn't about billing,
> > > but just so CloudStack itself knows whether or not there is space left
> > > on the storage device, so CloudStack can go on allocating from a
> > > different primary storage as this one fills up. There are also
> > > notifications and things. It seems like there should be a call you can
> > > handle for this, maybe Edison knows.
> > >
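
Reporting those stats could be as simple as the following sketch, where the
totals come from asking the SAN about the disk pool backing the primary
storage; the API names here are placeholders, not a real SDK:

    // Hypothetical sketch: capacity/used bytes for a SAN-backed primary
    // storage, pulled from the SAN's view of the backing disk pool.
    public class SanPoolStats {

        public interface SanApiClient {
            long getPoolCapacityBytes(String poolName);   // e.g. 30 TB total
            long getPoolAllocatedBytes(String poolName);  // e.g. 15 TB in LUNs
        }

        public static long[] capacityAndUsed(SanApiClient san, String poolName) {
            // Index 0: total capacity, index 1: bytes already allocated to LUNs.
            return new long[] {
                san.getPoolCapacityBytes(poolName),
                san.getPoolAllocatedBytes(poolName)
            };
        }
    }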
> > > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <sh...@gmail.com>
> > wrote:
> > >> You respond to more than attach and detach, right? Don't you create
> > luns as
> > >> well? Or are you just referring to the hypervisor stuff?
> > >>
> > >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com>
> > >> wrote:
> > >>>
> > >>> Hi Marcus,
> > >>>
> > >>> I never need to respond to a CreateStoragePool call for either
> > >>> XenServer or VMware.
> > >>>
> > >>> What happens is I respond only to the Attach- and Detach-volume
> > >>> commands.
> > >>>
> > >>> Let's say an attach comes in:
> > >>>
> > >>> In this case, I check to see if the storage is "managed." Talking
> > >>> XenServer here, if it is, I log in to the LUN that is the disk we want
> > >>> to attach. After, if this is the first time attaching this disk, I
> > >>> create an SR and a VDI within the SR. If it is not the first time
> > >>> attaching this disk, the LUN already has the SR and VDI on it.
> > >>>
> > >>> Once this is done, I let the normal "attach" logic run because this
> > >>> logic expected an SR and a VDI and now it has it.
> > >>>
> > >>> It's the same thing for VMware: Just substitute datastore for SR and
> > >>> VMDK for VDI.
> > >>>
> > >>> Does that make sense?
> > >>>
> > >>> Thanks!
> > >>>
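
The attach flow described above, in very rough sketch form (the XenOps
interface below is a hypothetical stand-in for the XenServer-specific
calls, not a real CloudStack or XenAPI type):

    // Rough sketch only: order of operations for attaching a volume that
    // lives on managed storage.
    public class ManagedStorageAttach {

        public interface XenOps {
            void discoverAndLogin(String iqn, String sanHost);
            boolean srExistsForLun(String iqn);
            void createSrAndVdiOnLun(String iqn, long sizeBytes);
            void runNormalAttachLogic(String iqn);
        }

        public static void attach(XenOps xen, String iqn, String sanHost,
                                  long sizeBytes) {
            // 1. Make sure this host can see the LUN backing the volume.
            xen.discoverAndLogin(iqn, sanHost);

            // 2. First attach ever? Create the SR and its single VDI on the
            //    LUN. On a re-attach they are already there.
            if (!xen.srExistsForLun(iqn)) {
                xen.createSrAndVdiOnLun(iqn, sizeBytes);
            }

            // 3. Hand off to the normal attach logic, which expects an SR + VDI.
            xen.runNormalAttachLogic(iqn);
        }
    }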
> > >>>
> > >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
> > >>> <sh...@gmail.com>wrote:
> > >>>
> > >>> > What do you do with Xen? I imagine the user enters the SAN details
> > >>> > when registering the pool? And the pool details are basically just
> > >>> > instructions on how to log into a target, correct?
> > >>> >
> > >>> > You can choose to log in a KVM host to the target during
> > >>> > createStoragePool
> > >>> > and save the pool in a map, or just save the pool info in a map for
> > >>> > future
> > >>> > reference by uuid, for when you do need to log in. The
> > createStoragePool
> > >>> > then just becomes a way to save the pool info to the agent.
> > Personally,
> > >>> > I'd
> > >>> > log in on the pool create and look/scan for specific luns when
> > they're
> > >>> > needed, but I haven't thought it through thoroughly. I just say
> that
> > >>> > mainly
> > >>> > because login only happens once, the first time the pool is used,
> and
> > >>> > every
> > >>> > other storage command is about discovering new luns or maybe
> > >>> > deleting/disconnecting luns no longer needed. On the other hand,
> you
> > >>> > could
> > >>> > do all of the above: log in on pool create, then also check if
> you're
> > >>> > logged in on other commands and log in if you've lost connection.
> > >>> >
> > >>> > With Xen, what does your registered pool   show in the UI for
> > avail/used
> > >>> > capacity, and how does it get that info? I assume there is some
> sort
> > of
> > >>> > disk pool that the luns are carved from, and that your plugin is
> > called
> > >>> > to
> > >>> > talk to the SAN and expose to the user how much of that pool has been
> > >>> > allocated. Knowing how you already solve these problems with Xen will
> > >>> > help figure out what to do with KVM.
> > >>> >
> > >>> > If this is the case, I think the plugin can continue to handle it
> > rather
> > >>> > than getting details from the agent. I'm not sure if that means
> nulls
> > >>> > are
> > >>> > OK for these on the agent side or what, I need to look at the
> storage
> > >>> > plugin arch more closely.
> > >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
> > mike.tutkowski@solidfire.com>
> > >>> > wrote:
> > >>> >
> > >>> > > Hey Marcus,
> > >>> > >
> > >>> > > I'm reviewing your e-mails as I implement the necessary methods
> in
> > new
> > >>> > > classes.
> > >>> > >
> > >>> > > "So, referencing StorageAdaptor.java, createStoragePool accepts
> > all of
> > >>> > > the pool data (host, port, name, path) which would be used to log
> > the
> > >>> > > host into the initiator."
> > >>> > >
> > >>> > > Can you tell me, in my case, since a storage pool (primary
> > storage) is
> > >>> > > actually the SAN, I wouldn't really be logging into anything at
> > this
> > >>> > point,
> > >>> > > correct?
> > >>> > >
> > >>> > > Also, what kind of capacity, available, and used bytes make sense
> > to
> > >>> > report
> > >>> > > for KVMStoragePool (since KVMStoragePool represents the SAN in my
> > case
> > >>> > and
> > >>> > > not an individual LUN)?
> > >>> > >
> > >>> > > Thanks!
> > >>> > >
> > >>> > >
> > >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> > shadowsor@gmail.com
> > >>> > > >wrote:
> > >>> > >
> > >>> > > > Ok, KVM will be close to that, of course, because only the
> > >>> > > > hypervisor
> > >>> > > > classes differ, the rest is all mgmt server. Creating a volume
> is
> > >>> > > > just
> > >>> > > > a db entry until it's deployed for the first time.
> > >>> > > > AttachVolumeCommand
> > >>> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > >>> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a
> KVM
> > >>> > > > StorageAdaptor) to log in the host to the target and then you
> > have a
> > >>> > > > block device.  Maybe libvirt will do that for you, but my quick
> > read
> > >>> > > > made it sound like the iscsi libvirt pool type is actually a
> > pool,
> > >>> > > > not
> > >>> > > > a lun or volume, so you'll need to figure out if that works or
> if
> > >>> > > > you'll have to use iscsiadm commands.
> > >>> > > >
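
For reference, the iscsiadm calls in question boil down to a discovery
against the portal and a login to the target, after which the kernel
exposes a block device; a minimal sketch of the agent running them (error
handling kept minimal, example values made up):

    import java.io.IOException;
    import java.util.Arrays;

    public class IscsiLogin {

        private static void run(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("Command failed: " + Arrays.toString(cmd));
            }
        }

        /** e.g. login("192.168.0.10:3260", "iqn.2010-01.com.example:volume-1") */
        public static String login(String portal, String iqn) throws Exception {
            // 1. Discover the targets offered by the portal.
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            // 2. Log in to the specific target.
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
            // 3. The LUN now appears as a block device; the by-path name is
            //    stable across reboots, unlike /dev/sdX.
            return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
        }
    }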
> > >>> > > > If you're NOT going to use LibvirtStorageAdaptor (because
> Libvirt
> > >>> > > > doesn't really manage your pool the way you want), you're going
> > to
> > >>> > > > have to create a version of KVMStoragePool class and a
> > >>> > > > StorageAdaptor
> > >>> > > > class (see LibvirtStoragePool.java and
> > LibvirtStorageAdaptor.java),
> > >>> > > > implementing all of the methods, then in KVMStorageManager.java
> > >>> > > > there's a "_storageMapper" map. This is used to select the
> > correct
> > >>> > > > adaptor, you can see in this file that every call first pulls
> the
> > >>> > > > correct adaptor out of this map via getStorageAdaptor. So you
> can
> > >>> > > > see
> > >>> > > > a comment in this file that says "add other storage adaptors
> > here",
> > >>> > > > where it puts to this map, this is where you'd register your
> > >>> > > > adaptor.
> > >>> > > >
> > >>> > > > So, referencing StorageAdaptor.java, createStoragePool accepts
> > all
> > >>> > > > of
> > >>> > > > the pool data (host, port, name, path) which would be used to
> log
> > >>> > > > the
> > >>> > > > host into the initiator. I *believe* the method getPhysicalDisk
> > will
> > >>> > > > need to do the work of attaching the lun.  AttachVolumeCommand
> > calls
> > >>> > > > this and then creates the XML diskdef and attaches it to the
> VM.
> > >>> > > > Now,
> > >>> > > > one thing you need to know is that createStoragePool is called
> > >>> > > > often,
> > >>> > > > sometimes just to make sure the pool is there. You may want to
> > >>> > > > create
> > >>> > > > a map in your adaptor class and keep track of pools that have
> > been
> > >>> > > > created, LibvirtStorageAdaptor doesn't have to do this because
> it
> > >>> > > > asks
> > >>> > > > libvirt about which storage pools exist. There are also calls
> to
> > >>> > > > refresh the pool stats, and all of the other calls can be seen
> in
> > >>> > > > the
> > >>> > > > StorageAdaptor as well. There's a createPhysical disk, clone,
> > etc,
> > >>> > > > but
> > >>> > > > it's probably a hold-over from 4.1, as I have the vague idea
> that
> > >>> > > > volumes are created on the mgmt server via the plugin now, so
> > >>> > > > whatever
> > >>> > > > doesn't apply can just be stubbed out (or optionally
> > >>> > > > extended/reimplemented here, if you don't mind the hosts
> talking
> > to
> > >>> > > > the san api).
> > >>> > > >
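
Pulling that together, a custom adaptor might look roughly like the
skeleton below. It is only a sketch; the real method signatures live in
StorageAdaptor.java, and the class here just shows the pool map plus where
the per-LUN login would happen:

    import java.util.HashMap;
    import java.util.Map;

    public class SanStorageAdaptorSketch {

        public static class PoolInfo {
            final String uuid;
            final String host;   // storage host from the pool definition
            final int port;
            final String path;   // e.g. the back-end pool name
            PoolInfo(String uuid, String host, int port, String path) {
                this.uuid = uuid;
                this.host = host;
                this.port = port;
                this.path = path;
            }
        }

        private final Map<String, PoolInfo> pools = new HashMap<String, PoolInfo>();

        // Called often, sometimes just to confirm the pool exists, so it only
        // records the connection info.
        public synchronized PoolInfo createStoragePool(String uuid, String host,
                                                       int port, String path) {
            PoolInfo pool = pools.get(uuid);
            if (pool == null) {
                pool = new PoolInfo(uuid, host, port, path);
                pools.put(uuid, pool);
            }
            return pool;
        }

        public synchronized PoolInfo getStoragePool(String uuid) {
            return pools.get(uuid);
        }

        // Per-disk work: log the host in to the target backing this volume
        // (e.g. via iscsiadm, as in the earlier sketch) and hand back the
        // resulting block device path for the disk XML.
        public String getPhysicalDisk(String poolUuid, String targetIqn) {
            PoolInfo pool = getStoragePool(poolUuid);
            // ... discovery/login against pool.host for targetIqn goes here ...
            return "/dev/disk/by-path/ip-" + pool.host + ":" + pool.port
                    + "-iscsi-" + targetIqn + "-lun-0";
        }
    }

The remaining step would be registering an instance of such an adaptor
where KVMStorageManager populates its _storageMapper map, per the "add
other storage adaptors here" comment mentioned above.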
> > >>> > > > There is a difference between attaching new volumes and
> > launching a
> > >>> > > > VM
> > >>> > > > with existing volumes.  In the latter case, the VM definition
> > that
> > >>> > > > was
> > >>> > > > passed to the KVM agent includes the disks, (StartCommand).
> > >>> > > >
> > >>> > > > I'd be interested in how your pool is defined for Xen, I
> imagine
> > it
> > >>> > > > would need to be kept the same. Is it just a definition to the
> > SAN
> > >>> > > > (ip address or some such, port number) and perhaps a volume
> pool
> > >>> > > > name?
> > >>> > > >
> > >>> > > > > If there is a way for me to update the ACL list on the SAN to
> > have
> > >>> > > only a
> > >>> > > > > single KVM host have access to the volume, that would be
> ideal.
> > >>> > > >
> > >>> > > > That depends on your SAN API.  I was under the impression that
> > the
> > >>> > > > storage plugin framework allowed for acls, or for you to do
> > whatever
> > >>> > > > you want for create/attach/delete/snapshot, etc. You'd just
> call
> > >>> > > > your
> > >>> > > > SAN API with the host info for the ACLs prior to when the disk
> is
> > >>> > > > attached (or the VM is started).  I'd have to look more at the
> > >>> > > > framework to know the details, in 4.1 I would do this in
> > >>> > > > getPhysicalDisk just prior to connecting up the LUN.
> > >>> > > >
> > >>> > > >
> > >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > >>> > > > <mi...@solidfire.com> wrote:
> > >>> > > > > OK, yeah, the ACL part will be interesting. That is a bit
> > >>> > > > > different
> > >>> > > from
> > >>> > > > how
> > >>> > > > > it works with XenServer and VMware.
> > >>> > > > >
> > >>> > > > > Just to give you an idea how it works in 4.2 with XenServer:
> > >>> > > > >
> > >>> > > > > * The user creates a CS volume (this is just recorded in the
> > >>> > > > cloud.volumes
> > >>> > > > > table).
> > >>> > > > >
> > >>> > > > > * The user attaches the volume as a disk to a VM for the
> first
> > >>> > > > > time
> > >>> > (if
> > >>> > > > the
> > >>> > > > > storage allocator picks the SolidFire plug-in, the storage
> > >>> > > > > framework
> > >>> > > > invokes
> > >>> > > > > a method on the plug-in that creates a volume on the
> SAN...info
> > >>> > > > > like
> > >>> > > the
> > >>> > > > IQN
> > >>> > > > > of the SAN volume is recorded in the DB).
> > >>> > > > >
> > >>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
> > executed.
> > >>> > > > > It
> > >>> > > > > determines based on a flag passed in that the storage in
> > question
> > >>> > > > > is
> > >>> > > > > "CloudStack-managed" storage (as opposed to "traditional"
> > >>> > preallocated
> > >>> > > > > storage). This tells it to discover the iSCSI target. Once
> > >>> > > > > discovered
> > >>> > > it
> > >>> > > > > determines if the iSCSI target already contains a storage
> > >>> > > > > repository
> > >>> > > (it
> > >>> > > > > would if this were a re-attach situation). If it does contain
> > an
> > >>> > > > > SR
> > >>> > > > already,
> > >>> > > > > then there should already be one VDI, as well. If there is no
> > SR,
> > >>> > > > > an
> > >>> > SR
> > >>> > > > is
> > >>> > > > > created and a single VDI is created within it (that takes up
> > about
> > >>> > > > > as
> > >>> > > > much
> > >>> > > > > space as was requested for the CloudStack volume).
> > >>> > > > >
> > >>> > > > > * The normal attach-volume logic continues (it depends on the
> > >>> > existence
> > >>> > > > of
> > >>> > > > > an SR and a VDI).
> > >>> > > > >
> > >>> > > > > The VMware case is essentially the same (mainly just
> substitute
> > >>> > > datastore
> > >>> > > > > for SR and VMDK for VDI).
> > >>> > > > >
> > >>> > > > > In both cases, all hosts in the cluster have discovered the
> > iSCSI
> > >>> > > target,
> > >>> > > > > but only the host that is currently running the VM that is
> > using
> > >>> > > > > the
> > >>> > > VDI
> > >>> > > > (or
> > >>> > > > > VMKD) is actually using the disk.
> > >>> > > > >
> > >>> > > > > Live Migration should be OK because the hypervisors
> communicate
> > >>> > > > > with
> > >>> > > > > whatever metadata they have on the SR (or datastore).
> > >>> > > > >
> > >>> > > > > I see what you're saying with KVM, though.
> > >>> > > > >
> > >>> > > > > In that case, the hosts are clustered only in CloudStack's
> > eyes.
> > >>> > > > > CS
> > >>> > > > controls
> > >>> > > > > Live Migration. You don't really need a clustered filesystem
> on
> > >>> > > > > the
> > >>> > > LUN.
> > >>> > > > The
> > >>> > > > > LUN could be handed over raw to the VM using it.
> > >>> > > > >
> > >>> > > > > If there is a way for me to update the ACL list on the SAN to
> > have
> > >>> > > only a
> > >>> > > > > single KVM host have access to the volume, that would be
> ideal.
> > >>> > > > >
> > >>> > > > > Also, I agree I'll need to use iscsiadm to discover and log
> in
> > to
> > >>> > > > > the
> > >>> > > > iSCSI
> > >>> > > > > target. I'll also need to take the resultant new device and
> > pass
> > >>> > > > > it
> > >>> > > into
> > >>> > > > the
> > >>> > > > > VM.
> > >>> > > > >
> > >>> > > > > Does this sound reasonable? Please call me out on anything I
> > seem
> > >>> > > > incorrect
> > >>> > > > > about. :)
> > >>> > > > >
> > >>> > > > > Thanks for all the thought on this, Marcus!
> > >>> > > > >
> > >>> > > > >
> > >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> > >>> > shadowsor@gmail.com>
> > >>> > > > > wrote:
> > >>> > > > >>
> > >>> > > > >> Perfect. You'll have a domain def (the VM), a disk def, and then
> > >>> > > > >> attach the disk def to the VM. You may need to do your own
> > >>> > > > >> StorageAdaptor and run iscsiadm commands to accomplish that,
> > >>> > > > >> depending on how the libvirt iscsi works. My impression is that
> > >>> > > > >> a 1:1:1 pool/lun/volume isn't how it works on Xen at the moment,
> > >>> > > > >> nor is it ideal.
> > >>> > > > >>
> > >>> > > > >> Your plugin will handle acls as far as which host can see
> > which
> > >>> > > > >> luns
> > >>> > > as
> > >>> > > > >> well, I remember discussing that months ago, so that a disk
> > won't
> > >>> > > > >> be
> > >>> > > > >> connected until the hypervisor has exclusive access, so it
> > will
> > >>> > > > >> be
> > >>> > > safe
> > >>> > > > and
> > >>> > > > >> fence the disk from rogue nodes that cloudstack loses
> > >>> > > > >> connectivity
> > >>> > > > with. It
> > >>> > > > >> should revoke access to everything but the target host...
> > Except
> > >>> > > > >> for
> > >>> > > > during
> > >>> > > > >> migration but we can discuss that later, there's a migration
> > prep
> > >>> > > > process
> > >>> > > > >> where the new host can be added to the acls, and the old
> host
> > can
> > >>> > > > >> be
> > >>> > > > removed
> > >>> > > > >> post migration.
> > >>> > > > >>
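
A sketch of that ACL handling, with the SAN calls as hypothetical
placeholders; the idea is that exactly one host normally has access, and
migration briefly grants the destination before revoking the source:

    public class LunAccessControl {

        public interface SanAcls {
            void grantAccess(String lunIqn, String hostIqn);
            void revokeAccess(String lunIqn, String hostIqn);
        }

        private final SanAcls san;

        public LunAccessControl(SanAcls san) {
            this.san = san;
        }

        public void prepareMigration(String lunIqn, String dstHostIqn) {
            // Both hosts can see the LUN only for the duration of the move.
            san.grantAccess(lunIqn, dstHostIqn);
        }

        public void finishMigration(String lunIqn, String srcHostIqn) {
            // Back to a single host once the VM is running on the destination.
            san.revokeAccess(lunIqn, srcHostIqn);
        }
    }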
> > >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > >>> > > mike.tutkowski@solidfire.com
> > >>> > > > >
> > >>> > > > >> wrote:
> > >>> > > > >>>
> > >>> > > > >>> Yeah, that would be ideal.
> > >>> > > > >>>
> > >>> > > > >>> So, I would still need to discover the iSCSI target, log in
> > to
> > >>> > > > >>> it,
> > >>> > > then
> > >>> > > > >>> figure out what /dev/sdX was created as a result (and leave
> > it
> > >>> > > > >>> as
> > >>> > is
> > >>> > > -
> > >>> > > > do
> > >>> > > > >>> not format it with any file system...clustered or not). I
> > would
> > >>> > pass
> > >>> > > > that
> > >>> > > > >>> device into the VM.
> > >>> > > > >>>
> > >>> > > > >>> Kind of accurate?
> > >>> > > > >>>
> > >>> > > > >>>
> > >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > >>> > > shadowsor@gmail.com>
> > >>> > > > >>> wrote:
> > >>> > > > >>>>
> > >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
> > definitions.
> > >>> > There
> > >>> > > > are
> > >>> > > > >>>> ones that work for block devices rather than files. You
> can
> > >>> > > > >>>> piggy
> > >>> > > > back off
> > >>> > > > >>>> of the existing disk definitions and attach it to the vm
> as
> > a
> > >>> > block
> > >>> > > > device.
> > >>> > > > >>>> The definition is an XML string per libvirt XML format.
> You
> > may
> > >>> > want
> > >>> > > > to use
> > >>> > > > >>>> an alternate path to the disk rather than just /dev/sdx
> > like I
> > >>> > > > mentioned,
> > >>> > > > >>>> there are by-id paths to the block devices, as well as
> other
> > >>> > > > >>>> ones
> > >>> > > > that will
> > >>> > > > >>>> be consistent and easier for management, not sure how
> > familiar
> > >>> > > > >>>> you
> > >>> > > > are with
> > >>> > > > >>>> device naming on Linux.
> > >>> > > > >>>>
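
For illustration, the kind of disk definition being described is a raw
block device attached to the guest as a virtio disk; LibvirtVMDef.DiskDef
produces XML of this shape, and the sketch below just formats it by hand
(the by-id path is a made-up example):

    public class BlockDiskDefExample {

        public static String blockDiskXml(String devicePath, String guestDev) {
            return "<disk type='block' device='disk'>\n"
                 + "  <driver name='qemu' type='raw' cache='none'/>\n"
                 + "  <source dev='" + devicePath + "'/>\n"
                 + "  <target dev='" + guestDev + "' bus='virtio'/>\n"
                 + "</disk>\n";
        }

        public static void main(String[] args) {
            System.out.print(blockDiskXml(
                "/dev/disk/by-id/scsi-36f47acc1000000006017703300000000", "vdb"));
        }
    }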
> > >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> > >>> > > > >>>> <sh...@gmail.com>
> > >>> > > > wrote:
> > >>> > > > >>>>>
> > >>> > > > >>>>> No, as that would rely on virtualized network/iscsi
> > initiator
> > >>> > > inside
> > >>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your
> lun
> > on
> > >>> > > > hypervisor) as
> > >>> > > > >>>>> a disk to the VM, rather than attaching some image file
> > that
> > >>> > > resides
> > >>> > > > on a
> > >>> > > > >>>>> filesystem, mounted on the host, living on a target.
> > >>> > > > >>>>>
> > >>> > > > >>>>> Actually, if you plan on the storage supporting live
> > migration
> > >>> > > > >>>>> I
> > >>> > > > think
> > >>> > > > >>>>> this is the only way. You can't put a filesystem on it
> and
> > >>> > > > >>>>> mount
> > >>> > it
> > >>> > > > in two
> > >>> > > > >>>>> places to facilitate migration unless its a clustered
> > >>> > > > >>>>> filesystem,
> > >>> > > in
> > >>> > > > which
> > >>> > > > >>>>> case you're back to shared mount point.
> > >>> > > > >>>>>
> > >>> > > > >>>>> As far as I'm aware, the xenserver SR style is basically
> > LVM
> > >>> > with a
> > >>> > > > xen
> > >>> > > > >>>>> specific cluster management, a custom CLVM. They don't
> use
> > a
> > >>> > > > filesystem
> > >>> > > > >>>>> either.
> > >>> > > > >>>>>
> > >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > >>> > > > >>>>> <mi...@solidfire.com> wrote:
> > >>> > > > >>>>>>
> > >>> > > > >>>>>> When you say, "wire up the lun directly to the vm," do
> you
> > >>> > > > >>>>>> mean
> > >>> > > > >>>>>> circumventing the hypervisor? I didn't think we could do
> > that
> > >>> > > > >>>>>> in
> > >>> > > CS.
> > >>> > > > >>>>>> OpenStack, on the other hand, always circumvents the
> > >>> > > > >>>>>> hypervisor,
> > >>> > > as
> > >>> > > > far as I
> > >>> > > > >>>>>> know.
> > >>> > > > >>>>>>
> > >>> > > > >>>>>>
> > >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > >>> > > > shadowsor@gmail.com>
> > >>> > > > >>>>>> wrote:
> > >>> > > > >>>>>>>
> > >>> > > > >>>>>>> Better to wire up the lun directly to the vm unless
> > there is
> > >>> > > > >>>>>>> a
> > >>> > > good
> > >>> > > > >>>>>>> reason not to.
> > >>> > > > >>>>>>>
> > >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> > >>> > shadowsor@gmail.com>
> > >>> > > > >>>>>>> wrote:
> > >>> > > > >>>>>>>>
> > >>> > > > >>>>>>>> You could do that, but as mentioned I think its a
> > mistake
> > >>> > > > >>>>>>>> to
> > >>> > go
> > >>> > > to
> > >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to
> > luns
> > >>> > and
> > >>> > > > then putting
> > >>> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a
> > QCOW2
> > >>> > > > >>>>>>>> or
> > >>> > > even
> > >>> > > > RAW disk
> > >>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops
> > along
> > >>> > > > >>>>>>>> the
> > >>> > > > way, and have
> > >>> > > > >>>>>>>> more overhead with the filesystem and its journaling,
> > etc.
> > >>> > > > >>>>>>>>
> > >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > >>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
> > >>> > > > >>>>>>>>>
> > >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM
> > with
> > >>> > CS.
> > >>> > > > >>>>>>>>>
> > >>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today
> > is by
> > >>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the
> location
> > of
> > >>> > > > >>>>>>>>> the
> > >>> > > > share.
> > >>> > > > >>>>>>>>>
> > >>> > > > >>>>>>>>> They can set up their share using Open iSCSI by
> > >>> > > > >>>>>>>>> discovering
> > >>> > > their
> > >>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
> > somewhere
> > >>> > > > >>>>>>>>> on
> > >>> > > > their file
> > >>> > > > >>>>>>>>> system.
> > >>> > > > >>>>>>>>>
> > >>> > > > >>>>>>>>> Would it make sense for me to just do that discovery,
> > >>> > > > >>>>>>>>> logging
> > >>> > > in,
> > >>> > > > >>>>>>>>> and mounting behind the scenes for them and letting
> the
> > >>> > current
> > >>> > > > code manage
> > >>> > > > >>>>>>>>> the rest as it currently does?
> > >>> > > > >>>>>>>>>
> > >>> > > > >>>>>>>>>
> > >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > >>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
> > >>> > > > >>>>>>>>>>
> > >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need
> > to
> > >>> > catch
> > >>> > > up
> > >>> > > > >>>>>>>>>> on the work done in KVM, but this is basically just
> > disk
> > >>> > > > snapshots + memory
> > >>> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably
> be
> > >>> > handled
> > >>> > > > by the SAN,
> > >>> > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
> > >>> > something
> > >>> > > > else. This is
> > >>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will
> > want to
> > >>> > see
> > >>> > > > how others are
> > >>> > > > >>>>>>>>>> planning theirs.
> > >>> > > > >>>>>>>>>>
> > >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > >>> > > shadowsor@gmail.com
> > >>> > > > >
> > >>> > > > >>>>>>>>>> wrote:
> > >>> > > > >>>>>>>>>>>
> > >>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a
> vdi
> > >>> > > > >>>>>>>>>>> style
> > >>> > on
> > >>> > > > an
> > >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> > >>> > > > >>>>>>>>>>> format.
> > >>> > > > Otherwise you're
> > >>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
> > creating
> > >>> > > > >>>>>>>>>>> a
> > >>> > > > QCOW2 disk image,
> > >>> > > > >>>>>>>>>>> and that seems unnecessary and a performance
> killer.
> > >>> > > > >>>>>>>>>>>
> > >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk
> to
> > the
> > >>> > VM,
> > >>> > > > and
> > >>> > > > >>>>>>>>>>> handling snapshots on the San side via the storage
> > >>> > > > >>>>>>>>>>> plugin
> > >>> > is
> > >>> > > > best. My
> > >>> > > > >>>>>>>>>>> impression from the storage plugin refactor was
> that
> > >>> > > > >>>>>>>>>>> there
> > >>> > > was
> > >>> > > > a snapshot
> > >>> > > > >>>>>>>>>>> service that would allow the San to handle
> snapshots.
> > >>> > > > >>>>>>>>>>>
> > >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > >>> > > > shadowsor@gmail.com>
> > >>> > > > >>>>>>>>>>> wrote:
> > >>> > > > >>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
> > back
> > >>> > end,
> > >>> > > > if
> > >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
> > could
> > >>> > > > >>>>>>>>>>>> call
> > >>> > > > your plugin for
> > >>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor
> > agnostic. As
> > >>> > far
> > >>> > > > as space, that
> > >>> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With
> ours,
> > we
> > >>> > carve
> > >>> > > > out luns from a
> > >>> > > > >>>>>>>>>>>> pool, and the snapshot spave comes from the pool
> > and is
> > >>> > > > independent of the
> > >>> > > > >>>>>>>>>>>> LUN size the host sees.
> > >>> > > > >>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > >>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>> Hey Marcus,
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
> libvirt
> > >>> > > > >>>>>>>>>>>>> won't
> > >>> > > > work
> > >>> > > > >>>>>>>>>>>>> when you take into consideration hypervisor
> > snapshots?
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor
> snapshot,
> > the
> > >>> > VDI
> > >>> > > > for
> > >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
> > repository
> > >>> > > > >>>>>>>>>>>>> as
> > >>> > > the
> > >>> > > > volume is on.
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> > >>> > > > >>>>>>>>>>>>> XenServer
> > >>> > > and
> > >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> > >>> > snapshots
> > >>> > > > in 4.2) is I'd
> > >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the
> > user
> > >>> > > > requested for the
> > >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
> > >>> > > > >>>>>>>>>>>>> thinly
> > >>> > > > provisions volumes,
> > >>> > > > >>>>>>>>>>>>> so the space is not actually used unless it needs
> > to
> > >>> > > > >>>>>>>>>>>>> be).
> > >>> > > > The CloudStack
> > >>> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN
> volume
> > >>> > until a
> > >>> > > > hypervisor
> > >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also
> reside
> > on
> > >>> > > > >>>>>>>>>>>>> the
> > >>> > > > SAN volume.
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> > >>> > > > >>>>>>>>>>>>> creation
> > >>> > of
> > >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
> > even
> > >>> > > > >>>>>>>>>>>>> if
> > >>> > > > there were support
> > >>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN
> per
> > >>> > > > >>>>>>>>>>>>> iSCSI
> > >>> > > > target), then I
> > >>> > > > >>>>>>>>>>>>> don't see how using this model will work.
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way
> > this
> > >>> > > works
> > >>> > > > >>>>>>>>>>>>> with DIR?
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>> What do you think?
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>> Thanks
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > >>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > >>> > > > >>>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
> > access
> > >>> > > today.
> > >>> > > > >>>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I
> might
> > as
> > >>> > well
> > >>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > >>> > > > >>>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> > >>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > >>> > > > >>>>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I
> > believe
> > >>> > > > >>>>>>>>>>>>>>> it
> > >>> > > just
> > >>> > > > >>>>>>>>>>>>>>> acts like a
> > >>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to
> that.
> > The
> > >>> > > > end-user
> > >>> > > > >>>>>>>>>>>>>>> is
> > >>> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all
> > KVM
> > >>> > hosts
> > >>> > > > can
> > >>> > > > >>>>>>>>>>>>>>> access,
> > >>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is
> providing
> > the
> > >>> > > > storage.
> > >>> > > > >>>>>>>>>>>>>>> It could
> > >>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
> > >>> > > > >>>>>>>>>>>>>>> filesystem,
> > >>> > > > >>>>>>>>>>>>>>> cloudstack just
> > >>> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
> > >>> > > > >>>>>>>>>>>>>>> images.
> > >>> > > > >>>>>>>>>>>>>>>
> > >>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
> Sorensen
> > >>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > >>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all
> at
> > the
> > >>> > same
> > >>> > > > >>>>>>>>>>>>>>> > time.
> > >>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > >>> > > > >>>>>>>>>>>>>>> >

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
What you're saying here is definitely something we should talk about.

Hopefully my previous e-mail has clarified how this works a bit.

It mainly comes down to this:

For the first time in CS history, primary storage is no longer required to
be preallocated by the admin and then handed to CS. CS volumes don't have
to share a preallocated volume anymore.

As of 4.2, primary storage can be based on a SAN (or some other storage
device). You can tell CS how many bytes and IOPS to use from this storage
device and CS invokes the appropriate plug-in to carve out LUNs dynamically.

Each LUN is home to one and only one data disk. Data disks - in this model
- never share a LUN.

The main use case for this is so a CS volume can deliver guaranteed IOPS if
the storage device (e.g. a SolidFire SAN) delivers guaranteed IOPS on a
LUN-by-LUN basis.


On Tue, Sep 17, 2013 at 10:16 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> I guess whether or not a solidfire device is capable of hosting
> multiple disk pools is irrelevant, we'd hope that we could get the
> stats (maybe 30TB available, and 15TB allocated in LUNs). But if these
> stats aren't collected, I can't as an admin define multiple pools and
> expect cloudstack to allocate evenly from them or fill one up and move
> to the next, because it doesn't know how big it is.
>
> Ultimately this discussion has nothing to do with the KVM stuff
> itself, just a tangent, but something to think about.
>
> On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <sh...@gmail.com>
> wrote:
> > Ok, on most storage pools it shows how many GB free/used when listing
> > the pool both via API and in the UI. I'm guessing those are empty then
> > for the solid fire storage, but it seems like the user should have to
> > define some sort of pool that the luns get carved out of, and you
> > should be able to get the stats for that, right? Or is a solid fire
> > appliance only one pool per appliance? This isn't about billing, but
> > just so cloudstack itself knows whether or not there is space left on
> > the storage device, so cloudstack can go on allocating from a
> > different primary storage as this one fills up. There are also
> > notifications and things. It seems like there should be a call you can
> > handle for this, maybe Edison knows.
> >
> > On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <sh...@gmail.com>
> wrote:
> >> You respond to more than attach and detach, right? Don't you create luns
> >> as well? Or are you just referring to the hypervisor stuff?
> >>
> >> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com>
> >> wrote:
> >>>
> >>> Hi Marcus,
> >>>
> >>> I never need to respond to a CreateStoragePool call for either XenServer
> >>> or VMware.
> >>>
> >>> What happens is I respond only to the Attach- and Detach-volume commands.
> >>>
> >>> Let's say an attach comes in:
> >>>
> >>> In this case, I check to see if the storage is "managed." Talking
> >>> XenServer here, if it is, I log in to the LUN that is the disk we want to
> >>> attach. After, if this is the first time attaching this disk, I create an
> >>> SR and a VDI within the SR. If it is not the first time attaching this
> >>> disk, the LUN already has the SR and VDI on it.
> >>>
> >>> Once this is done, I let the normal "attach" logic run because this logic
> >>> expected an SR and a VDI and now it has it.
> >>>
> >>> It's the same thing for VMware: Just substitute datastore for SR and VMDK
> >>> for VDI.
> >>>
> >>> Does that make sense?
> >>>
> >>> Thanks!
> >>>
> >>>
> >>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
> >>> <sh...@gmail.com>wrote:
> >>>
> >>> > What do you do with Xen? I imagine the user enters the SAN details when
> >>> > registering the pool? And the pool details are basically just
> >>> > instructions on how to log into a target, correct?
> >>> >
> >>> > You can choose to log in a KVM host to the target during
> >>> > createStoragePool and save the pool in a map, or just save the pool info
> >>> > in a map for future reference by uuid, for when you do need to log in.
> >>> > The createStoragePool then just becomes a way to save the pool info to
> >>> > the agent. Personally, I'd log in on the pool create and look/scan for
> >>> > specific luns when they're needed, but I haven't thought it through
> >>> > thoroughly. I just say that mainly because login only happens once, the
> >>> > first time the pool is used, and every other storage command is about
> >>> > discovering new luns or maybe deleting/disconnecting luns no longer
> >>> > needed. On the other hand, you could do all of the above: log in on pool
> >>> > create, then also check if you're logged in on other commands and log in
> >>> > if you've lost connection.
> >>> >
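> >>> > For reference, the login itself is just the usual open-iscsi sequence.
> >>> > A rough, untested sketch of the agent-side call (the portal and IQN are
> >>> > placeholders, error handling and imports are omitted):
> >>> >
> >>> >     // Discover targets on the SAN's portal, then log in to the one we
> >>> >     // want; equivalent to running iscsiadm by hand on the KVM host.
> >>> >     void loginToTarget(String portal, String iqn) throws Exception {
> >>> >         new ProcessBuilder("iscsiadm", "-m", "discovery", "-t",
> >>> >                 "sendtargets", "-p", portal).inheritIO().start().waitFor();
> >>> >         new ProcessBuilder("iscsiadm", "-m", "node", "-T", iqn, "-p",
> >>> >                 portal, "--login").inheritIO().start().waitFor();
> >>> >         // After login the LUN shows up as a block device, e.g.
> >>> >         // /dev/disk/by-path/ip-<portal>-iscsi-<iqn>-lun-0
> >>> >     }
> >>> >
> >>> > The same command with "--logout" would run when the disk is detached for
> >>> > good.
> >>> >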
> >>> > With Xen, what does your registered pool show in the UI for avail/used
> >>> > capacity, and how does it get that info? I assume there is some sort of
> >>> > disk pool that the luns are carved from, and that your plugin is called
> >>> > to talk to the SAN and expose to the user how much of that pool has been
> >>> > allocated. Knowing how you already solve these problems with Xen will
> >>> > help figure out what to do with KVM.
> >>> >
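> >>> > (As an aside, if the KVM agent does end up reporting something for a
> >>> > SAN-backed pool, I'd expect it to just be the SAN-level totals on
> >>> > whatever pool object your adaptor returns; method names approximate and
> >>> > numbers invented:
> >>> >
> >>> >     // The "pool" is the whole SAN allotment, not an individual LUN.
> >>> >     public long getCapacity() { return 30L * 1024 * 1024 * 1024 * 1024; }
> >>> >     public long getUsed() { return 15L * 1024 * 1024 * 1024 * 1024; }
> >>> >
> >>> > i.e. roughly "30TB available, 15TB allocated in LUNs" from the earlier
> >>> > example.)
> >>> >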
> >>> > If this is the case, I think the plugin can continue to handle it
> >>> > rather than getting details from the agent. I'm not sure if that means
> >>> > nulls are OK for these on the agent side or what, I need to look at the
> >>> > storage plugin arch more closely.
> >>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com>
> >>> > wrote:
> >>> >
> >>> > > Hey Marcus,
> >>> > >
> >>> > > I'm reviewing your e-mails as I implement the necessary methods in
> new
> >>> > > classes.
> >>> > >
> >>> > > "So, referencing StorageAdaptor.java, createStoragePool accepts
> all of
> >>> > > the pool data (host, port, name, path) which would be used to log
> the
> >>> > > host into the initiator."
> >>> > >
> >>> > > Can you tell me, in my case, since a storage pool (primary
> storage) is
> >>> > > actually the SAN, I wouldn't really be logging into anything at
> this
> >>> > point,
> >>> > > correct?
> >>> > >
> >>> > > Also, what kind of capacity, available, and used bytes make sense
> to
> >>> > report
> >>> > > for KVMStoragePool (since KVMStoragePool represents the SAN in my
> case
> >>> > and
> >>> > > not an individual LUN)?
> >>> > >
> >>> > > Thanks!
> >>> > >
> >>> > >
> >>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> shadowsor@gmail.com
> >>> > > >wrote:
> >>> > >
> >>> > > > Ok, KVM will be close to that, of course, because only the
> >>> > > > hypervisor
> >>> > > > classes differ, the rest is all mgmt server. Creating a volume is
> >>> > > > just
> >>> > > > a db entry until it's deployed for the first time.
> >>> > > > AttachVolumeCommand
> >>> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> >>> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> >>> > > > StorageAdaptor) to log in the host to the target and then you
> have a
> >>> > > > block device.  Maybe libvirt will do that for you, but my quick
> read
> >>> > > > made it sound like the iscsi libvirt pool type is actually a
> pool,
> >>> > > > not
> >>> > > > a lun or volume, so you'll need to figure out if that works or if
> >>> > > > you'll have to use iscsiadm commands.
> >>> > > >
> >>> > > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> >>> > > > doesn't really manage your pool the way you want), you're going
> to
> >>> > > > have to create a version of KVMStoragePool class and a
> >>> > > > StorageAdaptor
> >>> > > > class (see LibvirtStoragePool.java and
> LibvirtStorageAdaptor.java),
> >>> > > > implementing all of the methods, then in KVMStorageManager.java
> >>> > > > there's a "_storageMapper" map. This is used to select the
> correct
> >>> > > > adaptor, you can see in this file that every call first pulls the
> >>> > > > correct adaptor out of this map via getStorageAdaptor. So you can
> >>> > > > see
> >>> > > > a comment in this file that says "add other storage adaptors
> here",
> >>> > > > where it puts to this map, this is where you'd register your
> >>> > > > adaptor.
> >>> > > >
> >>> > > > So, referencing StorageAdaptor.java, createStoragePool accepts
> all
> >>> > > > of
> >>> > > > the pool data (host, port, name, path) which would be used to log
> >>> > > > the
> >>> > > > host into the initiator. I *believe* the method getPhysicalDisk
> will
> >>> > > > need to do the work of attaching the lun.  AttachVolumeCommand
> calls
> >>> > > > this and then creates the XML diskdef and attaches it to the VM.
> >>> > > > Now,
> >>> > > > one thing you need to know is that createStoragePool is called
> >>> > > > often,
> >>> > > > sometimes just to make sure the pool is there. You may want to
> >>> > > > create
> >>> > > > a map in your adaptor class and keep track of pools that have
> been
> >>> > > > created, LibvirtStorageAdaptor doesn't have to do this because it
> >>> > > > asks
> >>> > > > libvirt about which storage pools exist. There are also calls to
> >>> > > > refresh the pool stats, and all of the other calls can be seen in
> >>> > > > the
> >>> > > > StorageAdaptor as well. There's a createPhysicalDisk, clone, etc.,
> >>> > > > but
> >>> > > > it's probably a hold-over from 4.1, as I have the vague idea that
> >>> > > > volumes are created on the mgmt server via the plugin now, so
> >>> > > > whatever
> >>> > > > doesn't apply can just be stubbed out (or optionally
> >>> > > > extended/reimplemented here, if you don't mind the hosts talking
> to
> >>> > > > the san api).
> >>> > > >
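> >>> > > > To make that concrete, a skeleton of the sort of thing I mean (the
> >>> > > > real method signatures are whatever StorageAdaptor.java declares;
> >>> > > > these are simplified, imports are left out, and SolidFireStoragePool
> >>> > > > is a made-up class name):
> >>> > > >
> >>> > > >     public class SolidFireStorageAdaptor implements StorageAdaptor {
> >>> > > >         // createStoragePool is called often, so remember what we've
> >>> > > >         // already set up instead of asking libvirt (it won't know).
> >>> > > >         private final Map<String, KVMStoragePool> pools =
> >>> > > >                 new HashMap<String, KVMStoragePool>();
> >>> > > >
> >>> > > >         public KVMStoragePool createStoragePool(String uuid,
> >>> > > >                 String host, int port, String path) {
> >>> > > >             KVMStoragePool pool = pools.get(uuid);
> >>> > > >             if (pool == null) {
> >>> > > >                 pool = new SolidFireStoragePool(uuid, host, port, path);
> >>> > > >                 pools.put(uuid, pool);
> >>> > > >             }
> >>> > > >             return pool;
> >>> > > >         }
> >>> > > >
> >>> > > >         // getPhysicalDisk would do the iscsiadm login for the LUN and
> >>> > > >         // return a KVMPhysicalDisk wrapping the /dev/disk/by-path
> >>> > > >         // device; methods that don't apply just get stubbed out.
> >>> > > >     }
> >>> > > >
> >>> > > > plus one put() into KVMStorageManager's _storageMapper to register it.
> >>> > > >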
> >>> > > > There is a difference between attaching new volumes and
> launching a
> >>> > > > VM
> >>> > > > with existing volumes.  In the latter case, the VM definition
> that
> >>> > > > was
> >>> > > > passed to the KVM agent includes the disks, (StartCommand).
> >>> > > >
> >>> > > > I'd be interested in how your pool is defined for Xen, I imagine
> it
> >>> > > > would need to be kept the same. Is it just a definition to the
> SAN
> >>> > > > (ip address or some such, port number) and perhaps a volume pool
> >>> > > > name?
> >>> > > >
> >>> > > > > If there is a way for me to update the ACL list on the SAN to
> have
> >>> > > only a
> >>> > > > > single KVM host have access to the volume, that would be ideal.
> >>> > > >
> >>> > > > That depends on your SAN API.  I was under the impression that
> the
> >>> > > > storage plugin framework allowed for acls, or for you to do
> whatever
> >>> > > > you want for create/attach/delete/snapshot, etc. You'd just call
> >>> > > > your
> >>> > > > SAN API with the host info for the ACLs prior to when the disk is
> >>> > > > attached (or the VM is started).  I'd have to look more at the
> >>> > > > framework to know the details, in 4.1 I would do this in
> >>> > > > getPhysicalDisk just prior to connecting up the LUN.
> >>> > > >
> >>> > > >
> >>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> >>> > > > <mi...@solidfire.com> wrote:
> >>> > > > > OK, yeah, the ACL part will be interesting. That is a bit
> >>> > > > > different
> >>> > > from
> >>> > > > how
> >>> > > > > it works with XenServer and VMware.
> >>> > > > >
> >>> > > > > Just to give you an idea how it works in 4.2 with XenServer:
> >>> > > > >
> >>> > > > > * The user creates a CS volume (this is just recorded in the
> >>> > > > cloud.volumes
> >>> > > > > table).
> >>> > > > >
> >>> > > > > * The user attaches the volume as a disk to a VM for the first
> >>> > > > > time
> >>> > (if
> >>> > > > the
> >>> > > > > storage allocator picks the SolidFire plug-in, the storage
> >>> > > > > framework
> >>> > > > invokes
> >>> > > > > a method on the plug-in that creates a volume on the SAN...info
> >>> > > > > like
> >>> > > the
> >>> > > > IQN
> >>> > > > > of the SAN volume is recorded in the DB).
> >>> > > > >
> >>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is
> executed.
> >>> > > > > It
> >>> > > > > determines based on a flag passed in that the storage in
> question
> >>> > > > > is
> >>> > > > > "CloudStack-managed" storage (as opposed to "traditional"
> >>> > preallocated
> >>> > > > > storage). This tells it to discover the iSCSI target. Once
> >>> > > > > discovered
> >>> > > it
> >>> > > > > determines if the iSCSI target already contains a storage
> >>> > > > > repository
> >>> > > (it
> >>> > > > > would if this were a re-attach situation). If it does contain
> an
> >>> > > > > SR
> >>> > > > already,
> >>> > > > > then there should already be one VDI, as well. If there is no
> SR,
> >>> > > > > an
> >>> > SR
> >>> > > > is
> >>> > > > > created and a single VDI is created within it (that takes up
> about
> >>> > > > > as
> >>> > > > much
> >>> > > > > space as was requested for the CloudStack volume).
> >>> > > > >
> >>> > > > > * The normal attach-volume logic continues (it depends on the
> >>> > existence
> >>> > > > of
> >>> > > > > an SR and a VDI).
> >>> > > > >
> >>> > > > > The VMware case is essentially the same (mainly just substitute
> >>> > > datastore
> >>> > > > > for SR and VMDK for VDI).
> >>> > > > >
> >>> > > > > In both cases, all hosts in the cluster have discovered the
> iSCSI
> >>> > > target,
> >>> > > > > but only the host that is currently running the VM that is
> using
> >>> > > > > the
> >>> > > VDI
> >>> > > > (or
> >>> > > > > VMDK) is actually using the disk.
> >>> > > > >
> >>> > > > > Live Migration should be OK because the hypervisors communicate
> >>> > > > > with
> >>> > > > > whatever metadata they have on the SR (or datastore).
> >>> > > > >
> >>> > > > > I see what you're saying with KVM, though.
> >>> > > > >
> >>> > > > > In that case, the hosts are clustered only in CloudStack's
> eyes.
> >>> > > > > CS
> >>> > > > controls
> >>> > > > > Live Migration. You don't really need a clustered filesystem on
> >>> > > > > the
> >>> > > LUN.
> >>> > > > The
> >>> > > > > LUN could be handed over raw to the VM using it.
> >>> > > > >
> >>> > > > > If there is a way for me to update the ACL list on the SAN to
> have
> >>> > > only a
> >>> > > > > single KVM host have access to the volume, that would be ideal.
> >>> > > > >
> >>> > > > > Also, I agree I'll need to use iscsiadm to discover and log in
> to
> >>> > > > > the
> >>> > > > iSCSI
> >>> > > > > target. I'll also need to take the resultant new device and
> pass
> >>> > > > > it
> >>> > > into
> >>> > > > the
> >>> > > > > VM.
> >>> > > > >
> >>> > > > > Does this sound reasonable? Please call me out on anything I
> seem
> >>> > > > incorrect
> >>> > > > > about. :)
> >>> > > > >
> >>> > > > > Thanks for all the thought on this, Marcus!
> >>> > > > >
> >>> > > > >
> >>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> >>> > shadowsor@gmail.com>
> >>> > > > > wrote:
> >>> > > > >>
> >>> > > > >> Perfect. You'll have a domain def (the VM), a disk def, and then
> >>> > > > >> attach the disk def to the vm. You may need to do your own
> >>> > > > >> StorageAdaptor and run iscsiadm commands to accomplish that,
> >>> > > > >> depending on how the libvirt iscsi works. My impression is that a
> >>> > > > >> 1:1:1 pool/lun/volume isn't how it works on xen at the moment, nor
> >>> > > > >> is it ideal.
> >>> > > > >>
> >>> > > > >> Your plugin will handle acls as far as which host can see
> which
> >>> > > > >> luns
> >>> > > as
> >>> > > > >> well, I remember discussing that months ago, so that a disk
> won't
> >>> > > > >> be
> >>> > > > >> connected until the hypervisor has exclusive access, so it
> will
> >>> > > > >> be
> >>> > > safe
> >>> > > > and
> >>> > > > >> fence the disk from rogue nodes that cloudstack loses
> >>> > > > >> connectivity
> >>> > > > with. It
> >>> > > > >> should revoke access to everything but the target host...
> Except
> >>> > > > >> for
> >>> > > > during
> >>> > > > >> migration but we can discuss that later, there's a migration
> prep
> >>> > > > process
> >>> > > > >> where the new host can be added to the acls, and the old host
> can
> >>> > > > >> be
> >>> > > > removed
> >>> > > > >> post migration.
> >>> > > > >>
> >>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> >>> > > mike.tutkowski@solidfire.com
> >>> > > > >
> >>> > > > >> wrote:
> >>> > > > >>>
> >>> > > > >>> Yeah, that would be ideal.
> >>> > > > >>>
> >>> > > > >>> So, I would still need to discover the iSCSI target, log in
> to
> >>> > > > >>> it,
> >>> > > then
> >>> > > > >>> figure out what /dev/sdX was created as a result (and leave
> it
> >>> > > > >>> as
> >>> > is
> >>> > > -
> >>> > > > do
> >>> > > > >>> not format it with any file system...clustered or not). I
> would
> >>> > pass
> >>> > > > that
> >>> > > > >>> device into the VM.
> >>> > > > >>>
> >>> > > > >>> Kind of accurate?
> >>> > > > >>>
> >>> > > > >>>
> >>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> >>> > > shadowsor@gmail.com>
> >>> > > > >>> wrote:
> >>> > > > >>>>
> >>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk
> definitions.
> >>> > There
> >>> > > > are
> >>> > > > >>>> ones that work for block devices rather than files. You can
> >>> > > > >>>> piggy
> >>> > > > back off
> >>> > > > >>>> of the existing disk definitions and attach it to the vm as
> a
> >>> > block
> >>> > > > device.
> >>> > > > >>>> The definition is an XML string per libvirt XML format. You may
> >>> > > > >>>> want to use an alternate path to the disk rather than just
> >>> > > > >>>> /dev/sdx like I mentioned, there are by-id paths to the block
> >>> > > > >>>> devices, as well as other ones that will be consistent and easier
> >>> > > > >>>> for management, not sure how familiar you are with device naming
> >>> > > > >>>> on Linux.
> >>> > > > >>>>
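> >>> > > > >>>> For a raw block device the disk XML ends up looking something
> >>> > > > >>>> like this (paths illustrative only):
> >>> > > > >>>>
> >>> > > > >>>>     <disk type='block' device='disk'>
> >>> > > > >>>>       <driver name='qemu' type='raw' cache='none'/>
> >>> > > > >>>>       <source dev='/dev/disk/by-path/ip-<portal>-iscsi-<iqn>-lun-0'/>
> >>> > > > >>>>       <target dev='vdb' bus='virtio'/>
> >>> > > > >>>>     </disk>
> >>> > > > >>>>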
> >>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> >>> > > > >>>> <sh...@gmail.com>
> >>> > > > wrote:
> >>> > > > >>>>>
> >>> > > > >>>>> No, as that would rely on virtualized network/iscsi
> initiator
> >>> > > inside
> >>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun
> on
> >>> > > > hypervisor) as
> >>> > > > >>>>> a disk to the VM, rather than attaching some image file
> that
> >>> > > resides
> >>> > > > on a
> >>> > > > >>>>> filesystem, mounted on the host, living on a target.
> >>> > > > >>>>>
> >>> > > > >>>>> Actually, if you plan on the storage supporting live
> migration
> >>> > > > >>>>> I
> >>> > > > think
> >>> > > > >>>>> this is the only way. You can't put a filesystem on it and
> >>> > > > >>>>> mount
> >>> > it
> >>> > > > in two
> >>> > > > >>>>> places to facilitate migration unless its a clustered
> >>> > > > >>>>> filesystem,
> >>> > > in
> >>> > > > which
> >>> > > > >>>>> case you're back to shared mount point.
> >>> > > > >>>>>
> >>> > > > >>>>> As far as I'm aware, the xenserver SR style is basically
> LVM
> >>> > with a
> >>> > > > xen
> >>> > > > >>>>> specific cluster management, a custom CLVM. They don't use
> a
> >>> > > > filesystem
> >>> > > > >>>>> either.
> >>> > > > >>>>>
> >>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> >>> > > > >>>>> <mi...@solidfire.com> wrote:
> >>> > > > >>>>>>
> >>> > > > >>>>>> When you say, "wire up the lun directly to the vm," do you
> >>> > > > >>>>>> mean
> >>> > > > >>>>>> circumventing the hypervisor? I didn't think we could do
> that
> >>> > > > >>>>>> in
> >>> > > CS.
> >>> > > > >>>>>> OpenStack, on the other hand, always circumvents the
> >>> > > > >>>>>> hypervisor,
> >>> > > as
> >>> > > > far as I
> >>> > > > >>>>>> know.
> >>> > > > >>>>>>
> >>> > > > >>>>>>
> >>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> >>> > > > shadowsor@gmail.com>
> >>> > > > >>>>>> wrote:
> >>> > > > >>>>>>>
> >>> > > > >>>>>>> Better to wire up the lun directly to the vm unless
> there is
> >>> > > > >>>>>>> a
> >>> > > good
> >>> > > > >>>>>>> reason not to.
> >>> > > > >>>>>>>
> >>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> >>> > shadowsor@gmail.com>
> >>> > > > >>>>>>> wrote:
> >>> > > > >>>>>>>>
> >>> > > > >>>>>>>> You could do that, but as mentioned I think it's a
> mistake
> >>> > > > >>>>>>>> to
> >>> > go
> >>> > > to
> >>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to
> luns
> >>> > and
> >>> > > > then putting
> >>> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a
> QCOW2
> >>> > > > >>>>>>>> or
> >>> > > even
> >>> > > > RAW disk
> >>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops
> along
> >>> > > > >>>>>>>> the
> >>> > > > way, and have
> >>> > > > >>>>>>>> more overhead with the filesystem and its journaling,
> etc.
> >>> > > > >>>>>>>>
> >>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> >>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
> >>> > > > >>>>>>>>>
> >>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM
> with
> >>> > CS.
> >>> > > > >>>>>>>>>
> >>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today
> is by
> >>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the location
> of
> >>> > > > >>>>>>>>> the
> >>> > > > share.
> >>> > > > >>>>>>>>>
> >>> > > > >>>>>>>>> They can set up their share using Open iSCSI by
> >>> > > > >>>>>>>>> discovering
> >>> > > their
> >>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
> somewhere
> >>> > > > >>>>>>>>> on
> >>> > > > their file
> >>> > > > >>>>>>>>> system.
> >>> > > > >>>>>>>>>
> >>> > > > >>>>>>>>> Would it make sense for me to just do that discovery,
> >>> > > > >>>>>>>>> logging
> >>> > > in,
> >>> > > > >>>>>>>>> and mounting behind the scenes for them and letting the
> >>> > current
> >>> > > > code manage
> >>> > > > >>>>>>>>> the rest as it currently does?
> >>> > > > >>>>>>>>>
> >>> > > > >>>>>>>>>
> >>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> >>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
> >>> > > > >>>>>>>>>>
> >>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need
> to
> >>> > catch
> >>> > > up
> >>> > > > >>>>>>>>>> on the work done in KVM, but this is basically just
> disk
> >>> > > > snapshots + memory
> >>> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
> >>> > handled
> >>> > > > by the SAN,
> >>> > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
> >>> > something
> >>> > > > else. This is
> >>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will
> want to
> >>> > see
> >>> > > > how others are
> >>> > > > >>>>>>>>>> planning theirs.
> >>> > > > >>>>>>>>>>
> >>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> >>> > > shadowsor@gmail.com
> >>> > > > >
> >>> > > > >>>>>>>>>> wrote:
> >>> > > > >>>>>>>>>>>
> >>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
> >>> > > > >>>>>>>>>>> style
> >>> > on
> >>> > > > an
> >>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> >>> > > > >>>>>>>>>>> format.
> >>> > > > Otherwise you're
> >>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
> creating
> >>> > > > >>>>>>>>>>> a
> >>> > > > QCOW2 disk image,
> >>> > > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
> >>> > > > >>>>>>>>>>>
> >>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to
> the
> >>> > VM,
> >>> > > > and
> >>> > > > >>>>>>>>>>> handling snapshots on the San side via the storage
> >>> > > > >>>>>>>>>>> plugin
> >>> > is
> >>> > > > best. My
> >>> > > > >>>>>>>>>>> impression from the storage plugin refactor was that
> >>> > > > >>>>>>>>>>> there
> >>> > > was
> >>> > > > a snapshot
> >>> > > > >>>>>>>>>>> service that would allow the San to handle snapshots.
> >>> > > > >>>>>>>>>>>
> >>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> >>> > > > shadowsor@gmail.com>
> >>> > > > >>>>>>>>>>> wrote:
> >>> > > > >>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
> back
> >>> > end,
> >>> > > > if
> >>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
> could
> >>> > > > >>>>>>>>>>>> call
> >>> > > > your plugin for
> >>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor
> agnostic. As
> >>> > far
> >>> > > > as space, that
> >>> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours,
> we
> >>> > carve
> >>> > > > out luns from a
> >>> > > > >>>>>>>>>>>> pool, and the snapshot spave comes from the pool
> and is
> >>> > > > independent of the
> >>> > > > >>>>>>>>>>>> LUN size the host sees.
> >>> > > > >>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> >>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> Hey Marcus,
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
> >>> > > > >>>>>>>>>>>>> won't
> >>> > > > work
> >>> > > > >>>>>>>>>>>>> when you take into consideration hypervisor
> snapshots?
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot,
> the
> >>> > VDI
> >>> > > > for
> >>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage
> repository
> >>> > > > >>>>>>>>>>>>> as
> >>> > > the
> >>> > > > volume is on.
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> >>> > > > >>>>>>>>>>>>> XenServer
> >>> > > and
> >>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> >>> > snapshots
> >>> > > > in 4.2) is I'd
> >>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the
> user
> >>> > > > requested for the
> >>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
> >>> > > > >>>>>>>>>>>>> thinly
> >>> > > > provisions volumes,
> >>> > > > >>>>>>>>>>>>> so the space is not actually used unless it needs
> to
> >>> > > > >>>>>>>>>>>>> be).
> >>> > > > The CloudStack
> >>> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
> >>> > until a
> >>> > > > hypervisor
> >>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside
> on
> >>> > > > >>>>>>>>>>>>> the
> >>> > > > SAN volume.
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> >>> > > > >>>>>>>>>>>>> creation
> >>> > of
> >>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
> even
> >>> > > > >>>>>>>>>>>>> if
> >>> > > > there were support
> >>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
> >>> > > > >>>>>>>>>>>>> iSCSI
> >>> > > > target), then I
> >>> > > > >>>>>>>>>>>>> don't see how using this model will work.
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way
> this
> >>> > > works
> >>> > > > >>>>>>>>>>>>> with DIR?
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> What do you think?
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> Thanks
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> >>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >>> > > > >>>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
> access
> >>> > > today.
> >>> > > > >>>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might
> as
> >>> > well
> >>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> >>> > > > >>>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> >>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >>> > > > >>>>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I
> believe
> >>> > > > >>>>>>>>>>>>>>> it
> >>> > > just
> >>> > > > >>>>>>>>>>>>>>> acts like a
> >>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that.
> The
> >>> > > > end-user
> >>> > > > >>>>>>>>>>>>>>> is
> >>> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all
> KVM
> >>> > hosts
> >>> > > > can
> >>> > > > >>>>>>>>>>>>>>> access,
> >>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing
> the
> >>> > > > storage.
> >>> > > > >>>>>>>>>>>>>>> It could
> >>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
> >>> > > > >>>>>>>>>>>>>>> filesystem,
> >>> > > > >>>>>>>>>>>>>>> cloudstack just
> >>> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
> >>> > > > >>>>>>>>>>>>>>> images.
> >>> > > > >>>>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> >>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at
> the
> >>> > same
> >>> > > > >>>>>>>>>>>>>>> > time.
> >>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> >>> > > > >>>>>>>>>>>>>>> >
> >>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> >>> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> >>> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage
> pools:
> >>> > > > >>>>>>>>>>>>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> >>> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> >>> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
> >>> > > > >>>>>>>>>>>>>>> >> default              active     yes
> >>> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> >>> > > > >>>>>>>>>>>>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
> Tutkowski
> >>> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
> pool
> >>> > based
> >>> > > on
> >>> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have
> one
> >>> > LUN,
> >>> > > > so
> >>> > > > >>>>>>>>>>>>>>> >>> there would only
> >>> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> >>> > > (libvirt)
> >>> > > > >>>>>>>>>>>>>>> >>> storage pool.
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
> >>> > > > >>>>>>>>>>>>>>> >>> iSCSI
> >>> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> >>> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
> >>> > > > >>>>>>>>>>>>>>> >>> libvirt
> >>> > > does
> >>> > > > >>>>>>>>>>>>>>> >>> not support
> >>> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to
> see
> >>> > > > >>>>>>>>>>>>>>> >>> if
> >>> > > > libvirt
> >>> > > > >>>>>>>>>>>>>>> >>> supports
> >>> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
> mentioned,
> >>> > since
> >>> > > > >>>>>>>>>>>>>>> >>> each one of its
> >>> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> >>> > > > targets/LUNs).
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
> Tutkowski
> >>> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> >>> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> >>> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>         }
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>         @Override
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>         }
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>     }
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
> >>> > > > >>>>>>>>>>>>>>> >>>> currently
> >>> > > being
> >>> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
> >>> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
> >>> > > > >>>>>>>>>>>>>>> >>>> someone
> >>> > > > >>>>>>>>>>>>>>> >>>> selects the
> >>> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
> iSCSI,
> >>> > > > >>>>>>>>>>>>>>> >>>> is
> >>> > > > that
> >>> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> >>> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>> Thanks!
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
> >>> > > > >>>>>>>>>>>>>>> >>>> Sorensen
> >>> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> >>> > > > >>>>>>>>>>>>>>> >>>> wrote:
> >>> > > > >>>>>>>>>>>>>>> >>>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> >>> > > > >>>>>>>>>>>>>>> >>>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>>
> >>> > > http://libvirt.org/storage.html#StorageBackendISCSI
> >>> > > > >>>>>>>>>>>>>>> >>>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
> >>> > server,
> >>> > > > and
> >>> > > > >>>>>>>>>>>>>>> >>>>> cannot be
> >>> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
> >>> > > > >>>>>>>>>>>>>>> >>>>> believe
> >>> > > your
> >>> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
> >>> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
> logging
> >>> > > > >>>>>>>>>>>>>>> >>>>> in
> >>> > > and
> >>> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> >>> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
> work
> >>> > > > >>>>>>>>>>>>>>> >>>>> in
> >>> > the
> >>> > > > Xen
> >>> > > > >>>>>>>>>>>>>>> >>>>> stuff).
> >>> > > > >>>>>>>>>>>>>>> >>>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
> >>> > > > >>>>>>>>>>>>>>> >>>>> provides
> >>> > a
> >>> > > > 1:1
> >>> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> >>> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
> device
> >>> > > > >>>>>>>>>>>>>>> >>>>> as
> >>> > a
> >>> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> >>> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
> more
> >>> > about
> >>> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> >>> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write
> your
> >>> > own
> >>> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> >>> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
> >>> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
> >>> > >  We
> >>> > > > >>>>>>>>>>>>>>> >>>>> can cross that
> >>> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> >>> > > > >>>>>>>>>>>>>>> >>>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
> >>> > > > >>>>>>>>>>>>>>> >>>>> java
> >>> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> >>> > > > >>>>>>>>>>>>>>> >>>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>>
> http://libvirt.org/sources/java/javadoc/
> >>> > > > >>>>>>>>>>>>>>> >>>>> Normally, you'll see a
> >>> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made
> to
> >>> > that
> >>> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> >>> > > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to
> see
> >>> > > > >>>>>>>>>>>>>>> >>>>> how
> >>> > > that
> >>> > > > >>>>>>>>>>>>>>> >>>>> is done for
> >>> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
> >>> > > > >>>>>>>>>>>>>>> >>>>> java
> >>> > > code
> >>> > > > >>>>>>>>>>>>>>> >>>>> to see if you
> >>> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
> iscsi
> >>> > > storage
> >>> > > > >>>>>>>>>>>>>>> >>>>> pools before you
> >>> > > > >>>>>>>>>>>>>>> >>>>> get started.
> >>> > > > >>>>>>>>>>>>>>> >>>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
> >>> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
> >>> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> >>> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
> >>> > > > >>>>>>>>>>>>>>> >>>>> > more,
> >>> > > but
> >>> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
> >>> > > > >>>>>>>>>>>>>>> >>>>> > supports
> >>> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
> >>> > > > >>>>>>>>>>>>>>> >>>>> > targets,
> >>> > > > >>>>>>>>>>>>>>> >>>>> > right?
> >>> > > > >>>>>>>>>>>>>>> >>>>> >
> >>> > > > >>>>>>>>>>>>>>> >>>>> >
> >>> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
> >>> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
> >>> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of
> the
> >>> > > classes
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> last
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
> >>> > Sorensen
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
> >>> > > for
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
> >>> > > > login.
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
> >>> > and
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
> Tutkowski"
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
> >>> > I
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
> storage
> >>> > > > framework
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
> >>> > delete
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
> >>> > > > mapping
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
> expected
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> >>> > > > admin
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
> volumes
> >>> > would
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
> friendly).
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work,
> I
> >>> > needed
> >>> > > > to
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
> with
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
> >>> > > work
> >>> > > > on
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
> how I
> >>> > will
> >>> > > > need
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
> have to
> >>> > > expect
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it
> for
> >>> > this
> >>> > > to
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> --
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire
> Inc.
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> >>> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the
> cloud™
> >>> > > > >>>>>>>>>>>>>>> >>>>> >
> >>> > > > >>>>>>>>>>>>>>> >>>>> >
> >>> > > > >>>>>>>>>>>>>>> >>>>> >
> >>> > > > >>>>>>>>>>>>>>> >>>>> >
> >>> > > > >>>>>>>>>>>>>>> >>>>> > --
> >>> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> >>> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire
> Inc.
> >>> > > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> >>> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> >>> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the
> cloud™
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>>
> >>> > > > >>>>>>>>>>>>>>> >>>> --
> >>> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
> >>> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> >>> > > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> >>> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
> >>> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>>
> >>> > > > >>>>>>>>>>>>>>> >>> --
> >>> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
> >>> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> >>> > > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> >>> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
> >>> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> >>> > > > >>>>>>>>>>>>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >>
> >>> > > > >>>>>>>>>>>>>>> >> --
> >>> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
> >>> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>> > > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> >>> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
> >>> > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> >>> > > > >>>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>> --
> >>> > > > >>>>>>>>>>>>>> Mike Tutkowski
> >>> > > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>> > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>> > > > >>>>>>>>>>>>>> o: 303.746.7302
> >>> > > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>>
> >>> > > > >>>>>>>>>>>>> --
> >>> > > > >>>>>>>>>>>>> Mike Tutkowski
> >>> > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>> > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>> > > > >>>>>>>>>>>>> o: 303.746.7302
> >>> > > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >>> > > > >>>>>>>>>
> >>> > > > >>>>>>>>>
> >>> > > > >>>>>>>>>
> >>> > > > >>>>>>>>>
> >>> > > > >>>>>>>>> --
> >>> > > > >>>>>>>>> Mike Tutkowski
> >>> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>> > > > >>>>>>>>> e: mike.tutkowski@solidfire.com
> >>> > > > >>>>>>>>> o: 303.746.7302
> >>> > > > >>>>>>>>> Advancing the way the world uses the cloud™
> >>> > > > >>>>>>
> >>> > > > >>>>>>
> >>> > > > >>>>>>
> >>> > > > >>>>>>
> >>> > > > >>>>>> --
> >>> > > > >>>>>> Mike Tutkowski
> >>> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>> > > > >>>>>> e: mike.tutkowski@solidfire.com
> >>> > > > >>>>>> o: 303.746.7302
> >>> > > > >>>>>> Advancing the way the world uses the cloud™
> >>> > > > >>>
> >>> > > > >>>
> >>> > > > >>>
> >>> > > > >>>
> >>> > > > >>> --
> >>> > > > >>> Mike Tutkowski
> >>> > > > >>> Senior CloudStack Developer, SolidFire Inc.
> >>> > > > >>> e: mike.tutkowski@solidfire.com
> >>> > > > >>> o: 303.746.7302
> >>> > > > >>> Advancing the way the world uses the cloud™
> >>> > > > >
> >>> > > > >
> >>> > > > >
> >>> > > > >
> >>> > > > > --
> >>> > > > > Mike Tutkowski
> >>> > > > > Senior CloudStack Developer, SolidFire Inc.
> >>> > > > > e: mike.tutkowski@solidfire.com
> >>> > > > > o: 303.746.7302
> >>> > > > > Advancing the way the world uses the cloud™
> >>> > > >
> >>> > >
> >>> > >
> >>> > >
> >>> > > --
> >>> > > *Mike Tutkowski*
> >>> > > *Senior CloudStack Developer, SolidFire Inc.*
> >>> > > e: mike.tutkowski@solidfire.com
> >>> > > o: 303.746.7302
> >>> > > Advancing the way the world uses the
> >>> > > cloud<http://solidfire.com/solution/overview/?video=play>
> >>> > > *™*
> >>> > >
> >>> >
> >>>
> >>>
> >>>
> >>> --
> >>> *Mike Tutkowski*
> >>> *Senior CloudStack Developer, SolidFire Inc.*
> >>> e: mike.tutkowski@solidfire.com
> >>> o: 303.746.7302
> >>> Advancing the way the world uses the
> >>> cloud<http://solidfire.com/solution/overview/?video=play>
> >>> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
I guess whether or not a solidfire device is capable of hosting
multiple disk pools is irrelevant; we'd hope that we could get the
stats (maybe 30TB available, and 15TB allocated in LUNs). But if these
stats aren't collected, I can't as an admin define multiple pools and
expect cloudstack to allocate evenly from them or fill one up and move
to the next, because it doesn't know how big each one is.

Ultimately this discussion has nothing to do with the KVM stuff
itself, just a tangent, but something to think about.

On Tue, Sep 17, 2013 at 10:13 PM, Marcus Sorensen <sh...@gmail.com> wrote:
> Ok, on most storage pools it shows how many GB free/used when listing
> the pool both via API and in the UI. I'm guessing those are empty then
> for the solid fire storage, but it seems like the user should have to
> define some sort of pool that the luns get carved out of, and you
> should be able to get the stats for that, right? Or is a solid fire
> appliance only one pool per appliance? This isn't about billing, but
> just so cloudstack itself knows whether or not there is space left on
> the storage device, so cloudstack can go on allocating from a
> different primary storage as this one fills up. There are also
> notifications and things. It seems like there should be a call you can
> handle for this, maybe Edison knows.
>
> On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <sh...@gmail.com> wrote:
>> You respond to more than attach and detach, right? Don't you create luns as
>> well? Or are you just referring to the hypervisor stuff?
>>
>> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>>
>>> Hi Marcus,
>>>
>>> I never need to respond to a CreateStoragePool call for either XenServer
>>> or
>>> VMware.
>>>
>>> What happens is I respond only to the Attach- and Detach-volume commands.
>>>
>>> Let's say an attach comes in:
>>>
>>> In this case, I check to see if the storage is "managed." Talking
>>> XenServer
>>> here, if it is, I log in to the LUN that is the disk we want to attach.
>>> After, if this is the first time attaching this disk, I create an SR and a
>>> VDI within the SR. If it is not the first time attaching this disk, the
>>> LUN
>>> already has the SR and VDI on it.
>>>
>>> Once this is done, I let the normal "attach" logic run because this logic
>>> expected an SR and a VDI and now it has it.
>>>
>>> It's the same thing for VMware: Just substitute datastore for SR and VMDK
>>> for VDI.
>>>
>>> Does that make sense?
>>>
>>> Thanks!
>>>
>>>
>>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
>>> <sh...@gmail.com>wrote:
>>>
>>> > What do you do with Xen? I imagine the user enters the SAN details when
>>> > registering the pool? And the pool details are basically just instructions
>>> > on
>>> > how to log into a target, correct?
>>> >
>>> > You can choose to log in a KVM host to the target during
>>> > createStoragePool
>>> > and save the pool in a map, or just save the pool info in a map for
>>> > future
>>> > reference by uuid, for when you do need to log in. The createStoragePool
>>> > then just becomes a way to save the pool info to the agent. Personally,
>>> > I'd
>>> > log in on the pool create and look/scan for specific luns when they're
>>> > needed, but I haven't thought it through thoroughly. I just say that
>>> > mainly
>>> > because login only happens once, the first time the pool is used, and
>>> > every
>>> > other storage command is about discovering new luns or maybe
>>> > deleting/disconnecting luns no longer needed. On the other hand, you
>>> > could
>>> > do all of the above: log in on pool create, then also check if you're
>>> > logged in on other commands and log in if you've lost connection.
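A minimal sketch of the "save the pool info in a map" idea described above. The class and method names here are illustrative only, not CloudStack's actual StorageAdaptor interface; the point is that createStoragePool can simply be an idempotent cache of the target details, keyed by uuid, with the actual iSCSI login done here or deferred until a LUN is needed.

// Illustrative sketch only -- not the real CloudStack StorageAdaptor interface.
// createStoragePool just records the target details keyed by uuid (it may be
// called repeatedly, so it must be idempotent); the iscsiadm login can happen
// here or be deferred until a specific LUN is actually needed.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ManagedIscsiPoolCache {

    public static class PoolInfo {
        final String uuid;
        final String host;   // SAN / iSCSI portal address
        final int port;      // typically 3260
        final String path;   // whatever the plug-in chooses to store, e.g. a target IQN prefix

        PoolInfo(String uuid, String host, int port, String path) {
            this.uuid = uuid;
            this.host = host;
            this.port = port;
            this.path = path;
        }
    }

    private final Map<String, PoolInfo> pools = new ConcurrentHashMap<>();

    // Called whenever the management server (re)sends the pool; safe to call repeatedly.
    public PoolInfo createStoragePool(String uuid, String host, int port, String path) {
        return pools.computeIfAbsent(uuid, k -> new PoolInfo(uuid, host, port, path));
    }

    public PoolInfo getStoragePool(String uuid) {
        return pools.get(uuid);
    }
}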
>>> >
>>> > With Xen, what does your registered pool show in the UI for avail/used
>>> > capacity, and how does it get that info? I assume there is some sort of
>>> > disk pool that the luns are carved from, and that your plugin is called
>>> > to
>>> > talk to the SAN and expose to the user how much of that pool has been
>>> > allocated. Knowing how you already solve these problems with Xen will
>>> > help
>>> > figure out what to do with KVM.
>>> >
>>> > If this is the case, I think the plugin can continue to handle it rather
>>> > than getting details from the agent. I'm not sure if that means nulls
>>> > are
>>> > OK for these on the agent side or what, I need to look at the storage
>>> > plugin arch more closely.
>>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mi...@solidfire.com>
>>> > wrote:
>>> >
>>> > > Hey Marcus,
>>> > >
>>> > > I'm reviewing your e-mails as I implement the necessary methods in new
>>> > > classes.
>>> > >
>>> > > "So, referencing StorageAdaptor.java, createStoragePool accepts all of
>>> > > the pool data (host, port, name, path) which would be used to log the
>>> > > host into the initiator."
>>> > >
>>> > > Can you tell me, in my case, since a storage pool (primary storage) is
>>> > > actually the SAN, I wouldn't really be logging into anything at this
>>> > point,
>>> > > correct?
>>> > >
>>> > > Also, what kind of capacity, available, and used bytes make sense to
>>> > report
>>> > > for KVMStoragePool (since KVMStoragePool represents the SAN in my case
>>> > and
>>> > > not an individual LUN)?
>>> > >
>>> > > Thanks!
>>> > >
>>> > >
>>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com
>>> > > >wrote:
>>> > >
>>> > > > Ok, KVM will be close to that, of course, because only the
>>> > > > hypervisor
>>> > > > classes differ, the rest is all mgmt server. Creating a volume is
>>> > > > just
>>> > > > a db entry until it's deployed for the first time.
>>> > > > AttachVolumeCommand
>>> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
>>> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
>>> > > > StorageAdaptor) to log in the host to the target and then you have a
>>> > > > block device.  Maybe libvirt will do that for you, but my quick read
>>> > > > made it sound like the iscsi libvirt pool type is actually a pool,
>>> > > > not
>>> > > > a lun or volume, so you'll need to figure out if that works or if
>>> > > > you'll have to use iscsiadm commands.
>>> > > >
>>> > > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
>>> > > > doesn't really manage your pool the way you want), you're going to
>>> > > > have to create a version of KVMStoragePool class and a
>>> > > > StorageAdaptor
>>> > > > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
>>> > > > implementing all of the methods, then in KVMStorageManager.java
>>> > > > there's a "_storageMapper" map. This is used to select the correct
>>> > > > adaptor, you can see in this file that every call first pulls the
>>> > > > correct adaptor out of this map via getStorageAdaptor. So you can
>>> > > > see
>>> > > > a comment in this file that says "add other storage adaptors here",
>>> > > > where it puts to this map, this is where you'd register your
>>> > > > adaptor.
>>> > > >
>>> > > > So, referencing StorageAdaptor.java, createStoragePool accepts all
>>> > > > of
>>> > > > the pool data (host, port, name, path) which would be used to log
>>> > > > the
>>> > > > host into the initiator. I *believe* the method getPhysicalDisk will
>>> > > > need to do the work of attaching the lun.  AttachVolumeCommand calls
>>> > > > this and then creates the XML diskdef and attaches it to the VM.
>>> > > > Now,
>>> > > > one thing you need to know is that createStoragePool is called
>>> > > > often,
>>> > > > sometimes just to make sure the pool is there. You may want to
>>> > > > create
>>> > > > a map in your adaptor class and keep track of pools that have been
>>> > > > created, LibvirtStorageAdaptor doesn't have to do this because it
>>> > > > asks
>>> > > > libvirt about which storage pools exist. There are also calls to
>>> > > > refresh the pool stats, and all of the other calls can be seen in
>>> > > > the
>>> > > > StorageAdaptor as well. There's a createPhysical disk, clone, etc,
>>> > > > but
>>> > > > it's probably a hold-over from 4.1, as I have the vague idea that
>>> > > > volumes are created on the mgmt server via the plugin now, so
>>> > > > whatever
>>> > > > doesn't apply can just be stubbed out (or optionally
>>> > > > extended/reimplemented here, if you don't mind the hosts talking to
>>> > > > the san api).
>>> > > >
>>> > > > There is a difference between attaching new volumes and launching a
>>> > > > VM
>>> > > > with existing volumes.  In the latter case, the VM definition that
>>> > > > was
>>> > > > passed to the KVM agent includes the disks, (StartCommand).
>>> > > >
>>> > > > I'd be interested in how your pool is defined for Xen, I imagine it
>>> > > > would need to be kept the same. Is it just a definition to the SAN
>>> > > > (ip address or some such, port number) and perhaps a volume pool
>>> > > > name?
>>> > > >
>>> > > > > If there is a way for me to update the ACL list on the SAN to have
>>> > > only a
>>> > > > > single KVM host have access to the volume, that would be ideal.
>>> > > >
>>> > > > That depends on your SAN API.  I was under the impression that the
>>> > > > storage plugin framework allowed for acls, or for you to do whatever
>>> > > > you want for create/attach/delete/snapshot, etc. You'd just call
>>> > > > your
>>> > > > SAN API with the host info for the ACLs prior to when the disk is
>>> > > > attached (or the VM is started).  I'd have to look more at the
>>> > > > framework to know the details, in 4.1 I would do this in
>>> > > > getPhysicalDisk just prior to connecting up the LUN.
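Assuming open-iscsi is installed on the KVM host, the getPhysicalDisk step described above could shell out to iscsiadm roughly as sketched below. This is a sketch only: the helper class is hypothetical, and the by-path device name follows the usual udev convention rather than anything CloudStack-specific.

// Sketch: log the KVM host in to a single-LUN iSCSI target and return the
// resulting block device. Assumes the open-iscsi utilities (iscsiadm) are
// installed on the host; error handling is deliberately minimal.
import java.io.IOException;

public class IscsiLunConnector {

    public String connectLun(String portalIp, int port, String iqn)
            throws IOException, InterruptedException {
        String portal = portalIp + ":" + port;
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
        // udev exposes the LUN at a stable by-path name once the session is up
        return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
    }

    public void disconnectLun(String portalIp, int port, String iqn)
            throws IOException, InterruptedException {
        String portal = portalIp + ":" + port;
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--logout");
    }

    private static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + String.join(" ", cmd));
        }
    }
}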
>>> > > >
>>> > > >
>>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>> > > > <mi...@solidfire.com> wrote:
>>> > > > > OK, yeah, the ACL part will be interesting. That is a bit
>>> > > > > different
>>> > > from
>>> > > > how
>>> > > > > it works with XenServer and VMware.
>>> > > > >
>>> > > > > Just to give you an idea how it works in 4.2 with XenServer:
>>> > > > >
>>> > > > > * The user creates a CS volume (this is just recorded in the
>>> > > > cloud.volumes
>>> > > > > table).
>>> > > > >
>>> > > > > * The user attaches the volume as a disk to a VM for the first
>>> > > > > time
>>> > (if
>>> > > > the
>>> > > > > storage allocator picks the SolidFire plug-in, the storage
>>> > > > > framework
>>> > > > invokes
>>> > > > > a method on the plug-in that creates a volume on the SAN...info
>>> > > > > like
>>> > > the
>>> > > > IQN
>>> > > > > of the SAN volume is recorded in the DB).
>>> > > > >
>>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed.
>>> > > > > It
>>> > > > > determines based on a flag passed in that the storage in question
>>> > > > > is
>>> > > > > "CloudStack-managed" storage (as opposed to "traditional"
>>> > preallocated
>>> > > > > storage). This tells it to discover the iSCSI target. Once
>>> > > > > discovered
>>> > > it
>>> > > > > determines if the iSCSI target already contains a storage
>>> > > > > repository
>>> > > (it
>>> > > > > would if this were a re-attach situation). If it does contain an
>>> > > > > SR
>>> > > > already,
>>> > > > > then there should already be one VDI, as well. If there is no SR,
>>> > > > > an
>>> > SR
>>> > > > is
>>> > > > > created and a single VDI is created within it (that takes up about
>>> > > > > as
>>> > > > much
>>> > > > > space as was requested for the CloudStack volume).
>>> > > > >
>>> > > > > * The normal attach-volume logic continues (it depends on the
>>> > existence
>>> > > > of
>>> > > > > an SR and a VDI).
>>> > > > >
>>> > > > > The VMware case is essentially the same (mainly just substitute
>>> > > datastore
>>> > > > > for SR and VMDK for VDI).
>>> > > > >
>>> > > > > In both cases, all hosts in the cluster have discovered the iSCSI
>>> > > target,
>>> > > > > but only the host that is currently running the VM that is using
>>> > > > > the
>>> > > VDI
>>> > > > (or
>>> > > > > VMDK) is actually using the disk.
>>> > > > >
>>> > > > > Live Migration should be OK because the hypervisors communicate
>>> > > > > with
>>> > > > > whatever metadata they have on the SR (or datastore).
>>> > > > >
>>> > > > > I see what you're saying with KVM, though.
>>> > > > >
>>> > > > > In that case, the hosts are clustered only in CloudStack's eyes.
>>> > > > > CS
>>> > > > controls
>>> > > > > Live Migration. You don't really need a clustered filesystem on
>>> > > > > the
>>> > > LUN.
>>> > > > The
>>> > > > > LUN could be handed over raw to the VM using it.
>>> > > > >
>>> > > > > If there is a way for me to update the ACL list on the SAN to have
>>> > > only a
>>> > > > > single KVM host have access to the volume, that would be ideal.
>>> > > > >
>>> > > > > Also, I agree I'll need to use iscsiadm to discover and log in to
>>> > > > > the
>>> > > > iSCSI
>>> > > > > target. I'll also need to take the resultant new device and pass
>>> > > > > it
>>> > > into
>>> > > > the
>>> > > > > VM.
>>> > > > >
>>> > > > > Does this sound reasonable? Please call me out on anything I seem
>>> > > > incorrect
>>> > > > > about. :)
>>> > > > >
>>> > > > > Thanks for all the thought on this, Marcus!
>>> > > > >
>>> > > > >
>>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
>>> > shadowsor@gmail.com>
>>> > > > > wrote:
>>> > > > >>
>>> > > > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
>>> > > attach
>>> > > > >> the disk def to the vm. You may need to do your own
>>> > > > >> StorageAdaptor
>>> > and
>>> > > > run
>>> > > > >> iscsiadm commands to accomplish that, depending on how the
>>> > > > >> libvirt
>>> > > iscsi
>>> > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
>>> > > works
>>> > > > on
>>> > > > >> xen at the moment, nor is it ideal.
>>> > > > >>
>>> > > > >> Your plugin will handle acls as far as which host can see which
>>> > > > >> luns
>>> > > as
>>> > > > >> well, I remember discussing that months ago, so that a disk won't
>>> > > > >> be
>>> > > > >> connected until the hypervisor has exclusive access, so it will
>>> > > > >> be
>>> > > safe
>>> > > > and
>>> > > > >> fence the disk from rogue nodes that cloudstack loses
>>> > > > >> connectivity
>>> > > > with. It
>>> > > > >> should revoke access to everything but the target host... Except
>>> > > > >> for
>>> > > > during
>>> > > > >> migration but we can discuss that later, there's a migration prep
>>> > > > process
>>> > > > >> where the new host can be added to the acls, and the old host can
>>> > > > >> be
>>> > > > removed
>>> > > > >> post migration.
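The ACL sequencing described above would be driven by the storage plug-in against the SAN's own API. Purely as a hypothetical shape (none of these names come from CloudStack or the SolidFire API), it might look like:

// Hypothetical interface only -- it names no real CloudStack or SolidFire API,
// it just captures the sequencing described above: grant before attach/start,
// allow both hosts during migration prep, revoke the old host afterwards.
public interface SanAccessControl {

    // Called before the disk is attached (or the VM is started) on a host.
    void grantAccess(String hostIqn, String volumeIqn);

    // Called after detach, or once another host owns the disk, to fence rogue nodes.
    void revokeAccess(String hostIqn, String volumeIqn);

    // Migration prep: briefly allow both source and destination hosts.
    default void prepareMigration(String sourceHostIqn, String destHostIqn, String volumeIqn) {
        grantAccess(destHostIqn, volumeIqn);
    }

    // Post migration: drop the source host so only the destination can see the LUN.
    default void completeMigration(String sourceHostIqn, String destHostIqn, String volumeIqn) {
        revokeAccess(sourceHostIqn, volumeIqn);
    }
}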
>>> > > > >>
>>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>>> > > mike.tutkowski@solidfire.com
>>> > > > >
>>> > > > >> wrote:
>>> > > > >>>
>>> > > > >>> Yeah, that would be ideal.
>>> > > > >>>
>>> > > > >>> So, I would still need to discover the iSCSI target, log in to
>>> > > > >>> it,
>>> > > then
>>> > > > >>> figure out what /dev/sdX was created as a result (and leave it
>>> > > > >>> as
>>> > is
>>> > > -
>>> > > > do
>>> > > > >>> not format it with any file system...clustered or not). I would
>>> > pass
>>> > > > that
>>> > > > >>> device into the VM.
>>> > > > >>>
>>> > > > >>> Kind of accurate?
>>> > > > >>>
>>> > > > >>>
>>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
>>> > > shadowsor@gmail.com>
>>> > > > >>> wrote:
>>> > > > >>>>
>>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
>>> > There
>>> > > > are
>>> > > > >>>> ones that work for block devices rather than files. You can
>>> > > > >>>> piggy
>>> > > > back off
>>> > > > >>>> of the existing disk definitions and attach it to the vm as a
>>> > block
>>> > > > device.
>>> > > > >>>> The definition is an XML string per libvirt XML format. You may
>>> > want
>>> > > > to use
>>> > > > >>>> an alternate path to the disk rather than just /dev/sdx like I
>>> > > > mentioned,
>>> > > > >>>> there are by-id paths to the block devices, as well as other
>>> > > > >>>> ones
>>> > > > that will
>>> > > > >>>> be consistent and easier for management, not sure how familiar
>>> > > > >>>> you
>>> > > > are with
>>> > > > >>>> device naming on Linux.
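As a point of reference, attaching a raw block device (rather than a file-backed image) through the libvirt Java bindings looks roughly like the sketch below; the by-id path and the domain name are made-up examples.

// Sketch using the libvirt Java bindings (org.libvirt): attach an
// already-logged-in iSCSI LUN to a running guest as a raw virtio disk.
// The by-id path and the domain name are made-up examples.
import org.libvirt.Connect;
import org.libvirt.Domain;

public class AttachRawLun {
    public static void main(String[] args) throws Exception {
        String diskXml =
            "<disk type='block' device='disk'>"
          + "  <driver name='qemu' type='raw' cache='none'/>"
          + "  <source dev='/dev/disk/by-id/wwn-0x600a0b80005ad000000000000000000a'/>"
          + "  <target dev='vdb' bus='virtio'/>"
          + "</disk>";

        Connect conn = new Connect("qemu:///system");
        Domain dom = conn.domainLookupByName("i-2-7-VM"); // example instance name
        dom.attachDevice(diskXml);
        conn.close();
    }
}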
>>> > > > >>>>
>>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>>> > > > >>>> <sh...@gmail.com>
>>> > > > wrote:
>>> > > > >>>>>
>>> > > > >>>>> No, as that would rely on virtualized network/iscsi initiator
>>> > > inside
>>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
>>> > > > hypervisor) as
>>> > > > >>>>> a disk to the VM, rather than attaching some image file that
>>> > > resides
>>> > > > on a
>>> > > > >>>>> filesystem, mounted on the host, living on a target.
>>> > > > >>>>>
>>> > > > >>>>> Actually, if you plan on the storage supporting live migration
>>> > > > >>>>> I
>>> > > > think
>>> > > > >>>>> this is the only way. You can't put a filesystem on it and
>>> > > > >>>>> mount
>>> > it
>>> > > > in two
>>> > > > >>>>> places to facilitate migration unless it's a clustered
>>> > > > >>>>> filesystem,
>>> > > in
>>> > > > which
>>> > > > >>>>> case you're back to shared mount point.
>>> > > > >>>>>
>>> > > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
>>> > with a
>>> > > > xen
>>> > > > >>>>> specific cluster management, a custom CLVM. They don't use a
>>> > > > filesystem
>>> > > > >>>>> either.
>>> > > > >>>>>
>>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>> > > > >>>>> <mi...@solidfire.com> wrote:
>>> > > > >>>>>>
>>> > > > >>>>>> When you say, "wire up the lun directly to the vm," do you
>>> > > > >>>>>> mean
>>> > > > >>>>>> circumventing the hypervisor? I didn't think we could do that
>>> > > > >>>>>> in
>>> > > CS.
>>> > > > >>>>>> OpenStack, on the other hand, always circumvents the
>>> > > > >>>>>> hypervisor,
>>> > > as
>>> > > > far as I
>>> > > > >>>>>> know.
>>> > > > >>>>>>
>>> > > > >>>>>>
>>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
>>> > > > shadowsor@gmail.com>
>>> > > > >>>>>> wrote:
>>> > > > >>>>>>>
>>> > > > >>>>>>> Better to wire up the lun directly to the vm unless there is
>>> > > > >>>>>>> a
>>> > > good
>>> > > > >>>>>>> reason not to.
>>> > > > >>>>>>>
>>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
>>> > shadowsor@gmail.com>
>>> > > > >>>>>>> wrote:
>>> > > > >>>>>>>>
>>> > > > >>>>>>>> You could do that, but as mentioned I think it's a mistake
>>> > > > >>>>>>>> to
>>> > go
>>> > > to
>>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
>>> > and
>>> > > > then putting
>>> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2
>>> > > > >>>>>>>> or
>>> > > even
>>> > > > RAW disk
>>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along
>>> > > > >>>>>>>> the
>>> > > > way, and have
>>> > > > >>>>>>>> more overhead with the filesystem and its journaling, etc.
>>> > > > >>>>>>>>
>>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
>>> > > > >>>>>>>>>
>>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
>>> > CS.
>>> > > > >>>>>>>>>
>>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is by
>>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the location of
>>> > > > >>>>>>>>> the
>>> > > > share.
>>> > > > >>>>>>>>>
>>> > > > >>>>>>>>> They can set up their share using Open iSCSI by
>>> > > > >>>>>>>>> discovering
>>> > > their
>>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere
>>> > > > >>>>>>>>> on
>>> > > > their file
>>> > > > >>>>>>>>> system.
>>> > > > >>>>>>>>>
>>> > > > >>>>>>>>> Would it make sense for me to just do that discovery,
>>> > > > >>>>>>>>> logging
>>> > > in,
>>> > > > >>>>>>>>> and mounting behind the scenes for them and letting the
>>> > current
>>> > > > code manage
>>> > > > >>>>>>>>> the rest as it currently does?
>>> > > > >>>>>>>>>
>>> > > > >>>>>>>>>
>>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
>>> > > > >>>>>>>>>>
>>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
>>> > catch
>>> > > up
>>> > > > >>>>>>>>>> on the work done in KVM, but this is basically just disk
>>> > > > snapshots + memory
>>> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
>>> > handled
>>> > > > by the SAN,
>>> > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
>>> > something
>>> > > > else. This is
>>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want to
>>> > see
>>> > > > how others are
>>> > > > >>>>>>>>>> planning theirs.
>>> > > > >>>>>>>>>>
>>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
>>> > > shadowsor@gmail.com
>>> > > > >
>>> > > > >>>>>>>>>> wrote:
>>> > > > >>>>>>>>>>>
>>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
>>> > > > >>>>>>>>>>> style
>>> > on
>>> > > > an
>>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
>>> > > > >>>>>>>>>>> format.
>>> > > > Otherwise you're
>>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it, creating
>>> > > > >>>>>>>>>>> a
>>> > > > QCOW2 disk image,
>>> > > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
>>> > > > >>>>>>>>>>>
>>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
>>> > VM,
>>> > > > and
>>> > > > >>>>>>>>>>> handling snapshots on the San side via the storage
>>> > > > >>>>>>>>>>> plugin
>>> > is
>>> > > > best. My
>>> > > > >>>>>>>>>>> impression from the storage plugin refactor was that
>>> > > > >>>>>>>>>>> there
>>> > > was
>>> > > > a snapshot
>>> > > > >>>>>>>>>>> service that would allow the San to handle snapshots.
>>> > > > >>>>>>>>>>>
>>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
>>> > > > shadowsor@gmail.com>
>>> > > > >>>>>>>>>>> wrote:
>>> > > > >>>>>>>>>>>>
>>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
>>> > end,
>>> > > > if
>>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
>>> > > > >>>>>>>>>>>> call
>>> > > > your plugin for
>>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
>>> > far
>>> > > > as space, that
>>> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
>>> > carve
>>> > > > out luns from a
>>> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
>>> > > > independent of the
>>> > > > >>>>>>>>>>>> LUN size the host sees.
>>> > > > >>>>>>>>>>>>
>>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> Hey Marcus,
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
>>> > > > >>>>>>>>>>>>> won't
>>> > > > work
>>> > > > >>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
>>> > VDI
>>> > > > for
>>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage repository
>>> > > > >>>>>>>>>>>>> as
>>> > > the
>>> > > > volume is on.
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
>>> > > > >>>>>>>>>>>>> XenServer
>>> > > and
>>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>>> > snapshots
>>> > > > in 4.2) is I'd
>>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the user
>>> > > > requested for the
>>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
>>> > > > >>>>>>>>>>>>> thinly
>>> > > > provisions volumes,
>>> > > > >>>>>>>>>>>>> so the space is not actually used unless it needs to
>>> > > > >>>>>>>>>>>>> be).
>>> > > > The CloudStack
>>> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
>>> > until a
>>> > > > hypervisor
>>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on
>>> > > > >>>>>>>>>>>>> the
>>> > > > SAN volume.
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
>>> > > > >>>>>>>>>>>>> creation
>>> > of
>>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even
>>> > > > >>>>>>>>>>>>> if
>>> > > > there were support
>>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
>>> > > > >>>>>>>>>>>>> iSCSI
>>> > > > target), then I
>>> > > > >>>>>>>>>>>>> don't see how using this model will work.
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
>>> > > works
>>> > > > >>>>>>>>>>>>> with DIR?
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> What do you think?
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> Thanks
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>> > > > >>>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
>>> > > today.
>>> > > > >>>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
>>> > well
>>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>> > > > >>>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>> > > > >>>>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe
>>> > > > >>>>>>>>>>>>>>> it
>>> > > just
>>> > > > >>>>>>>>>>>>>>> acts like a
>>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
>>> > > > end-user
>>> > > > >>>>>>>>>>>>>>> is
>>> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
>>> > hosts
>>> > > > can
>>> > > > >>>>>>>>>>>>>>> access,
>>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
>>> > > > storage.
>>> > > > >>>>>>>>>>>>>>> It could
>>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>>> > > > >>>>>>>>>>>>>>> filesystem,
>>> > > > >>>>>>>>>>>>>>> cloudstack just
>>> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
>>> > > > >>>>>>>>>>>>>>> images.
>>> > > > >>>>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
>>> > same
>>> > > > >>>>>>>>>>>>>>> > time.
>>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
>>> > > > >>>>>>>>>>>>>>> >
>>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>> > > > >>>>>>>>>>>>>>> >>
>>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
>>> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
>>> > > > >>>>>>>>>>>>>>> >> default              active     yes
>>> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
>>> > > > >>>>>>>>>>>>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>
>>> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
>>> > based
>>> > > on
>>> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
>>> > LUN,
>>> > > > so
>>> > > > >>>>>>>>>>>>>>> >>> there would only
>>> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>>> > > (libvirt)
>>> > > > >>>>>>>>>>>>>>> >>> storage pool.
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
>>> > > > >>>>>>>>>>>>>>> >>> iSCSI
>>> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
>>> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
>>> > > > >>>>>>>>>>>>>>> >>> libvirt
>>> > > does
>>> > > > >>>>>>>>>>>>>>> >>> not support
>>> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see
>>> > > > >>>>>>>>>>>>>>> >>> if
>>> > > > libvirt
>>> > > > >>>>>>>>>>>>>>> >>> supports
>>> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
>>> > since
>>> > > > >>>>>>>>>>>>>>> >>> each one of its
>>> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>>> > > > targets/LUNs).
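A quick way to test that is to define a single-target iSCSI pool through the libvirt Java bindings, roughly as sketched below (the portal address and IQN are placeholders); per libvirt's storage docs, the LUNs behind the target then show up as that pool's volumes.

// Sketch: register a single iSCSI target as a libvirt storage pool via the
// Java bindings. Per libvirt's docs, the LUNs behind the target become the
// pool's volumes and cannot be created through libvirt itself. The portal
// address and IQN below are placeholders.
import org.libvirt.Connect;
import org.libvirt.StoragePool;

public class DefineIscsiPool {
    public static void main(String[] args) throws Exception {
        String poolXml =
            "<pool type='iscsi'>"
          + "  <name>cs-managed-volume-1</name>"
          + "  <source>"
          + "    <host name='192.168.1.100'/>"
          + "    <device path='iqn.2010-01.com.example:target-1'/>"
          + "  </source>"
          + "  <target>"
          + "    <path>/dev/disk/by-path</path>"
          + "  </target>"
          + "</pool>";

        Connect conn = new Connect("qemu:///system");
        StoragePool pool = conn.storagePoolCreateXML(poolXml, 0); // transient pool; logs the host in
        System.out.println("volumes in pool: " + pool.numOfVolumes());
        conn.close();
    }
}

If two such pools against two different targets can be active at once (the virsh pool-list output above suggests multiple pools are fine), the one-LUN-per-target model would map onto one libvirt pool per CloudStack volume.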
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>>> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>         }
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>         @Override
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>         }
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>     }
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
>>> > > > >>>>>>>>>>>>>>> >>>> currently
>>> > > being
>>> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
>>> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
>>> > > > >>>>>>>>>>>>>>> >>>> someone
>>> > > > >>>>>>>>>>>>>>> >>>> selects the
>>> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI,
>>> > > > >>>>>>>>>>>>>>> >>>> is
>>> > > > that
>>> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
>>> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>> Thanks!
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
>>> > > > >>>>>>>>>>>>>>> >>>> Sorensen
>>> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>>> > > > >>>>>>>>>>>>>>> >>>> wrote:
>>> > > > >>>>>>>>>>>>>>> >>>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
>>> > > > >>>>>>>>>>>>>>> >>>>>
>>> > > > >>>>>>>>>>>>>>> >>>>>
>>> > > http://libvirt.org/storage.html#StorageBackendISCSI
>>> > > > >>>>>>>>>>>>>>> >>>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
>>> > server,
>>> > > > and
>>> > > > >>>>>>>>>>>>>>> >>>>> cannot be
>>> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
>>> > > > >>>>>>>>>>>>>>> >>>>> believe
>>> > > your
>>> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
>>> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging
>>> > > > >>>>>>>>>>>>>>> >>>>> in
>>> > > and
>>> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
>>> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work
>>> > > > >>>>>>>>>>>>>>> >>>>> in
>>> > the
>>> > > > Xen
>>> > > > >>>>>>>>>>>>>>> >>>>> stuff).
>>> > > > >>>>>>>>>>>>>>> >>>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>>> > > > >>>>>>>>>>>>>>> >>>>> provides
>>> > a
>>> > > > 1:1
>>> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
>>> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device
>>> > > > >>>>>>>>>>>>>>> >>>>> as
>>> > a
>>> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
>>> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
>>> > about
>>> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
>>> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your
>>> > own
>>> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
>>> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
>>> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>>> > >  We
>>> > > > >>>>>>>>>>>>>>> >>>>> can cross that
>>> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>> > > > >>>>>>>>>>>>>>> >>>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
>>> > > > >>>>>>>>>>>>>>> >>>>> java
>>> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
>>> > > > >>>>>>>>>>>>>>> >>>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/ Normally,
>>> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
>>> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
>>> > that
>>> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>> > > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see
>>> > > > >>>>>>>>>>>>>>> >>>>> how
>>> > > that
>>> > > > >>>>>>>>>>>>>>> >>>>> is done for
>>> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
>>> > > > >>>>>>>>>>>>>>> >>>>> java
>>> > > code
>>> > > > >>>>>>>>>>>>>>> >>>>> to see if you
>>> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
>>> > > storage
>>> > > > >>>>>>>>>>>>>>> >>>>> pools before you
>>> > > > >>>>>>>>>>>>>>> >>>>> get started.
>>> > > > >>>>>>>>>>>>>>> >>>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
>>> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
>>> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
>>> > > > >>>>>>>>>>>>>>> >>>>> > more,
>>> > > but
>>> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
>>> > > > >>>>>>>>>>>>>>> >>>>> > supports
>>> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>>> > > > >>>>>>>>>>>>>>> >>>>> > targets,
>>> > > > >>>>>>>>>>>>>>> >>>>> > right?
>>> > > > >>>>>>>>>>>>>>> >>>>> >
>>> > > > >>>>>>>>>>>>>>> >>>>> >
>>> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>>> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
>>> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
>>> > > classes
>>> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
>>> > > > >>>>>>>>>>>>>>> >>>>> >> last
>>> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
>>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
>>> > Sorensen
>>> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
>>> > > for
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
>>> > > > login.
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>>> > and
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
>>> > I
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
>>> > > > framework
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
>>> > delete
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
>>> > > > mapping
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>>> > > > admin
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
>>> > would
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
>>> > needed
>>> > > > to
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
>>> > > work
>>> > > > on
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
>>> > will
>>> > > > need
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
>>> > > expect
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
>>> > this
>>> > > to
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
>>> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™
>>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>>>> >> --
>>> > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>>> > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>> > > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>> > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>>> > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>>> > > > >>>>>>>>>>>>>>> >>>>> >
>>> > > > >>>>>>>>>>>>>>> >>>>> >
>>> > > > >>>>>>>>>>>>>>> >>>>> >
>>> > > > >>>>>>>>>>>>>>> >>>>> >
>>> > > > >>>>>>>>>>>>>>> >>>>> > --
>>> > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>>> > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>> > > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>>> > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>>> > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>>
>>> > > > >>>>>>>>>>>>>>> >>>> --
>>> > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
>>> > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>> > > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>>> > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
>>> > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>>
>>> > > > >>>>>>>>>>>>>>> >>> --
>>> > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
>>> > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>> > > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>>> > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
>>> > > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>>> > > > >>>>>>>>>>>>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>
>>> > > > >>>>>>>>>>>>>>> >>
>>> > > > >>>>>>>>>>>>>>> >> --
>>> > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
>>> > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>> > > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>>> > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
>>> > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>>> > > > >>>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>> --
>>> > > > >>>>>>>>>>>>>> Mike Tutkowski
>>> > > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>> > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>> > > > >>>>>>>>>>>>>> o: 303.746.7302
>>> > > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>>
>>> > > > >>>>>>>>>>>>> --
>>> > > > >>>>>>>>>>>>> Mike Tutkowski
>>> > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>> > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>> > > > >>>>>>>>>>>>> o: 303.746.7302
>>> > > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
>>> > > > >>>>>>>>>
>>> > > > >>>>>>>>>
>>> > > > >>>>>>>>>
>>> > > > >>>>>>>>>
>>> > > > >>>>>>>>> --
>>> > > > >>>>>>>>> Mike Tutkowski
>>> > > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>> > > > >>>>>>>>> e: mike.tutkowski@solidfire.com
>>> > > > >>>>>>>>> o: 303.746.7302
>>> > > > >>>>>>>>> Advancing the way the world uses the cloud™
>>> > > > >>>>>>
>>> > > > >>>>>>
>>> > > > >>>>>>
>>> > > > >>>>>>
>>> > > > >>>>>> --
>>> > > > >>>>>> Mike Tutkowski
>>> > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
>>> > > > >>>>>> e: mike.tutkowski@solidfire.com
>>> > > > >>>>>> o: 303.746.7302
>>> > > > >>>>>> Advancing the way the world uses the cloud™
>>> > > > >>>
>>> > > > >>>
>>> > > > >>>
>>> > > > >>>
>>> > > > >>> --
>>> > > > >>> Mike Tutkowski
>>> > > > >>> Senior CloudStack Developer, SolidFire Inc.
>>> > > > >>> e: mike.tutkowski@solidfire.com
>>> > > > >>> o: 303.746.7302
>>> > > > >>> Advancing the way the world uses the cloud™
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > > --
>>> > > > > Mike Tutkowski
>>> > > > > Senior CloudStack Developer, SolidFire Inc.
>>> > > > > e: mike.tutkowski@solidfire.com
>>> > > > > o: 303.746.7302
>>> > > > > Advancing the way the world uses the cloud™
>>> > > >
>>> > >
>>> > >
>>> > >
>>> > > --
>>> > > *Mike Tutkowski*
>>> > > *Senior CloudStack Developer, SolidFire Inc.*
>>> > > e: mike.tutkowski@solidfire.com
>>> > > o: 303.746.7302
>>> > > Advancing the way the world uses the
>>> > > cloud<http://solidfire.com/solution/overview/?video=play>
>>> > > *™*
>>> > >
>>> >
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the
>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Ok, on most storage pools CloudStack shows how many GB are free/used when
listing the pool, both via the API and in the UI. I'm guessing those are
empty for the SolidFire storage, but it seems like the user should have to
define some sort of pool that the LUNs get carved out of, and you should be
able to get the stats for that, right? Or does a SolidFire appliance expose
only a single pool? This isn't about billing; it's so CloudStack itself knows
whether or not there is space left on the storage device and can go on
allocating from a different primary storage as this one fills up. There are
also capacity notifications and alerts tied to those numbers. It seems like
there should be a call you can handle for this, maybe Edison knows.
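
A minimal sketch of what reporting those stats from the agent side could look
like, assuming the SAN API exposes cluster-wide capacity. The class and
method names below (SanBackedStoragePool, SanApiClient, and its two getters)
are made up for illustration and are not the actual CloudStack interfaces:

    // Hedged sketch: a pool object that reports SAN-wide capacity rather than
    // per-LUN numbers. SanApiClient stands in for whatever the real SolidFire
    // API client provides.
    public class SanBackedStoragePool {

        // Placeholder for the SAN API client (illustrative, not a real API)
        public interface SanApiClient {
            long getMaxProvisionedBytes(); // total space the cluster allows to be provisioned
            long getUsedBytes();           // space already allocated to LUNs
        }

        private final SanApiClient san;

        public SanBackedStoragePool(SanApiClient san) {
            this.san = san;
        }

        public long getCapacity()  { return san.getMaxProvisionedBytes(); }
        public long getUsed()      { return san.getUsedBytes(); }
        public long getAvailable() { return getCapacity() - getUsed(); }
    }

If the appliance really is one big pool, these numbers would just be the
cluster totals, which is still enough for the allocator and capacity alerts.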

On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <sh...@gmail.com> wrote:
> You respond to more than attach and detach, right? Don't you create luns as
> well? Or are you just referring to the hypervisor stuff?
>
> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>>
>> Hi Marcus,
>>
>> I never need to respond to a CreateStoragePool call for either XenServer
>> or
>> VMware.
>>
>> What happens is I respond only to the Attach- and Detach-volume commands.
>>
>> Let's say an attach comes in:
>>
>> In this case, I check to see if the storage is "managed." Talking
>> XenServer
>> here, if it is, I log in to the LUN that is the disk we want to attach.
>> After, if this is the first time attaching this disk, I create an SR and a
>> VDI within the SR. If it is not the first time attaching this disk, the
>> LUN
>> already has the SR and VDI on it.
>>
>> Once this is done, I let the normal "attach" logic run because this logic
>> expected an SR and a VDI and now it has it.
>>
>> It's the same thing for VMware: Just substitute datastore for SR and VMDK
>> for VDI.
>>
>> Does that make sense?
>>
>> Thanks!
>>
>>
>> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen
>> <sh...@gmail.com>wrote:
>>
>> > What do you do with Xen? I imagine the user enters the SAN details when
>> > registering the pool? And the pool details are basically just instructions
>> > on how to log into a target, correct?
>> >
>> > You can choose to log in a KVM host to the target during
>> > createStoragePool
>> > and save the pool in a map, or just save the pool info in a map for
>> > future
>> > reference by uuid, for when you do need to log in. The createStoragePool
>> > then just becomes a way to save the pool info to the agent. Personally,
>> > I'd
>> > log in on the pool create and look/scan for specific luns when they're
>> > needed, but I haven't thought it through thoroughly. I just say that
>> > mainly
>> > because login only happens once, the first time the pool is used, and
>> > every
>> > other storage command is about discovering new luns or maybe
>> > deleting/disconnecting luns no longer needed. On the other hand, you
>> > could
>> > do all of the above: log in on pool create, then also check if you're
>> > logged in on other commands and log in if you've lost connection.
>> >
>> > With Xen, what does your registered pool show in the UI for avail/used
>> > capacity, and how does it get that info? I assume there is some sort of
>> > disk pool that the luns are carved from, and that your plugin is called
>> > to
>> > talk to the SAN and expose to the user how much of that pool has been
>> > allocated. Knowing how you already solve these problems with Xen will
>> > help
>> > figure out what to do with KVM.
>> >
>> > If this is the case, I think the plugin can continue to handle it rather
>> > than getting details from the agent. I'm not sure if that means nulls
>> > are
>> > OK for these on the agent side or what, I need to look at the storage
>> > plugin arch more closely.
>> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> > wrote:
>> >
>> > > Hey Marcus,
>> > >
>> > > I'm reviewing your e-mails as I implement the necessary methods in new
>> > > classes.
>> > >
>> > > "So, referencing StorageAdaptor.java, createStoragePool accepts all of
>> > > the pool data (host, port, name, path) which would be used to log the
>> > > host into the initiator."
>> > >
>> > > Can you tell me, in my case, since a storage pool (primary storage) is
>> > > actually the SAN, I wouldn't really be logging into anything at this
>> > point,
>> > > correct?
>> > >
>> > > Also, what kind of capacity, available, and used bytes make sense to
>> > report
>> > > for KVMStoragePool (since KVMStoragePool represents the SAN in my case
>> > and
>> > > not an individual LUN)?
>> > >
>> > > Thanks!
>> > >
>> > >
>> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com
>> > > >wrote:
>> > >
>> > > > Ok, KVM will be close to that, of course, because only the
>> > > > hypervisor
>> > > > classes differ, the rest is all mgmt server. Creating a volume is
>> > > > just
>> > > > a db entry until it's deployed for the first time.
>> > > > AttachVolumeCommand
>> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
>> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
>> > > > StorageAdaptor) to log in the host to the target and then you have a
>> > > > block device.  Maybe libvirt will do that for you, but my quick read
>> > > > made it sound like the iscsi libvirt pool type is actually a pool,
>> > > > not
>> > > > a lun or volume, so you'll need to figure out if that works or if
>> > > > you'll have to use iscsiadm commands.
>> > > >
>> > > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
>> > > > doesn't really manage your pool the way you want), you're going to
>> > > > have to create a version of KVMStoragePool class and a
>> > > > StorageAdaptor
>> > > > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
>> > > > implementing all of the methods, then in KVMStorageManager.java
>> > > > there's a "_storageMapper" map. This is used to select the correct
>> > > > adaptor, you can see in this file that every call first pulls the
>> > > > correct adaptor out of this map via getStorageAdaptor. So you can
>> > > > see
>> > > > a comment in this file that says "add other storage adaptors here",
>> > > > where it puts to this map, this is where you'd register your
>> > > > adaptor.
>> > > >
>> > > > So, referencing StorageAdaptor.java, createStoragePool accepts all
>> > > > of
>> > > > the pool data (host, port, name, path) which would be used to log
>> > > > the
>> > > > host into the initiator. I *believe* the method getPhysicalDisk will
>> > > > need to do the work of attaching the lun.  AttachVolumeCommand calls
>> > > > this and then creates the XML diskdef and attaches it to the VM.
>> > > > Now,
>> > > > one thing you need to know is that createStoragePool is called
>> > > > often,
>> > > > sometimes just to make sure the pool is there. You may want to
>> > > > create
>> > > > a map in your adaptor class and keep track of pools that have been
>> > > > created, LibvirtStorageAdaptor doesn't have to do this because it
>> > > > asks
>> > > > libvirt about which storage pools exist. There are also calls to
>> > > > refresh the pool stats, and all of the other calls can be seen in
>> > > > the
>> > > > StorageAdaptor as well. There's a createPhysicalDisk, clone, etc,
>> > > > but
>> > > > it's probably a hold-over from 4.1, as I have the vague idea that
>> > > > volumes are created on the mgmt server via the plugin now, so
>> > > > whatever
>> > > > doesn't apply can just be stubbed out (or optionally
>> > > > extended/reimplemented here, if you don't mind the hosts talking to
>> > > > the san api).
>> > > >
>> > > > There is a difference between attaching new volumes and launching a
>> > > > VM
>> > > > with existing volumes.  In the latter case, the VM definition that
>> > > > was
>> > > > passed to the KVM agent includes the disks, (StartCommand).
>> > > >
>> > > > I'd be interested in how your pool is defined for Xen, I imagine it
>> > > > would need to be kept the same. Is it just a definition to the SAN
>> > > > (ip address or some such, port number) and perhaps a volume pool
>> > > > name?
>> > > >
>> > > > > If there is a way for me to update the ACL list on the SAN to have
>> > > only a
>> > > > > single KVM host have access to the volume, that would be ideal.
>> > > >
>> > > > That depends on your SAN API.  I was under the impression that the
>> > > > storage plugin framework allowed for acls, or for you to do whatever
>> > > > you want for create/attach/delete/snapshot, etc. You'd just call
>> > > > your
>> > > > SAN API with the host info for the ACLs prior to when the disk is
>> > > > attached (or the VM is started).  I'd have to look more at the
>> > > > framework to know the details, in 4.1 I would do this in
>> > > > getPhysicalDisk just prior to connecting up the LUN.
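
A very rough skeleton of that kind of adaptor, just to make the shape
concrete. The class and method names here are illustrative rather than the
actual StorageAdaptor interface, but the ideas match the description above:
createStoragePool only caches the SAN details (and can be called repeatedly),
while the per-LUN iscsiadm work happens when a specific disk is requested.

    // Illustrative skeleton only; not the real CloudStack StorageAdaptor interface.
    import java.util.HashMap;
    import java.util.Map;

    public class SanStorageAdaptor {

        // Minimal holder for the registered pool (SAN) details
        public static class SanStoragePool {
            final String uuid;
            final String host;
            final int port;

            SanStoragePool(String uuid, String host, int port) {
                this.uuid = uuid;
                this.host = host;
                this.port = port;
            }
        }

        // createStoragePool may be called often, so remember pools we've already seen
        private final Map<String, SanStoragePool> pools = new HashMap<>();

        public SanStoragePool createStoragePool(String uuid, String host, int port) {
            return pools.computeIfAbsent(uuid, id -> new SanStoragePool(id, host, port));
        }

        // Called when a specific LUN is needed (e.g., while handling AttachVolumeCommand):
        // log the host in to the target and return the resulting block device path.
        public String getPhysicalDisk(String poolUuid, String iqn) throws Exception {
            SanStoragePool pool = pools.get(poolUuid);
            String portal = pool.host + ":" + pool.port;
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
            // by-path names are stable across reboots, unlike /dev/sdX
            return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
        }

        private static void run(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new RuntimeException("command failed: " + String.join(" ", cmd));
            }
        }
    }
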
>> > > >
>> > > >
>> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> > > > <mi...@solidfire.com> wrote:
>> > > > > OK, yeah, the ACL part will be interesting. That is a bit
>> > > > > different
>> > > from
>> > > > how
>> > > > > it works with XenServer and VMware.
>> > > > >
>> > > > > Just to give you an idea how it works in 4.2 with XenServer:
>> > > > >
>> > > > > * The user creates a CS volume (this is just recorded in the
>> > > > cloud.volumes
>> > > > > table).
>> > > > >
>> > > > > * The user attaches the volume as a disk to a VM for the first
>> > > > > time
>> > (if
>> > > > the
>> > > > > storage allocator picks the SolidFire plug-in, the storage
>> > > > > framework
>> > > > invokes
>> > > > > a method on the plug-in that creates a volume on the SAN...info
>> > > > > like
>> > > the
>> > > > IQN
>> > > > > of the SAN volume is recorded in the DB).
>> > > > >
>> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed.
>> > > > > It
>> > > > > determines based on a flag passed in that the storage in question
>> > > > > is
>> > > > > "CloudStack-managed" storage (as opposed to "traditional"
>> > preallocated
>> > > > > storage). This tells it to discover the iSCSI target. Once
>> > > > > discovered
>> > > it
>> > > > > determines if the iSCSI target already contains a storage
>> > > > > repository
>> > > (it
>> > > > > would if this were a re-attach situation). If it does contain an
>> > > > > SR
>> > > > already,
>> > > > > then there should already be one VDI, as well. If there is no SR,
>> > > > > an
>> > SR
>> > > > is
>> > > > > created and a single VDI is created within it (that takes up about
>> > > > > as
>> > > > much
>> > > > > space as was requested for the CloudStack volume).
>> > > > >
>> > > > > * The normal attach-volume logic continues (it depends on the
>> > existence
>> > > > of
>> > > > > an SR and a VDI).
>> > > > >
>> > > > > The VMware case is essentially the same (mainly just substitute
>> > > datastore
>> > > > > for SR and VMDK for VDI).
>> > > > >
>> > > > > In both cases, all hosts in the cluster have discovered the iSCSI
>> > > target,
>> > > > > but only the host that is currently running the VM that is using
>> > > > > the
>> > > VDI
>> > > > (or
>> > > > > VMDK) is actually using the disk.
>> > > > >
>> > > > > Live Migration should be OK because the hypervisors communicate
>> > > > > with
>> > > > > whatever metadata they have on the SR (or datastore).
>> > > > >
>> > > > > I see what you're saying with KVM, though.
>> > > > >
>> > > > > In that case, the hosts are clustered only in CloudStack's eyes.
>> > > > > CS
>> > > > controls
>> > > > > Live Migration. You don't really need a clustered filesystem on
>> > > > > the
>> > > LUN.
>> > > > The
>> > > > > LUN could be handed over raw to the VM using it.
>> > > > >
>> > > > > If there is a way for me to update the ACL list on the SAN to have
>> > > only a
>> > > > > single KVM host have access to the volume, that would be ideal.
>> > > > >
>> > > > > Also, I agree I'll need to use iscsiadm to discover and log in to
>> > > > > the
>> > > > iSCSI
>> > > > > target. I'll also need to take the resultant new device and pass
>> > > > > it
>> > > into
>> > > > the
>> > > > > VM.
>> > > > >
>> > > > > Does this sound reasonable? Please call me out on anything I seem
>> > > > incorrect
>> > > > > about. :)
>> > > > >
>> > > > > Thanks for all the thought on this, Marcus!
>> > > > >
>> > > > >
>> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
>> > shadowsor@gmail.com>
>> > > > > wrote:
>> > > > >>
>> > > > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
>> > > attach
>> > > > >> the disk def to the vm. You may need to do your own
>> > > > >> StorageAdaptor
>> > and
>> > > > run
>> > > > >> iscsiadm commands to accomplish that, depending on how the
>> > > > >> libvirt
>> > > iscsi
>> > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
>> > > works
>> > > > on
>> > > > >> xen at the moment, nor is it ideal.
>> > > > >>
>> > > > >> Your plugin will handle acls as far as which host can see which
>> > > > >> luns
>> > > as
>> > > > >> well, I remember discussing that months ago, so that a disk won't
>> > > > >> be
>> > > > >> connected until the hypervisor has exclusive access, so it will
>> > > > >> be
>> > > safe
>> > > > and
>> > > > >> fence the disk from rogue nodes that cloudstack loses
>> > > > >> connectivity
>> > > > with. It
>> > > > >> should revoke access to everything but the target host... Except
>> > > > >> for
>> > > > during
>> > > > >> migration but we can discuss that later, there's a migration prep
>> > > > process
>> > > > >> where the new host can be added to the acls, and the old host can
>> > > > >> be
>> > > > removed
>> > > > >> post migration.
>> > > > >>
>> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>> > > mike.tutkowski@solidfire.com
>> > > > >
>> > > > >> wrote:
>> > > > >>>
>> > > > >>> Yeah, that would be ideal.
>> > > > >>>
>> > > > >>> So, I would still need to discover the iSCSI target, log in to
>> > > > >>> it,
>> > > then
>> > > > >>> figure out what /dev/sdX was created as a result (and leave it
>> > > > >>> as
>> > is
>> > > -
>> > > > do
>> > > > >>> not format it with any file system...clustered or not). I would
>> > pass
>> > > > that
>> > > > >>> device into the VM.
>> > > > >>>
>> > > > >>> Kind of accurate?
>> > > > >>>
>> > > > >>>
>> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
>> > > shadowsor@gmail.com>
>> > > > >>> wrote:
>> > > > >>>>
>> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
>> > There
>> > > > are
>> > > > >>>> ones that work for block devices rather than files. You can
>> > > > >>>> piggy
>> > > > back off
>> > > > >>>> of the existing disk definitions and attach it to the vm as a
>> > block
>> > > > device.
>> > > > >>>> The definition is an XML string per libvirt XML format. You may
>> > want
>> > > > to use
>> > > > >>>> an alternate path to the disk rather than just /dev/sdx like I
>> > > > mentioned,
>> > > > >>>> there are by-id paths to the block devices, as well as other
>> > > > >>>> ones
>> > > > that will
>> > > > >>>> be consistent and easier for management, not sure how familiar
>> > > > >>>> you
>> > > > are with
>> > > > >>>> device naming on Linux.
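
As a minimal sketch of that, assuming the LUN is already visible on the host
(e.g., after an iscsiadm login): the device path, VM name, and target dev
below are made-up example values, and attachDevice comes from the libvirt
Java bindings.

    // Hot-attach a raw block device (an iSCSI LUN) to a running VM via libvirt.
    // All names and paths here are example values.
    import org.libvirt.Connect;
    import org.libvirt.Domain;

    public class AttachRawLunExample {
        public static void main(String[] args) throws Exception {
            String diskXml =
                "<disk type='block' device='disk'>"
              + "  <driver name='qemu' type='raw' cache='none'/>"
              + "  <source dev='/dev/disk/by-path/"
              + "ip-10.1.1.5:3260-iscsi-iqn.2013-09.com.example:vol1-lun-0'/>"
              + "  <target dev='vdb' bus='virtio'/>"
              + "</disk>";

            Connect conn = new Connect("qemu:///system");
            Domain dom = conn.domainLookupByName("i-2-10-VM");
            dom.attachDevice(diskXml); // the raw LUN shows up in the guest as a virtio disk
        }
    }
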
>> > > > >>>>
>> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>> > > > >>>> <sh...@gmail.com>
>> > > > wrote:
>> > > > >>>>>
>> > > > >>>>> No, as that would rely on virtualized network/iscsi initiator
>> > > inside
>> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
>> > > > hypervisor) as
>> > > > >>>>> a disk to the VM, rather than attaching some image file that
>> > > resides
>> > > > on a
>> > > > >>>>> filesystem, mounted on the host, living on a target.
>> > > > >>>>>
>> > > > >>>>> Actually, if you plan on the storage supporting live migration
>> > > > >>>>> I
>> > > > think
>> > > > >>>>> this is the only way. You can't put a filesystem on it and
>> > > > >>>>> mount
>> > it
>> > > > in two
>> > > > >>>>> places to facilitate migration unless its a clustered
>> > > > >>>>> filesystem,
>> > > in
>> > > > which
>> > > > >>>>> case you're back to shared mount point.
>> > > > >>>>>
>> > > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
>> > with a
>> > > > xen
>> > > > >>>>> specific cluster management, a custom CLVM. They don't use a
>> > > > filesystem
>> > > > >>>>> either.
>> > > > >>>>>
>> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>> > > > >>>>> <mi...@solidfire.com> wrote:
>> > > > >>>>>>
>> > > > >>>>>> When you say, "wire up the lun directly to the vm," do you
>> > > > >>>>>> mean
>> > > > >>>>>> circumventing the hypervisor? I didn't think we could do that
>> > > > >>>>>> in
>> > > CS.
>> > > > >>>>>> OpenStack, on the other hand, always circumvents the
>> > > > >>>>>> hypervisor,
>> > > as
>> > > > far as I
>> > > > >>>>>> know.
>> > > > >>>>>>
>> > > > >>>>>>
>> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
>> > > > shadowsor@gmail.com>
>> > > > >>>>>> wrote:
>> > > > >>>>>>>
>> > > > >>>>>>> Better to wire up the lun directly to the vm unless there is
>> > > > >>>>>>> a
>> > > good
>> > > > >>>>>>> reason not to.
>> > > > >>>>>>>
>> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
>> > shadowsor@gmail.com>
>> > > > >>>>>>> wrote:
>> > > > >>>>>>>>
>> > > > >>>>>>>> You could do that, but as mentioned I think its a mistake
>> > > > >>>>>>>> to
>> > go
>> > > to
>> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
>> > and
>> > > > then putting
>> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2
>> > > > >>>>>>>> or
>> > > even
>> > > > RAW disk
>> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along
>> > > > >>>>>>>> the
>> > > > way, and have
>> > > > >>>>>>>> more overhead with the filesystem and its journaling, etc.
>> > > > >>>>>>>>
>> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>> > > > >>>>>>>> <mi...@solidfire.com> wrote:
>> > > > >>>>>>>>>
>> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
>> > CS.
>> > > > >>>>>>>>>
>> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is by
>> > > > >>>>>>>>> selecting SharedMountPoint and specifying the location of
>> > > > >>>>>>>>> the
>> > > > share.
>> > > > >>>>>>>>>
>> > > > >>>>>>>>> They can set up their share using Open iSCSI by
>> > > > >>>>>>>>> discovering
>> > > their
>> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere
>> > > > >>>>>>>>> on
>> > > > their file
>> > > > >>>>>>>>> system.
>> > > > >>>>>>>>>
>> > > > >>>>>>>>> Would it make sense for me to just do that discovery,
>> > > > >>>>>>>>> logging
>> > > in,
>> > > > >>>>>>>>> and mounting behind the scenes for them and letting the
>> > current
>> > > > code manage
>> > > > >>>>>>>>> the rest as it currently does?
>> > > > >>>>>>>>>
>> > > > >>>>>>>>>
>> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>> > > > >>>>>>>>> <sh...@gmail.com> wrote:
>> > > > >>>>>>>>>>
>> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
>> > catch
>> > > up
>> > > > >>>>>>>>>> on the work done in KVM, but this is basically just disk
>> > > > snapshots + memory
>> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
>> > handled
>> > > > by the SAN,
>> > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
>> > something
>> > > > else. This is
>> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want to
>> > see
>> > > > how others are
>> > > > >>>>>>>>>> planning theirs.
>> > > > >>>>>>>>>>
>> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
>> > > shadowsor@gmail.com
>> > > > >
>> > > > >>>>>>>>>> wrote:
>> > > > >>>>>>>>>>>
>> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
>> > > > >>>>>>>>>>> style
>> > on
>> > > > an
>> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
>> > > > >>>>>>>>>>> format.
>> > > > Otherwise you're
>> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it, creating
>> > > > >>>>>>>>>>> a
>> > > > QCOW2 disk image,
>> > > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
>> > > > >>>>>>>>>>>
>> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
>> > VM,
>> > > > and
>> > > > >>>>>>>>>>> handling snapshots on the San side via the storage
>> > > > >>>>>>>>>>> plugin
>> > is
>> > > > best. My
>> > > > >>>>>>>>>>> impression from the storage plugin refactor was that
>> > > > >>>>>>>>>>> there
>> > > was
>> > > > a snapshot
>> > > > >>>>>>>>>>> service that would allow the San to handle snapshots.
>> > > > >>>>>>>>>>>
>> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
>> > > > shadowsor@gmail.com>
>> > > > >>>>>>>>>>> wrote:
>> > > > >>>>>>>>>>>>
>> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
>> > end,
>> > > > if
>> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
>> > > > >>>>>>>>>>>> call
>> > > > your plugin for
>> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
>> > far
>> > > > as space, that
>> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
>> > carve
>> > > > out luns from a
>> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
>> > > > independent of the
>> > > > >>>>>>>>>>>> LUN size the host sees.
>> > > > >>>>>>>>>>>>
>> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>> Hey Marcus,
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
>> > > > >>>>>>>>>>>>> won't
>> > > > work
>> > > > >>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
>> > VDI
>> > > > for
>> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage repository
>> > > > >>>>>>>>>>>>> as
>> > > the
>> > > > volume is on.
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
>> > > > >>>>>>>>>>>>> XenServer
>> > > and
>> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>> > snapshots
>> > > > in 4.2) is I'd
>> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the user
>> > > > requested for the
>> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
>> > > > >>>>>>>>>>>>> thinly
>> > > > provisions volumes,
>> > > > >>>>>>>>>>>>> so the space is not actually used unless it needs to
>> > > > >>>>>>>>>>>>> be).
>> > > > The CloudStack
>> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
>> > until a
>> > > > hypervisor
>> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on
>> > > > >>>>>>>>>>>>> the
>> > > > SAN volume.
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
>> > > > >>>>>>>>>>>>> creation
>> > of
>> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even
>> > > > >>>>>>>>>>>>> if
>> > > > there were support
>> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
>> > > > >>>>>>>>>>>>> iSCSI
>> > > > target), then I
>> > > > >>>>>>>>>>>>> don't see how using this model will work.
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
>> > > works
>> > > > >>>>>>>>>>>>> with DIR?
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>> What do you think?
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>> Thanks
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> > > > >>>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
>> > > today.
>> > > > >>>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
>> > well
>> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>> > > > >>>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> > > > >>>>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe
>> > > > >>>>>>>>>>>>>>> it
>> > > just
>> > > > >>>>>>>>>>>>>>> acts like a
>> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
>> > > > end-user
>> > > > >>>>>>>>>>>>>>> is
>> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
>> > hosts
>> > > > can
>> > > > >>>>>>>>>>>>>>> access,
>> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
>> > > > storage.
>> > > > >>>>>>>>>>>>>>> It could
>> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>> > > > >>>>>>>>>>>>>>> filesystem,
>> > > > >>>>>>>>>>>>>>> cloudstack just
>> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
>> > > > >>>>>>>>>>>>>>> images.
>> > > > >>>>>>>>>>>>>>>
>> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
>> > same
>> > > > >>>>>>>>>>>>>>> > time.
>> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
>> > > > >>>>>>>>>>>>>>> >
>> > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>> > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>> > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>> > > > >>>>>>>>>>>>>>> >>
>> > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
>> > > > >>>>>>>>>>>>>>> >> -----------------------------------------
>> > > > >>>>>>>>>>>>>>> >> default              active     yes
>> > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
>> > > > >>>>>>>>>>>>>>> >>
>> > > > >>>>>>>>>>>>>>> >>
>> > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>> > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>> > > > >>>>>>>>>>>>>>> >>>
>> > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>> > > > >>>>>>>>>>>>>>> >>>
>> > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
>> > > > >>>>>>>>>>>>>>> >>>
>> > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
>> > based
>> > > on
>> > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
>> > > > >>>>>>>>>>>>>>> >>>
>> > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
>> > LUN,
>> > > > so
>> > > > >>>>>>>>>>>>>>> >>> there would only
>> > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>> > > (libvirt)
>> > > > >>>>>>>>>>>>>>> >>> storage pool.
>> > > > >>>>>>>>>>>>>>> >>>
>> > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
>> > > > >>>>>>>>>>>>>>> >>> iSCSI
>> > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
>> > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
>> > > > >>>>>>>>>>>>>>> >>> libvirt
>> > > does
>> > > > >>>>>>>>>>>>>>> >>> not support
>> > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>> > > > >>>>>>>>>>>>>>> >>>
>> > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see
>> > > > >>>>>>>>>>>>>>> >>> if
>> > > > libvirt
>> > > > >>>>>>>>>>>>>>> >>> supports
>> > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
>> > since
>> > > > >>>>>>>>>>>>>>> >>> each one of its
>> > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>> > > > targets/LUNs).
>> > > > >>>>>>>>>>>>>>> >>>
>> > > > >>>>>>>>>>>>>>> >>>
>> > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>> > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>> > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>> > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>         }
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>         @Override
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>         }
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>     }
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
>> > > > >>>>>>>>>>>>>>> >>>> currently
>> > > being
>> > > > >>>>>>>>>>>>>>> >>>> used, but I'm
>> > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
>> > > > >>>>>>>>>>>>>>> >>>> someone
>> > > > >>>>>>>>>>>>>>> >>>> selects the
>> > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI,
>> > > > >>>>>>>>>>>>>>> >>>> is
>> > > > that
>> > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
>> > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>> Thanks!
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
>> > > > >>>>>>>>>>>>>>> >>>> Sorensen
>> > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>> > > > >>>>>>>>>>>>>>> >>>> wrote:
>> > > > >>>>>>>>>>>>>>> >>>>>
>> > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
>> > > > >>>>>>>>>>>>>>> >>>>>
>> > > > >>>>>>>>>>>>>>> >>>>>
>> > > http://libvirt.org/storage.html#StorageBackendISCSI
>> > > > >>>>>>>>>>>>>>> >>>>>
>> > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
>> > server,
>> > > > and
>> > > > >>>>>>>>>>>>>>> >>>>> cannot be
>> > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
>> > > > >>>>>>>>>>>>>>> >>>>> believe
>> > > your
>> > > > >>>>>>>>>>>>>>> >>>>> plugin will take
>> > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging
>> > > > >>>>>>>>>>>>>>> >>>>> in
>> > > and
>> > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
>> > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work
>> > > > >>>>>>>>>>>>>>> >>>>> in
>> > the
>> > > > Xen
>> > > > >>>>>>>>>>>>>>> >>>>> stuff).
>> > > > >>>>>>>>>>>>>>> >>>>>
>> > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>> > > > >>>>>>>>>>>>>>> >>>>> provides
>> > a
>> > > > 1:1
>> > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
>> > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device
>> > > > >>>>>>>>>>>>>>> >>>>> as
>> > a
>> > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
>> > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
>> > about
>> > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
>> > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your
>> > own
>> > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
>> > > > >>>>>>>>>>>>>>> >>>>> rather than changing
>> > > > >>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>> > >  We
>> > > > >>>>>>>>>>>>>>> >>>>> can cross that
>> > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
>> > > > >>>>>>>>>>>>>>> >>>>>
>> > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
>> > > > >>>>>>>>>>>>>>> >>>>> java
>> > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
>> > > > >>>>>>>>>>>>>>> >>>>>
>> > > > >>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/ Normally,
>> > > > >>>>>>>>>>>>>>> >>>>> you'll see a
>> > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
>> > that
>> > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
>> > > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see
>> > > > >>>>>>>>>>>>>>> >>>>> how
>> > > that
>> > > > >>>>>>>>>>>>>>> >>>>> is done for
>> > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
>> > > > >>>>>>>>>>>>>>> >>>>> java
>> > > code
>> > > > >>>>>>>>>>>>>>> >>>>> to see if you
>> > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
>> > > storage
>> > > > >>>>>>>>>>>>>>> >>>>> pools before you
>> > > > >>>>>>>>>>>>>>> >>>>> get started.
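
A throwaway test along those lines might look like the following, assuming
libvirt-java is on the classpath. The portal IP and IQN are example values,
and note that this maps one libvirt pool to one iSCSI target:

    // Define/start an iSCSI-backed libvirt pool and list the LUNs it exposes.
    // Per the libvirt docs quoted above, the LUNs must already exist on the SAN.
    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class IscsiPoolTest {
        public static void main(String[] args) throws Exception {
            String poolXml =
                "<pool type='iscsi'>"
              + "  <name>cs-vol-1</name>"
              + "  <source>"
              + "    <host name='10.1.1.5'/>"
              + "    <device path='iqn.2013-09.com.example:vol1'/>"
              + "  </source>"
              + "  <target><path>/dev/disk/by-path</path></target>"
              + "</pool>";

            Connect conn = new Connect("qemu:///system");
            StoragePool pool = conn.storagePoolCreateXML(poolXml, 0); // logs the host in to the target
            pool.refresh(0);
            for (String vol : pool.listVolumes()) {
                System.out.println("found LUN: " + vol);
            }
        }
    }

If that behaves as hoped, the same XML could presumably go through
storagePoolDefineXML instead so the pool persists, which would answer the
one-pool-per-target question above.
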
>> > > > >>>>>>>>>>>>>>> >>>>>
>> > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
>> > > > >>>>>>>>>>>>>>> >>>>> Tutkowski
>> > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>> > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
>> > > > >>>>>>>>>>>>>>> >>>>> > more,
>> > > but
>> > > > >>>>>>>>>>>>>>> >>>>> > you figure it
>> > > > >>>>>>>>>>>>>>> >>>>> > supports
>> > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>> > > > >>>>>>>>>>>>>>> >>>>> > targets,
>> > > > >>>>>>>>>>>>>>> >>>>> > right?
>> > > > >>>>>>>>>>>>>>> >>>>> >
>> > > > >>>>>>>>>>>>>>> >>>>> >
>> > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>> > > > >>>>>>>>>>>>>>> >>>>> > Tutkowski
>> > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
>> > > classes
>> > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
>> > > > >>>>>>>>>>>>>>> >>>>> >> last
>> > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > > > >>>>>>>>>>>>>>> >>>>> >>
>> > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
>> > Sorensen
>> > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>> > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
>> > > > >>>>>>>>>>>>>>> >>>>> >>> iscsi
>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>> > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
>> > > > >>>>>>>>>>>>>>> >>>>> >>> packages
>> > > for
>> > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>> > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
>> > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
>> > > > >>>>>>>>>>>>>>> >>>>> >>> initiator
>> > > > login.
>> > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
>> > > > >>>>>>>>>>>>>>> >>>>> >>> sent
>> > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
>> > > > >>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>> > and
>> > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>> > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
>> > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>> > > > >>>>>>>>>>>>>>> >>>>> >>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>> > > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>> > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> release
>> > I
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
>> > > > framework
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> times
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
>> > delete
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> 1:1
>> > > > mapping
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> > > > admin
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
>> > would
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
>> > needed
>> > > > to
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> could
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM.
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> might
>> > > work
>> > > > on
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> still
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
>> > will
>> > > > need
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
>> > > expect
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
>> > this
>> > > to
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
>> > > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> --
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Inc.
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
>> > > > >>>>>>>>>>>>>>> >>>>> >>>> cloud™

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Yeah, I should have clarified what I was referring to.

As you mentioned in your last sentence, I was just talking about on the
hypervisor side (responding to attach and detach commands).

On the storage side, the storage framework invokes my plug-in when it needs
a volume created, deleted, etc. By the time we get to the hypervisor side
to attach the volume, it already exists on the SAN.

As an example, if you go to attach a volume to a VM for the first time, the
storage framework will ask my plug-in to create a volume on its SAN.


On Tue, Sep 17, 2013 at 8:46 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> You respond to more than attach and detach, right? Don't you create luns as
> well? Or are you just referring to the hypervisor stuff?
> On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > Hi Marcus,
> >
> > I never need to respond to a CreateStoragePool call for either XenServer
> or
> > VMware.
> >
> > What happens is I respond only to the Attach- and Detach-volume commands.
> >
> > Let's say an attach comes in:
> >
> > In this case, I check to see if the storage is "managed." Talking
> XenServer
> > here, if it is, I log in to the LUN that is the disk we want to attach.
> > After, if this is the first time attaching this disk, I create an SR and
> a
> > VDI within the SR. If it is not the first time attaching this disk, the
> LUN
> > already has the SR and VDI on it.
> >
> > Once this is done, I let the normal "attach" logic run because this logic
> > expected an SR and a VDI and now it has it.
> >
> > It's the same thing for VMware: Just substitute datastore for SR and VMDK
> > for VDI.
> >
> > Does that make sense?
> >
> > Thanks!
> >
> >
> > On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> >
> > > What do you do with Xen? I imagine the user enters the SAN details when
> > > registering the pool? And the pool details are basically just
> > > instructions on how to log into a target, correct?
> > >
> > > You can choose to log in a KVM host to the target during
> > createStoragePool
> > > and save the pool in a map, or just save the pool info in a map for
> > future
> > > reference by uuid, for when you do need to log in. The
> createStoragePool
> > > then just becomes a way to save the pool info to the agent. Personally,
> > I'd
> > > log in on the pool create and look/scan for specific luns when they're
> > > needed, but I haven't thought it through thoroughly. I just say that
> > mainly
> > > because login only happens once, the first time the pool is used, and
> > every
> > > other storage command is about discovering new luns or maybe
> > > deleting/disconnecting luns no longer needed. On the other hand, you
> > could
> > > do all of the above: log in on pool create, then also check if you're
> > > logged in on other commands and log in if you've lost connection.
> > >
> > > With Xen, what does your registered pool show in the UI for avail/used
> > > capacity, and how does it get that info? I assume there is some sort of
> > > disk pool that the luns are carved from, and that your plugin is called
> > to
> > > talk to the SAN and expose to the user how much of that pool has been
> > > allocated. Knowing how you already solve these problems with Xen will
> > help
> > > figure out what to do with KVM.
> > >
> > > If this is the case, I think the plugin can continue to handle it
> rather
> > > than getting details from the agent. I'm not sure if that means nulls
> are
> > > OK for these on the agent side or what, I need to look at the storage
> > > plugin arch more closely.
> > > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com>
> > > wrote:
> > >
> > > > Hey Marcus,
> > > >
> > > > I'm reviewing your e-mails as I implement the necessary methods in
> new
> > > > classes.
> > > >
> > > > "So, referencing StorageAdaptor.java, createStoragePool accepts all
> of
> > > > the pool data (host, port, name, path) which would be used to log the
> > > > host into the initiator."
> > > >
> > > > Can you tell me, in my case, since a storage pool (primary storage)
> is
> > > > actually the SAN, I wouldn't really be logging into anything at this
> > > point,
> > > > correct?
> > > >
> > > > Also, what kind of capacity, available, and used bytes make sense to
> > > report
> > > > for KVMStoragePool (since KVMStoragePool represents the SAN in my
> case
> > > and
> > > > not an individual LUN)?
> > > >
> > > > Thanks!
> > > >
> > > >
> > > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> shadowsor@gmail.com
> > > > >wrote:
> > > >
> > > > > Ok, KVM will be close to that, of course, because only the
> hypervisor
> > > > > classes differ, the rest is all mgmt server. Creating a volume is
> > just
> > > > > a db entry until it's deployed for the first time.
> > AttachVolumeCommand
> > > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > > > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> > > > > StorageAdaptor) to log in the host to the target and then you have
> a
> > > > > block device.  Maybe libvirt will do that for you, but my quick
> read
> > > > > made it sound like the iscsi libvirt pool type is actually a pool,
> > not
> > > > > a lun or volume, so you'll need to figure out if that works or if
> > > > > you'll have to use iscsiadm commands.
> > > > >
> > > > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> > > > > doesn't really manage your pool the way you want), you're going to
> > > > > have to create a version of KVMStoragePool class and a
> StorageAdaptor
> > > > > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> > > > > implementing all of the methods, then in KVMStorageManager.java
> > > > > there's a "_storageMapper" map. This is used to select the correct
> > > > > adaptor, you can see in this file that every call first pulls the
> > > > > correct adaptor out of this map via getStorageAdaptor. So you can
> see
> > > > > a comment in this file that says "add other storage adaptors here",
> > > > > where it puts to this map, this is where you'd register your
> adaptor.
> > > > >
> > > > > So, referencing StorageAdaptor.java, createStoragePool accepts all
> of
> > > > > the pool data (host, port, name, path) which would be used to log
> the
> > > > > host into the initiator. I *believe* the method getPhysicalDisk
> will
> > > > > need to do the work of attaching the lun.  AttachVolumeCommand
> calls
> > > > > this and then creates the XML diskdef and attaches it to the VM.
> Now,
> > > > > one thing you need to know is that createStoragePool is called
> often,
> > > > > sometimes just to make sure the pool is there. You may want to
> create
> > > > > a map in your adaptor class and keep track of pools that have been
> > > > > created, LibvirtStorageAdaptor doesn't have to do this because it
> > asks
> > > > > libvirt about which storage pools exist. There are also calls to
> > > > > refresh the pool stats, and all of the other calls can be seen in
> the
> > > > > StorageAdaptor as well. There's a createPhysicalDisk, clone, etc,
> > but
> > > > > it's probably a hold-over from 4.1, as I have the vague idea that
> > > > > volumes are created on the mgmt server via the plugin now, so
> > whatever
> > > > > doesn't apply can just be stubbed out (or optionally
> > > > > extended/reimplemented here, if you don't mind the hosts talking to
> > > > > the san api).
> > > > >
> > > > > There is a difference between attaching new volumes and launching a
> > VM
> > > > > with existing volumes.  In the latter case, the VM definition that
> > was
> > > > > passed to the KVM agent includes the disks, (StartCommand).
> > > > >
> > > > > I'd be interested in how your pool is defined for Xen, I imagine it
> > > > > would need to be kept the same. Is it just a definition to the SAN
> > > > > (ip address or some such, port number) and perhaps a volume pool
> > name?
> > > > >
> > > > > > If there is a way for me to update the ACL list on the SAN to
> have
> > > > only a
> > > > > > single KVM host have access to the volume, that would be ideal.
> > > > >
> > > > > That depends on your SAN API.  I was under the impression that the
> > > > > storage plugin framework allowed for acls, or for you to do
> whatever
> > > > > you want for create/attach/delete/snapshot, etc. You'd just call
> your
> > > > > SAN API with the host info for the ACLs prior to when the disk is
> > > > > attached (or the VM is started).  I'd have to look more at the
> > > > > framework to know the details, in 4.1 I would do this in
> > > > > getPhysicalDisk just prior to connecting up the LUN.
> > > > >
> > > > >
> > > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > > > > <mi...@solidfire.com> wrote:
> > > > > > OK, yeah, the ACL part will be interesting. That is a bit
> different
> > > > from
> > > > > how
> > > > > > it works with XenServer and VMware.
> > > > > >
> > > > > > Just to give you an idea how it works in 4.2 with XenServer:
> > > > > >
> > > > > > * The user creates a CS volume (this is just recorded in the
> > > > > cloud.volumes
> > > > > > table).
> > > > > >
> > > > > > * The user attaches the volume as a disk to a VM for the first
> time
> > > (if
> > > > > the
> > > > > > storage allocator picks the SolidFire plug-in, the storage
> > framework
> > > > > invokes
> > > > > > a method on the plug-in that creates a volume on the SAN...info
> > like
> > > > the
> > > > > IQN
> > > > > > of the SAN volume is recorded in the DB).
> > > > > >
> > > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed.
> It
> > > > > > determines based on a flag passed in that the storage in question
> > is
> > > > > > "CloudStack-managed" storage (as opposed to "traditional"
> > > preallocated
> > > > > > storage). This tells it to discover the iSCSI target. Once
> > discovered
> > > > it
> > > > > > determines if the iSCSI target already contains a storage
> > repository
> > > > (it
> > > > > > would if this were a re-attach situation). If it does contain an
> SR
> > > > > already,
> > > > > > then there should already be one VDI, as well. If there is no SR,
> > an
> > > SR
> > > > > is
> > > > > > created and a single VDI is created within it (that takes up
> about
> > as
> > > > > much
> > > > > > space as was requested for the CloudStack volume).
> > > > > >
> > > > > > * The normal attach-volume logic continues (it depends on the
> > > existence
> > > > > of
> > > > > > an SR and a VDI).
> > > > > >
> > > > > > The VMware case is essentially the same (mainly just substitute
> > > > datastore
> > > > > > for SR and VMDK for VDI).
> > > > > >
> > > > > > In both cases, all hosts in the cluster have discovered the iSCSI
> > > > target,
> > > > > > but only the host that is currently running the VM that is using
> > the
> > > > VDI
> > > > > (or
> > > > > > VMDK) is actually using the disk.
> > > > > >
> > > > > > Live Migration should be OK because the hypervisors communicate
> > with
> > > > > > whatever metadata they have on the SR (or datastore).
> > > > > >
> > > > > > I see what you're saying with KVM, though.
> > > > > >
> > > > > > In that case, the hosts are clustered only in CloudStack's eyes.
> CS
> > > > > controls
> > > > > > Live Migration. You don't really need a clustered filesystem on
> the
> > > > LUN.
> > > > > The
> > > > > > LUN could be handed over raw to the VM using it.
> > > > > >
> > > > > > If there is a way for me to update the ACL list on the SAN to
> have
> > > > only a
> > > > > > single KVM host have access to the volume, that would be ideal.
> > > > > >
> > > > > > Also, I agree I'll need to use iscsiadm to discover and log in to
> > the
> > > > > iSCSI
> > > > > > target. I'll also need to take the resultant new device and pass
> it
> > > > into
> > > > > the
> > > > > > VM.
> > > > > >
> > > > > > Does this sound reasonable? Please call me out on anything I seem
> > > > > incorrect
> > > > > > about. :)
> > > > > >
> > > > > > Thanks for all the thought on this, Marcus!
> > > > > >
> > > > > >
> > > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> > > shadowsor@gmail.com>
> > > > > > wrote:
> > > > > >>
> > > > > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
> > > > attach
> > > > > >> the disk def to the vm. You may need to do your own
> StorageAdaptor
> > > and
> > > > > run
> > > > > >> iscsiadm commands to accomplish that, depending on how the
> libvirt
> > > > iscsi
> > > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how
> it
> > > > works
> > > > > on
> > > > > >> xen at the moment, nor is it ideal.
> > > > > >>
> > > > > >> Your plugin will handle acls as far as which host can see which
> > luns
> > > > as
> > > > > >> well, I remember discussing that months ago, so that a disk
> won't
> > be
> > > > > >> connected until the hypervisor has exclusive access, so it will
> be
> > > > safe
> > > > > and
> > > > > >> fence the disk from rogue nodes that cloudstack loses
> connectivity
> > > > > with. It
> > > > > >> should revoke access to everything but the target host... Except
> > for
> > > > > during
> > > > > >> migration but we can discuss that later, there's a migration
> prep
> > > > > process
> > > > > >> where the new host can be added to the acls, and the old host
> can
> > be
> > > > > removed
> > > > > >> post migration.
> > > > > >>
> > > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > > > mike.tutkowski@solidfire.com
> > > > > >
> > > > > >> wrote:
> > > > > >>>
> > > > > >>> Yeah, that would be ideal.
> > > > > >>>
> > > > > >>> So, I would still need to discover the iSCSI target, log in to
> > it,
> > > > then
> > > > > >>> figure out what /dev/sdX was created as a result (and leave it
> as
> > > is
> > > > -
> > > > > do
> > > > > >>> not format it with any file system...clustered or not). I would
> > > pass
> > > > > that
> > > > > >>> device into the VM.
> > > > > >>>
> > > > > >>> Kind of accurate?
> > > > > >>>
> > > > > >>>
> > > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > > > shadowsor@gmail.com>
> > > > > >>> wrote:
> > > > > >>>>
> > > > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
> > > There
> > > > > are
> > > > > >>>> ones that work for block devices rather than files. You can
> > piggy
> > > > > back off
> > > > > >>>> of the existing disk definitions and attach it to the vm as a
> > > block
> > > > > device.
> > > > > >>>> The definition is an XML string per libvirt XML format. You
> may
> > > want
> > > > > to use
> > > > > >>>> an alternate path to the disk rather than just /dev/sdx like I
> > > > > mentioned,
> > > > > >>>> there are by-id paths to the block devices, as well as other
> > ones
> > > > > that will
> > > > > >>>> be consistent and easier for management, not sure how familiar
> > you
> > > > > are with
> > > > > >>>> device naming on Linux.
> > > > > >>>>
> > > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <
> shadowsor@gmail.com
> > >
> > > > > wrote:
> > > > > >>>>>
> > > > > >>>>> No, as that would rely on virtualized network/iscsi initiator
> > > > inside
> > > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
> > > > > hypervisor) as
> > > > > >>>>> a disk to the VM, rather than attaching some image file that
> > > > resides
> > > > > on a
> > > > > >>>>> filesystem, mounted on the host, living on a target.
> > > > > >>>>>
> > > > > >>>>> Actually, if you plan on the storage supporting live
> migration
> > I
> > > > > think
> > > > > >>>>> this is the only way. You can't put a filesystem on it and
> > mount
> > > it
> > > > > in two
> > > > > >>>>> places to facilitate migration unless its a clustered
> > filesystem,
> > > > in
> > > > > which
> > > > > >>>>> case you're back to shared mount point.
> > > > > >>>>>
> > > > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
> > > with a
> > > > > xen
> > > > > >>>>> specific cluster management, a custom CLVM. They don't use a
> > > > > filesystem
> > > > > >>>>> either.
> > > > > >>>>>
> > > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > > > > >>>>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>
> > > > > >>>>>> When you say, "wire up the lun directly to the vm," do you
> > mean
> > > > > >>>>>> circumventing the hypervisor? I didn't think we could do
> that
> > in
> > > > CS.
> > > > > >>>>>> OpenStack, on the other hand, always circumvents the
> > hypervisor,
> > > > as
> > > > > far as I
> > > > > >>>>>> know.
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > > > > shadowsor@gmail.com>
> > > > > >>>>>> wrote:
> > > > > >>>>>>>
> > > > > >>>>>>> Better to wire up the lun directly to the vm unless there
> is
> > a
> > > > good
> > > > > >>>>>>> reason not to.
> > > > > >>>>>>>
> > > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> > > shadowsor@gmail.com>
> > > > > >>>>>>> wrote:
> > > > > >>>>>>>>
> > > > > >>>>>>>> You could do that, but as mentioned I think its a mistake
> to
> > > go
> > > > to
> > > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to
> luns
> > > and
> > > > > then putting
> > > > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2
> or
> > > > even
> > > > > RAW disk
> > > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along
> > the
> > > > > way, and have
> > > > > >>>>>>>> more overhead with the filesystem and its journaling, etc.
> > > > > >>>>>>>>
> > > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > > > > >>>>>>>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM
> with
> > > CS.
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is
> by
> > > > > >>>>>>>>> selecting SharedMountPoint and specifying the location of
> > the
> > > > > share.
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> They can set up their share using Open iSCSI by
> discovering
> > > > their
> > > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it
> somewhere
> > on
> > > > > their file
> > > > > >>>>>>>>> system.
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> Would it make sense for me to just do that discovery,
> > logging
> > > > in,
> > > > > >>>>>>>>> and mounting behind the scenes for them and letting the
> > > current
> > > > > code manage
> > > > > >>>>>>>>> the rest as it currently does?
> > > > > >>>>>>>>>
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > > > > >>>>>>>>> <sh...@gmail.com> wrote:
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
> > > catch
> > > > up
> > > > > >>>>>>>>>> on the work done in KVM, but this is basically just disk
> > > > > snapshots + memory
> > > > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
> > > handled
> > > > > by the SAN,
> > > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
> > > something
> > > > > else. This is
> > > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want
> to
> > > see
> > > > > how others are
> > > > > >>>>>>>>>> planning theirs.
> > > > > >>>>>>>>>>
> > > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > > > shadowsor@gmail.com
> > > > > >
> > > > > >>>>>>>>>> wrote:
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
> > style
> > > on
> > > > > an
> > > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> > format.
> > > > > Otherwise you're
> > > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it,
> creating a
> > > > > QCOW2 disk image,
> > > > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to
> the
> > > VM,
> > > > > and
> > > > > >>>>>>>>>>> handling snapshots on the San side via the storage
> plugin
> > > is
> > > > > best. My
> > > > > >>>>>>>>>>> impression from the storage plugin refactor was that
> > there
> > > > was
> > > > > a snapshot
> > > > > >>>>>>>>>>> service that would allow the San to handle snapshots.
> > > > > >>>>>>>>>>>
> > > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > > > > shadowsor@gmail.com>
> > > > > >>>>>>>>>>> wrote:
> > > > > >>>>>>>>>>>>
> > > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
> back
> > > end,
> > > > > if
> > > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
> > call
> > > > > your plugin for
> > > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic.
> As
> > > far
> > > > > as space, that
> > > > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
> > > carve
> > > > > out luns from a
> > > > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool and
> is
> > > > > independent of the
> > > > > >>>>>>>>>>>> LUN size the host sees.
> > > > > >>>>>>>>>>>>
> > > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> Hey Marcus,
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
> > won't
> > > > > work
> > > > > >>>>>>>>>>>>> when you take into consideration hypervisor
> snapshots?
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot,
> the
> > > VDI
> > > > > for
> > > > > >>>>>>>>>>>>> the snapshot is placed on the same storage repository
> > as
> > > > the
> > > > > volume is on.
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> > XenServer
> > > > and
> > > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> > > snapshots
> > > > > in 4.2) is I'd
> > > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the
> user
> > > > > requested for the
> > > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
> thinly
> > > > > provisions volumes,
> > > > > >>>>>>>>>>>>> so the space is not actually used unless it needs to
> > be).
> > > > > The CloudStack
> > > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
> > > until a
> > > > > hypervisor
> > > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on
> > the
> > > > > SAN volume.
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> > creation
> > > of
> > > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even
> > if
> > > > > there were support
> > > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
> > iSCSI
> > > > > target), then I
> > > > > >>>>>>>>>>>>> don't see how using this model will work.
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way
> this
> > > > works
> > > > > >>>>>>>>>>>>> with DIR?
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> What do you think?
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> Thanks
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
> access
> > > > today.
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
> > > well
> > > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> > > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > > > >>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe
> it
> > > > just
> > > > > >>>>>>>>>>>>>>> acts like a
> > > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that.
> The
> > > > > end-user
> > > > > >>>>>>>>>>>>>>> is
> > > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
> > > hosts
> > > > > can
> > > > > >>>>>>>>>>>>>>> access,
> > > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing
> the
> > > > > storage.
> > > > > >>>>>>>>>>>>>>> It could
> > > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
> filesystem,
> > > > > >>>>>>>>>>>>>>> cloudstack just
> > > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM
> images.
> > > > > >>>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> > > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at
> the
> > > same
> > > > > >>>>>>>>>>>>>>> > time.
> > > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > > > > >>>>>>>>>>>>>>> >
> > > > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> > > > > >>>>>>>>>>>>>>> >>
> > > > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> > > > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> > > > > >>>>>>>>>>>>>>> >> -----------------------------------------
> > > > > >>>>>>>>>>>>>>> >> default              active     yes
> > > > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> > > > > >>>>>>>>>>>>>>> >>
> > > > > >>>>>>>>>>>>>>> >>
> > > > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
> > > based
> > > > on
> > > > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have
> one
> > > LUN,
> > > > > so
> > > > > >>>>>>>>>>>>>>> >>> there would only
> > > > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> > > > (libvirt)
> > > > > >>>>>>>>>>>>>>> >>> storage pool.
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
> iSCSI
> > > > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> > > > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
> > libvirt
> > > > does
> > > > > >>>>>>>>>>>>>>> >>> not support
> > > > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see
> if
> > > > > libvirt
> > > > > >>>>>>>>>>>>>>> >>> supports
> > > > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
> > > since
> > > > > >>>>>>>>>>>>>>> >>> each one of its
> > > > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> > > > > targets/LUNs).
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>>
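As a rough sketch of that one-pool-per-target idea (the pool name, portal address, and IQN below are made-up placeholders, not values from this thread): defining and starting an iSCSI pool through the libvirt Java bindings makes libvirt log the host into the target and expose each LUN it finds as a storage volume, so a single-LUN target should surface exactly one volume.

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class IscsiPoolSketch {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");

            // One libvirt storage pool per iSCSI target (placeholder host/IQN).
            String poolXml =
                "<pool type='iscsi'>" +
                "  <name>cs-vol-0001</name>" +
                "  <source>" +
                "    <host name='10.0.0.5'/>" +
                "    <device path='iqn.2010-01.com.solidfire:cs-vol-0001'/>" +
                "  </source>" +
                "  <target><path>/dev/disk/by-path</path></target>" +
                "</pool>";

            // Starting the pool logs the host into the target and scans it;
            // every LUN libvirt finds becomes a storage volume in the pool.
            StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);
            pool.refresh(0);

            for (String volName : pool.listVolumes()) {
                // With a single-LUN target this prints exactly one device path,
                // e.g. /dev/disk/by-path/ip-10.0.0.5:3260-iscsi-<iqn>-lun-0
                System.out.println(pool.storageVolLookupByName(volName).getPath());
            }

            conn.close();
        }
    }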
> > > > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> > > > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> > > > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         }
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         @Override
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>         }
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>     }
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
> currently
> > > > being
> > > > > >>>>>>>>>>>>>>> >>>> used, but I'm
> > > > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
> > someone
> > > > > >>>>>>>>>>>>>>> >>>> selects the
> > > > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
> iSCSI,
> > is
> > > > > that
> > > > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> > > > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> Thanks!
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
> Sorensen
> > > > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> > > > > >>>>>>>>>>>>>>> >>>> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > http://libvirt.org/storage.html#StorageBackendISCSI
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
> > > server,
> > > > > and
> > > > > >>>>>>>>>>>>>>> >>>>> cannot be
> > > > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
> believe
> > > > your
> > > > > >>>>>>>>>>>>>>> >>>>> plugin will take
> > > > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
> logging
> > in
> > > > and
> > > > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> > > > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work
> in
> > > the
> > > > > Xen
> > > > > >>>>>>>>>>>>>>> >>>>> stuff).
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
> > provides
> > > a
> > > > > 1:1
> > > > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> > > > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device
> > as
> > > a
> > > > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> > > > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
> > > about
> > > > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> > > > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write
> your
> > > own
> > > > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> > > > > >>>>>>>>>>>>>>> >>>>> rather than changing
> > LibvirtStorageAdaptor.java.
> > > >  We
> > > > > >>>>>>>>>>>>>>> >>>>> can cross that
> > > > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> > > > > >>>>>>>>>>>>>>> >>>>>
> > > > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
> > java
> > > > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> > > > > >>>>>>>>>>>>>>> >>>>>
> > http://libvirt.org/sources/java/javadoc/ Normally,
> > > > > >>>>>>>>>>>>>>> >>>>> you'll see a
> > > > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
> > > that
> > > > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> > > > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see
> > how
> > > > that
> > > > > >>>>>>>>>>>>>>> >>>>> is done for
> > > > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
> > java
> > > > code
> > > > > >>>>>>>>>>>>>>> >>>>> to see if you
> > > > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
> > > > storage
> > > > > >>>>>>>>>>>>>>> >>>>> pools before you
> > > > > >>>>>>>>>>>>>>> >>>>> get started.
> > > > > >>>>>>>>>>>>>>> >>>>>
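A first test along those lines can be as small as connecting and dumping whatever storage pools and volumes libvirt already reports (a sketch only, assuming the libvirt Java bindings are on the classpath and qemu:///system is reachable):

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class ListLibvirtPools {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");

            // Dump every active pool and the volumes it contains -- handy for
            // verifying that several iSCSI pools (one per target) can coexist.
            for (String poolName : conn.listStoragePools()) {
                StoragePool pool = conn.storagePoolLookupByName(poolName);
                System.out.println("pool: " + poolName);
                for (String volName : pool.listVolumes()) {
                    System.out.println("  volume: " + volName);
                }
            }

            conn.close();
        }
    }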
> > > > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
> Tutkowski
> > > > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
> > more,
> > > > but
> > > > > >>>>>>>>>>>>>>> >>>>> > you figure it
> > > > > >>>>>>>>>>>>>>> >>>>> > supports
> > > > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
> > targets,
> > > > > >>>>>>>>>>>>>>> >>>>> > right?
> > > > > >>>>>>>>>>>>>>> >>>>> >
> > > > > >>>>>>>>>>>>>>> >>>>> >
> > > > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
> > Tutkowski
> > > > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
> > > > classes
> > > > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> > > > > >>>>>>>>>>>>>>> >>>>> >> last
> > > > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
> > > Sorensen
> > > > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> > > > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
> > iscsi
> > > > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> > > > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
> > packages
> > > > for
> > > > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> > > > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> > > > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
> > initiator
> > > > > login.
> > > > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> > > > > >>>>>>>>>>>>>>> >>>>> >>> sent
> > > > > >>>>>>>>>>>>>>> >>>>> >>> previously about
> LibvirtStorageAdaptor.java
> > > and
> > > > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> > > > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> > > > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
> > > > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> > > > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
> > release
> > > I
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> > > > > framework
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> times
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
> > > delete
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a
> 1:1
> > > > > mapping
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected
> > the
> > > > > admin
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
> > > would
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> root and
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
> > > needed
> > > > > to
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
> > could
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
> > KVM.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
> might
> > > > work
> > > > > on
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> still
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
> > > will
> > > > > need
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have
> to
> > > > expect
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
> > > this
> > > > to
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> work?
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> > > > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> --
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
> > Inc.
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> > > > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
> > cloud™
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >>
> > > > > >>>>>>>>>>>>>>> >>>>> >> --
> > > > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire
> Inc.
> > > > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> > > > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> > > > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the
> cloud™
> > > > > >>>>>>>>>>>>>>> >>>>> >
> > > > > >>>>>>>>>>>>>>> >>>>> >
> > > > > >>>>>>>>>>>>>>> >>>>> >
> > > > > >>>>>>>>>>>>>>> >>>>> >
> > > > > >>>>>>>>>>>>>>> >>>>> > --
> > > > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
> > > > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> > > > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> > > > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>>
> > > > > >>>>>>>>>>>>>>> >>>> --
> > > > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> > > > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> > > > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
> > > > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>>
> > > > > >>>>>>>>>>>>>>> >>> --
> > > > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> > > > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> > > > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
> > > > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> > > > > >>>>>>>>>>>>>>> >>
> > > > > >>>>>>>>>>>>>>> >>
> > > > > >>>>>>>>>>>>>>> >>
> > > > > >>>>>>>>>>>>>>> >>
> > > > > >>>>>>>>>>>>>>> >> --
> > > > > >>>>>>>>>>>>>>> >> Mike Tutkowski
> > > > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > > > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> > > > > >>>>>>>>>>>>>>> >> o: 303.746.7302
> > > > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>> --
> > > > > >>>>>>>>>>>>>> Mike Tutkowski
> > > > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > > > > >>>>>>>>>>>>>> o: 303.746.7302
> > > > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>>
> > > > > >>>>>>>>>>>>> --
> > > > > >>>>>>>>>>>>> Mike Tutkowski
> > > > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > > > > >>>>>>>>>>>>> o: 303.746.7302
> > > > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > > > >>>>>>>>>
> > > > > >>>>>>>>>
> > > > > >>>>>>>>>
> > > > > >>>>>>>>>
> > > > > >>>>>>>>> --
> > > > > >>>>>>>>> Mike Tutkowski
> > > > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > > > >>>>>>>>> e: mike.tutkowski@solidfire.com
> > > > > >>>>>>>>> o: 303.746.7302
> > > > > >>>>>>>>> Advancing the way the world uses the cloud™
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>>
> > > > > >>>>>> --
> > > > > >>>>>> Mike Tutkowski
> > > > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > > > >>>>>> e: mike.tutkowski@solidfire.com
> > > > > >>>>>> o: 303.746.7302
> > > > > >>>>>> Advancing the way the world uses the cloud™
> > > > > >>>
> > > > > >>>
> > > > > >>>
> > > > > >>>
> > > > > >>> --
> > > > > >>> Mike Tutkowski
> > > > > >>> Senior CloudStack Developer, SolidFire Inc.
> > > > > >>> e: mike.tutkowski@solidfire.com
> > > > > >>> o: 303.746.7302
> > > > > >>> Advancing the way the world uses the cloud™
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Mike Tutkowski
> > > > > > Senior CloudStack Developer, SolidFire Inc.
> > > > > > e: mike.tutkowski@solidfire.com
> > > > > > o: 303.746.7302
> > > > > > Advancing the way the world uses the cloud™
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Mike Tutkowski*
> > > > *Senior CloudStack Developer, SolidFire Inc.*
> > > > e: mike.tutkowski@solidfire.com
> > > > o: 303.746.7302
> > > > Advancing the way the world uses the
> > > > cloud<http://solidfire.com/solution/overview/?video=play>
> > > > *™*
> > > >
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
You respond to more than attach and detach, right? Don't you create luns as
well? Or are you just referring to the hypervisor stuff?
On Sep 17, 2013 7:51 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Hi Marcus,
>
> I never need to respond to a CreateStoragePool call for either XenServer or
> VMware.
>
> What happens is I respond only to the Attach- and Detach-volume commands.
>
> Let's say an attach comes in:
>
> In this case, I check to see if the storage is "managed." Talking XenServer
> here, if it is, I log in to the LUN that is the disk we want to attach.
> After, if this is the first time attaching this disk, I create an SR and a
> VDI within the SR. If it is not the first time attaching this disk, the LUN
> already has the SR and VDI on it.
>
> Once this is done, I let the normal "attach" logic run because this logic
> expected an SR and a VDI and now it has it.
>
> It's the same thing for VMware: Just substitute datastore for SR and VMDK
> for VDI.
>
> Does that make sense?
>
> Thanks!
>
>
> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
>
> > What do you do with Xen? I imagine the user enters the SAN details when
> > registering the pool? And the pool details are basically just instructions
> on
> > how to log into a target, correct?
> >
> > You can choose to log in a KVM host to the target during
> createStoragePool
> > and save the pool in a map, or just save the pool info in a map for
> future
> > reference by uuid, for when you do need to log in. The createStoragePool
> > then just becomes a way to save the pool info to the agent. Personally,
> I'd
> > log in on the pool create and look/scan for specific luns when they're
> > needed, but I haven't thought it through thoroughly. I just say that
> mainly
> > because login only happens once, the first time the pool is used, and
> every
> > other storage command is about discovering new luns or maybe
> > deleting/disconnecting luns no longer needed. On the other hand, you
> could
> > do all of the above: log in on pool create, then also check if you're
> > logged in on other commands and log in if you've lost connection.
> >
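If the login ends up being handled with iscsiadm rather than through a libvirt pool, the agent side could shell out for discovery and login along these lines (a rough sketch only; the portal and IQN are made-up placeholders, and the by-path name in the comment is the usual udev convention, not anything CloudStack-specific):

    import java.io.IOException;

    public class IscsiLoginSketch {
        // Run a command, inheriting stdout/stderr; fail loudly on a non-zero exit.
        private static void run(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + String.join(" ", cmd));
            }
        }

        public static void main(String[] args) throws Exception {
            String portal = "10.0.0.5:3260";                      // placeholder SAN portal
            String iqn = "iqn.2010-01.com.solidfire:cs-vol-0001"; // placeholder target IQN

            // Discover targets on the portal, then log in to the one we want.
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

            // After login, udev normally exposes the LUN at a stable path such as
            // /dev/disk/by-path/ip-<portal>-iscsi-<iqn>-lun-0, which is what would
            // be handed to the VM as a raw block device.
            System.out.println("/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0");
        }
    }

The matching iscsiadm --logout call would be the cleanup path when the disk is detached or the VM is migrated away.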
> > With Xen, what does your registered pool show in the UI for avail/used
> > capacity, and how does it get that info? I assume there is some sort of
> > disk pool that the luns are carved from, and that your plugin is called
> to
> > talk to the SAN and expose to the user how much of that pool has been
> > allocated. Knowing how you already solve these problems with Xen will
> help
> > figure out what to do with KVM.
> >
> > If this is the case, I think the plugin can continue to handle it rather
> > than getting details from the agent. I'm not sure if that means nulls are
> > OK for these on the agent side or what, I need to look at the storage
> > plugin arch more closely.
> > On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mi...@solidfire.com>
> > wrote:
> >
> > > Hey Marcus,
> > >
> > > I'm reviewing your e-mails as I implement the necessary methods in new
> > > classes.
> > >
> > > "So, referencing StorageAdaptor.java, createStoragePool accepts all of
> > > the pool data (host, port, name, path) which would be used to log the
> > > host into the initiator."
> > >
> > > Can you tell me, in my case, since a storage pool (primary storage) is
> > > actually the SAN, I wouldn't really be logging into anything at this
> > point,
> > > correct?
> > >
> > > Also, what kind of capacity, available, and used bytes make sense to
> > report
> > > for KVMStoragePool (since KVMStoragePool represents the SAN in my case
> > and
> > > not an individual LUN)?
> > >
> > > Thanks!
> > >
> > >
> > > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com
> > > >wrote:
> > >
> > > > Ok, KVM will be close to that, of course, because only the hypervisor
> > > > classes differ, the rest is all mgmt server. Creating a volume is
> just
> > > > a db entry until it's deployed for the first time.
> AttachVolumeCommand
> > > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> > > > StorageAdaptor) to log in the host to the target and then you have a
> > > > block device.  Maybe libvirt will do that for you, but my quick read
> > > > made it sound like the iscsi libvirt pool type is actually a pool,
> not
> > > > a lun or volume, so you'll need to figure out if that works or if
> > > > you'll have to use iscsiadm commands.
> > > >
> > > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> > > > doesn't really manage your pool the way you want), you're going to
> > > > have to create a version of KVMStoragePool class and a StorageAdaptor
> > > > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> > > > implementing all of the methods, then in KVMStorageManager.java
> > > > there's a "_storageMapper" map. This is used to select the correct
> > > > adaptor, you can see in this file that every call first pulls the
> > > > correct adaptor out of this map via getStorageAdaptor. So you can see
> > > > a comment in this file that says "add other storage adaptors here",
> > > > where it puts to this map, this is where you'd register your adaptor.
> > > >
> > > > So, referencing StorageAdaptor.java, createStoragePool accepts all of
> > > > the pool data (host, port, name, path) which would be used to log the
> > > > host into the initiator. I *believe* the method getPhysicalDisk will
> > > > need to do the work of attaching the lun.  AttachVolumeCommand calls
> > > > this and then creates the XML diskdef and attaches it to the VM. Now,
> > > > one thing you need to know is that createStoragePool is called often,
> > > > sometimes just to make sure the pool is there. You may want to create
> > > > a map in your adaptor class and keep track of pools that have been
> > > > created, LibvirtStorageAdaptor doesn't have to do this because it
> asks
> > > > libvirt about which storage pools exist. There are also calls to
> > > > refresh the pool stats, and all of the other calls can be seen in the
> > > > StorageAdaptor as well. There's a createPhysicalDisk, clone, etc,
> but
> > > > it's probably a hold-over from 4.1, as I have the vague idea that
> > > > volumes are created on the mgmt server via the plugin now, so
> whatever
> > > > doesn't apply can just be stubbed out (or optionally
> > > > extended/reimplemented here, if you don't mind the hosts talking to
> > > > the san api).
> > > >
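For the diskdef itself, the raw LUN is just a block-device <disk> element handed to libvirt; a minimal sketch with the Java bindings might look like the following (the domain name and device path are placeholders, and real agent code would presumably build the XML via LibvirtVMDef rather than a literal string):

    import org.libvirt.Connect;
    import org.libvirt.Domain;

    public class AttachLunSketch {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");
            Domain vm = conn.domainLookupByName("i-2-10-VM");   // placeholder VM name

            // Raw block device (the logged-in LUN), referenced by its stable
            // by-path name and attached to the guest as a virtio disk.
            String diskXml =
                "<disk type='block' device='disk'>" +
                "  <driver name='qemu' type='raw' cache='none'/>" +
                "  <source dev='/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-" +
                "iqn.2010-01.com.solidfire:cs-vol-0001-lun-0'/>" +
                "  <target dev='vdb' bus='virtio'/>" +
                "</disk>";

            // Hot-attach the disk to the running VM.
            vm.attachDevice(diskXml);

            conn.close();
        }
    }

Migration would then mostly be a matter of making sure the destination host has logged in to the same target before the domain is moved, along the lines of the ACL/migration-prep discussion above.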
> > > > There is a difference between attaching new volumes and launching a
> VM
> > > > with existing volumes.  In the latter case, the VM definition that
> was
> > > > passed to the KVM agent includes the disks (StartCommand).
> > > >
> > > > I'd be interested in how your pool is defined for Xen, I imagine it
> > > > would need to be kept the same. Is it just a definition to the SAN
> > > > (ip address or some such, port number) and perhaps a volume pool
> name?
> > > >
> > > > > If there is a way for me to update the ACL list on the SAN to have
> > > only a
> > > > > single KVM host have access to the volume, that would be ideal.
> > > >
> > > > That depends on your SAN API.  I was under the impression that the
> > > > storage plugin framework allowed for acls, or for you to do whatever
> > > > you want for create/attach/delete/snapshot, etc. You'd just call your
> > > > SAN API with the host info for the ACLs prior to when the disk is
> > > > attached (or the VM is started).  I'd have to look more at the
> > > > framework to know the details, in 4.1 I would do this in
> > > > getPhysicalDisk just prior to connecting up the LUN.
> > > >
> > > >
> > > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > > > <mi...@solidfire.com> wrote:
> > > > > OK, yeah, the ACL part will be interesting. That is a bit different
> > > from
> > > > how
> > > > > it works with XenServer and VMware.
> > > > >
> > > > > Just to give you an idea how it works in 4.2 with XenServer:
> > > > >
> > > > > * The user creates a CS volume (this is just recorded in the
> > > > cloud.volumes
> > > > > table).
> > > > >
> > > > > * The user attaches the volume as a disk to a VM for the first time
> > (if
> > > > the
> > > > > storage allocator picks the SolidFire plug-in, the storage
> framework
> > > > invokes
> > > > > a method on the plug-in that creates a volume on the SAN...info
> like
> > > the
> > > > IQN
> > > > > of the SAN volume is recorded in the DB).
> > > > >
> > > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
> > > > > determines based on a flag passed in that the storage in question
> is
> > > > > "CloudStack-managed" storage (as opposed to "traditional"
> > preallocated
> > > > > storage). This tells it to discover the iSCSI target. Once
> discovered
> > > it
> > > > > determines if the iSCSI target already contains a storage
> repository
> > > (it
> > > > > would if this were a re-attach situation). If it does contain an SR
> > > > already,
> > > > > then there should already be one VDI, as well. If there is no SR,
> an
> > SR
> > > > is
> > > > > created and a single VDI is created within it (that takes up about
> as
> > > > much
> > > > > space as was requested for the CloudStack volume).
> > > > >
> > > > > * The normal attach-volume logic continues (it depends on the
> > existence
> > > > of
> > > > > an SR and a VDI).
> > > > >
> > > > > The VMware case is essentially the same (mainly just substitute
> > > datastore
> > > > > for SR and VMDK for VDI).
> > > > >
> > > > > In both cases, all hosts in the cluster have discovered the iSCSI
> > > target,
> > > > > but only the host that is currently running the VM that is using
> the
> > > VDI
> > > > (or
> > > > > VMKD) is actually using the disk.
> > > > >
> > > > > Live Migration should be OK because the hypervisors communicate
> with
> > > > > whatever metadata they have on the SR (or datastore).
> > > > >
> > > > > I see what you're saying with KVM, though.
> > > > >
> > > > > In that case, the hosts are clustered only in CloudStack's eyes. CS
> > > > controls
> > > > > Live Migration. You don't really need a clustered filesystem on the
> > > LUN.
> > > > The
> > > > > LUN could be handed over raw to the VM using it.
> > > > >
> > > > > If there is a way for me to update the ACL list on the SAN to have
> > > only a
> > > > > single KVM host have access to the volume, that would be ideal.
> > > > >
> > > > > Also, I agree I'll need to use iscsiadm to discover and log in to
> the
> > > > iSCSI
> > > > > target. I'll also need to take the resultant new device and pass it
> > > into
> > > > the
> > > > > VM.
> > > > >
> > > > > Does this sound reasonable? Please call me out on anything I seem
> > > > incorrect
> > > > > about. :)
> > > > >
> > > > > Thanks for all the thought on this, Marcus!
> > > > >
> > > > >
> > > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> > shadowsor@gmail.com>
> > > > > wrote:
> > > > >>
> > > > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
> > > attach
> > > > >> the disk def to the vm. You may need to do your own StorageAdaptor
> > and
> > > > run
> > > > >> iscsiadm commands to accomplish that, depending on how the libvirt
> > > iscsi
> > > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
> > > works
> > > > on
> > > > >> xen at the moment, nor is it ideal.
> > > > >>
> > > > >> Your plugin will handle acls as far as which host can see which
> luns
> > > as
> > > > >> well, I remember discussing that months ago, so that a disk won't
> be
> > > > >> connected until the hypervisor has exclusive access, so it will be
> > > safe
> > > > and
> > > > >> fence the disk from rogue nodes that cloudstack loses connectivity
> > > > with. It
> > > > >> should revoke access to everything but the target host... Except
> for
> > > > during
> > > > >> migration but we can discuss that later, there's a migration prep
> > > > process
> > > > >> where the new host can be added to the acls, and the old host can
> be
> > > > removed
> > > > >> post migration.
> > > > >>
> > > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > > mike.tutkowski@solidfire.com
> > > > >
> > > > >> wrote:
> > > > >>>
> > > > >>> Yeah, that would be ideal.
> > > > >>>
> > > > >>> So, I would still need to discover the iSCSI target, log in to
> it,
> > > then
> > > > >>> figure out what /dev/sdX was created as a result (and leave it as
> > is
> > > -
> > > > do
> > > > >>> not format it with any file system...clustered or not). I would
> > pass
> > > > that
> > > > >>> device into the VM.
> > > > >>>
> > > > >>> Kind of accurate?
> > > > >>>
> > > > >>>
> > > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > > shadowsor@gmail.com>
> > > > >>> wrote:
> > > > >>>>
> > > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
> > There
> > > > are
> > > > >>>> ones that work for block devices rather than files. You can
> piggy
> > > > back off
> > > > >>>> of the existing disk definitions and attach it to the vm as a
> > block
> > > > device.
> > > > >>>> The definition is an XML string per libvirt XML format. You may
> > want
> > > > to use
> > > > >>>> an alternate path to the disk rather than just /dev/sdx like I
> > > > mentioned,
> > > > >>>> there are by-id paths to the block devices, as well as other
> ones
> > > > that will
> > > > >>>> be consistent and easier for management, not sure how familiar
> you
> > > > are with
> > > > >>>> device naming on Linux.
> > > > >>>>
> > > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <shadowsor@gmail.com
> >
> > > > wrote:
> > > > >>>>>
> > > > >>>>> No, as that would rely on virtualized network/iscsi initiator
> > > inside
> > > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
> > > > hypervisor) as
> > > > >>>>> a disk to the VM, rather than attaching some image file that
> > > resides
> > > > on a
> > > > >>>>> filesystem, mounted on the host, living on a target.
> > > > >>>>>
> > > > >>>>> Actually, if you plan on the storage supporting live migration
> I
> > > > think
> > > > >>>>> this is the only way. You can't put a filesystem on it and
> mount
> > it
> > > > in two
> > > > >>>>> places to facilitate migration unless its a clustered
> filesystem,
> > > in
> > > > which
> > > > >>>>> case you're back to shared mount point.
> > > > >>>>>
> > > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
> > with a
> > > > xen
> > > > >>>>> specific cluster management, a custom CLVM. They don't use a
> > > > filesystem
> > > > >>>>> either.
> > > > >>>>>
> > > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > > > >>>>> <mi...@solidfire.com> wrote:
> > > > >>>>>>
> > > > >>>>>> When you say, "wire up the lun directly to the vm," do you
> mean
> > > > >>>>>> circumventing the hypervisor? I didn't think we could do that
> in
> > > CS.
> > > > >>>>>> OpenStack, on the other hand, always circumvents the
> hypervisor,
> > > as
> > > > far as I
> > > > >>>>>> know.
> > > > >>>>>>
> > > > >>>>>>
> > > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > > > shadowsor@gmail.com>
> > > > >>>>>> wrote:
> > > > >>>>>>>
> > > > >>>>>>> Better to wire up the lun directly to the vm unless there is
> a
> > > good
> > > > >>>>>>> reason not to.
> > > > >>>>>>>
> > > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> > shadowsor@gmail.com>
> > > > >>>>>>> wrote:
> > > > >>>>>>>>
> > > > >>>>>>>> You could do that, but as mentioned I think its a mistake to
> > go
> > > to
> > > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
> > and
> > > > then putting
> > > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
> > > even
> > > > RAW disk
> > > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along
> the
> > > > way, and have
> > > > >>>>>>>> more overhead with the filesystem and its journaling, etc.
> > > > >>>>>>>>
> > > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > > > >>>>>>>> <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>
> > > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
> > CS.
> > > > >>>>>>>>>
> > > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is by
> > > > >>>>>>>>> selecting SharedMountPoint and specifying the location of
> the
> > > > share.
> > > > >>>>>>>>>
> > > > >>>>>>>>> They can set up their share using Open iSCSI by discovering
> > > their
> > > > >>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere
> on
> > > > their file
> > > > >>>>>>>>> system.
> > > > >>>>>>>>>
> > > > >>>>>>>>> Would it make sense for me to just do that discovery,
> logging
> > > in,
> > > > >>>>>>>>> and mounting behind the scenes for them and letting the
> > current
> > > > code manage
> > > > >>>>>>>>> the rest as it currently does?
> > > > >>>>>>>>>
> > > > >>>>>>>>>
> > > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > > > >>>>>>>>> <sh...@gmail.com> wrote:
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
> > catch
> > > up
> > > > >>>>>>>>>> on the work done in KVM, but this is basically just disk
> > > > snapshots + memory
> > > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
> > handled
> > > > by the SAN,
> > > > >>>>>>>>>> and then memory dumps can go to secondary storage or
> > something
> > > > else. This is
> > > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want to
> > see
> > > > how others are
> > > > >>>>>>>>>> planning theirs.
> > > > >>>>>>>>>>
> > > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > > shadowsor@gmail.com
> > > > >
> > > > >>>>>>>>>> wrote:
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
> style
> > on
> > > > an
> > > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> format.
> > > > Otherwise you're
> > > > >>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
> > > > QCOW2 disk image,
> > > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
> > VM,
> > > > and
> > > > >>>>>>>>>>> handling snapshots on the San side via the storage plugin
> > is
> > > > best. My
> > > > >>>>>>>>>>> impression from the storage plugin refactor was that
> there
> > > was
> > > > a snapshot
> > > > >>>>>>>>>>> service that would allow the San to handle snapshots.
> > > > >>>>>>>>>>>
> > > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > > > shadowsor@gmail.com>
> > > > >>>>>>>>>>> wrote:
> > > > >>>>>>>>>>>>
> > > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
> > end,
> > > > if
> > > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
> call
> > > > your plugin for
> > > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
> > far
> > > > as space, that
> > > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
> > carve
> > > > out luns from a
> > > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
> > > > independent of the
> > > > >>>>>>>>>>>> LUN size the host sees.
> > > > >>>>>>>>>>>>
> > > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> Hey Marcus,
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
> won't
> > > > work
> > > > >>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
> > VDI
> > > > for
> > > > >>>>>>>>>>>>> the snapshot is placed on the same storage repository
> as
> > > the
> > > > volume is on.
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
> XenServer
> > > and
> > > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> > snapshots
> > > > in 4.2) is I'd
> > > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the user
> > > > requested for the
> > > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
> > > > provisions volumes,
> > > > >>>>>>>>>>>>> so the space is not actually used unless it needs to
> be).
> > > > The CloudStack
> > > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
> > until a
> > > > hypervisor
> > > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on
> the
> > > > SAN volume.
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
> creation
> > of
> > > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even
> if
> > > > there were support
> > > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
> iSCSI
> > > > target), then I
> > > > >>>>>>>>>>>>> don't see how using this model will work.
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
> > > works
> > > > >>>>>>>>>>>>> with DIR?
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> What do you think?
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> Thanks
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>
> > > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
> > > today.
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
> > well
> > > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> > > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > > >>>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
> > > just
> > > > >>>>>>>>>>>>>>> acts like a
> > > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
> > > > end-user
> > > > >>>>>>>>>>>>>>> is
> > > > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
> > hosts
> > > > can
> > > > >>>>>>>>>>>>>>> access,
> > > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
> > > > storage.
> > > > >>>>>>>>>>>>>>> It could
> > > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
> > > > >>>>>>>>>>>>>>> cloudstack just
> > > > >>>>>>>>>>>>>>> knows that the provided directory path has VM images.
> > > > >>>>>>>>>>>>>>>
> > > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> > > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
> > same
> > > > >>>>>>>>>>>>>>> > time.
> > > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > > > >>>>>>>>>>>>>>> >

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
"I imagine the user enter the SAN details when
registering the pool?"

When primary storage based on the SolidFire plug-in is added to CS, these
details (host, port, etc.) are provided. The primary storage then represents
the SAN itself and not a preallocated volume (i.e. not a particular LUN).
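
For illustration only, the registration info is along these lines (the key
names below are made up for the example, not the plug-in's exact format).
Note there is no IQN or LUN in it:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of the SAN-level details captured when the
    // SolidFire-backed primary storage is registered. Key names are illustrative.
    public class SolidFirePoolDetailsExample {
        public static Map<String, String> sanDetails() {
            Map<String, String> d = new HashMap<String, String>();
            d.put("managementIp", "192.168.1.50");   // SAN API endpoint
            d.put("storageIp", "192.168.1.60");      // iSCSI portal the hosts will log in to
            d.put("storagePort", "3260");
            d.put("clusterAdminUsername", "admin");
            d.put("clusterAdminPassword", "secret");
            // No IQN/LUN here: iSCSI targets are created per CloudStack volume later.
            return d;
        }
    }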


On Tue, Sep 17, 2013 at 7:51 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Hi Marcus,
>
> I never need to respond to a CreateStoragePool call for either XenServer
> or VMware.
>
> What happens is I respond only to the Attach- and Detach-volume commands.
>
> Let's say an attach comes in:
>
> In this case, I check to see if the storage is "managed." Talking
> XenServer here, if it is, I log in to the LUN that is the disk we want to
> attach. After that, if this is the first time attaching this disk, I create an
> SR and a VDI within the SR. If it is not the first time attaching this
> disk, the LUN already has the SR and VDI on it.
>
> Once this is done, I let the normal "attach" logic run because this logic
> expects an SR and a VDI, and now it has them.
>
> It's the same thing for VMware: Just substitute datastore for SR and VMDK
> for VDI.
>
> Does that make sense?
>
> Thanks!
>
>
> On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen <sh...@gmail.com> wrote:
>
>> What do you do with Xen? I imagine the user enters the SAN details when
>> registering the pool? And the pool details are basically just instructions
>> on how to log into a target, correct?
>>
>> You can choose to log in a KVM host to the target during createStoragePool
>> and save the pool in a map, or just save the pool info in a map for future
>> reference by uuid, for when you do need to log in. The createStoragePool
>> then just becomes a way to save the pool info to the agent. Personally,
>> I'd
>> log in on the pool create and look/scan for specific luns when they're
>> needed, but I haven't thought it through thoroughly. I just say that
>> mainly
>> because login only happens once, the first time the pool is used, and
>> every
>> other storage command is about discovering new luns or maybe
>> deleting/disconnecting luns no longer needed. On the other hand, you could
>> do all of the above: log in on pool create, then also check if you're
>> logged in on other commands and log in if you've lost connection.
>>
>> With Xen, what does your registered pool show in the UI for avail/used
>> capacity, and how does it get that info? I assume there is some sort of
>> disk pool that the luns are carved from, and that your plugin is called to
>> talk to the SAN and expose to the user how much of that pool has been
>> allocated. Knowing how you already solve these problems with Xen will
>> help figure out what to do with KVM.
>>
>> If this is the case, I think the plugin can continue to handle it rather
>> than getting details from the agent. I'm not sure if that means nulls are
>> OK for these on the agent side or what, I need to look at the storage
>> plugin arch more closely.
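
A minimal sketch of the "save the pool info in a map for future reference by
uuid" idea (the class and method names are illustrative, not the actual agent
interface):

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: createStoragePool on the agent just records the SAN portal,
    // keyed by pool uuid; no iSCSI login happens until a specific LUN is needed.
    public class ManagedPoolCacheSketch {
        private final Map<String, String> sanPortalByPoolUuid = new HashMap<String, String>();

        // Called whenever the mgmt server (re)sends the pool, so keep it idempotent.
        public void createStoragePool(String poolUuid, String sanHost, int sanPort) {
            sanPortalByPoolUuid.put(poolUuid, sanHost + ":" + sanPort);
        }

        public String getSanPortal(String poolUuid) {
            return sanPortalByPoolUuid.get(poolUuid);
        }
    }
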
>> On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>
>> > Hey Marcus,
>> >
>> > I'm reviewing your e-mails as I implement the necessary methods in new
>> > classes.
>> >
>> > "So, referencing StorageAdaptor.java, createStoragePool accepts all of
>> > the pool data (host, port, name, path) which would be used to log the
>> > host into the initiator."
>> >
>> > Can you tell me, in my case, since a storage pool (primary storage) is
>> > actually the SAN, I wouldn't really be logging into anything at this
>> point,
>> > correct?
>> >
>> > Also, what kind of capacity, available, and used bytes make sense to
>> report
>> > for KVMStoragePool (since KVMStoragePool represents the SAN in my case
>> and
>> > not an individual LUN)?
>> >
>> > Thanks!
>> >
>> >
>> > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com
>> > >wrote:
>> >
>> > > Ok, KVM will be close to that, of course, because only the hypervisor
>> > > classes differ, the rest is all mgmt server. Creating a volume is just
>> > > a db entry until it's deployed for the first time. AttachVolumeCommand
>> > > on the agent side (LibvirtStorageAdaptor.java is analogous to
>> > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
>> > > StorageAdaptor) to log in the host to the target and then you have a
>> > > block device.  Maybe libvirt will do that for you, but my quick read
>> > > made it sound like the iscsi libvirt pool type is actually a pool, not
>> > > a lun or volume, so you'll need to figure out if that works or if
>> > > you'll have to use iscsiadm commands.
>> > >
>> > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
>> > > doesn't really manage your pool the way you want), you're going to
>> > > have to create a version of KVMStoragePool class and a StorageAdaptor
>> > > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
>> > > implementing all of the methods, then in KVMStorageManager.java
>> > > there's a "_storageMapper" map. This is used to select the correct
>> > > adaptor, you can see in this file that every call first pulls the
>> > > correct adaptor out of this map via getStorageAdaptor. So you can see
>> > > a comment in this file that says "add other storage adaptors here",
>> > > where it puts to this map, this is where you'd register your adaptor.
>> > >
>> > > So, referencing StorageAdaptor.java, createStoragePool accepts all of
>> > > the pool data (host, port, name, path) which would be used to log the
>> > > host into the initiator. I *believe* the method getPhysicalDisk will
>> > > need to do the work of attaching the lun.  AttachVolumeCommand calls
>> > > this and then creates the XML diskdef and attaches it to the VM. Now,
>> > > one thing you need to know is that createStoragePool is called often,
>> > > sometimes just to make sure the pool is there. You may want to create
>> > > a map in your adaptor class and keep track of pools that have been
>> > > created, LibvirtStorageAdaptor doesn't have to do this because it asks
>> > > libvirt about which storage pools exist. There are also calls to
>> > > refresh the pool stats, and all of the other calls can be seen in the
>> > > StorageAdaptor as well. There's a createPhysical disk, clone, etc, but
>> > > it's probably a hold-over from 4.1, as I have the vague idea that
>> > > volumes are created on the mgmt server via the plugin now, so whatever
>> > > doesn't apply can just be stubbed out (or optionally
>> > > extended/reimplemented here, if you don't mind the hosts talking to
>> > > the san api).
>> > >
>> > > There is a difference between attaching new volumes and launching a VM
>> > > with existing volumes.  In the latter case, the VM definition that was
>> > > passed to the KVM agent includes the disks, (StartCommand).
>> > >
>> > > I'd be interested in how your pool is defined for Xen, I imagine it
>> > > would need to be kept the same. Is it just a definition to the SAN
>> > > (ip address or some such, port number) and perhaps a volume pool name?
>> > >
>> > > > If there is a way for me to update the ACL list on the SAN to have
>> > only a
>> > > > single KVM host have access to the volume, that would be ideal.
>> > >
>> > > That depends on your SAN API.  I was under the impression that the
>> > > storage plugin framework allowed for acls, or for you to do whatever
>> > > you want for create/attach/delete/snapshot, etc. You'd just call your
>> > > SAN API with the host info for the ACLs prior to when the disk is
>> > > attached (or the VM is started).  I'd have to look more at the
>> > > framework to know the details, in 4.1 I would do this in
>> > > getPhysicalDisk just prior to connecting up the LUN.
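
A very rough skeleton of how those pieces could hang together (the method
names and signatures below are simplified and are not the exact
StorageAdaptor interface):

    // Simplified sketch only; the real StorageAdaptor interface has more methods and
    // different signatures. It just shows where the steps described above would live.
    // The adaptor would be registered in KVMStorageManager's adaptor map (the
    // "add other storage adaptors here" spot) so managed pools get routed to it.
    public class SolidFireStorageAdaptorSketch {

        // Called often, sometimes only to confirm the pool exists, so keep it idempotent
        // and track known pools yourself (libvirt is not managing them in this model).
        public void createStoragePool(String uuid, String sanHost, int sanPort) {
            // record the pool, e.g. in a uuid-keyed map; no iSCSI login is needed yet,
            // because the "pool" is the SAN itself rather than a single target
        }

        // AttachVolumeCommand ultimately needs a block device to reference in the disk XML.
        public String getPhysicalDisk(String poolUuid, String targetIqn) {
            // 1. optionally ask the SAN API to put this host on the volume's ACL
            // 2. iscsiadm discovery + login against the pool's portal for targetIqn
            // 3. return the resulting device, e.g. a /dev/disk/by-path/... path
            return "/dev/disk/by-path/ip-<portal>-iscsi-" + targetIqn + "-lun-0";
        }

        public void deleteStoragePool(String uuid) {
            // forget the pool; per-volume target logouts happen on detach instead
        }
    }
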
>> > >
>> > >
>> > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> > > <mi...@solidfire.com> wrote:
>> > > > OK, yeah, the ACL part will be interesting. That is a bit different
>> > from
>> > > how
>> > > > it works with XenServer and VMware.
>> > > >
>> > > > Just to give you an idea how it works in 4.2 with XenServer:
>> > > >
>> > > > * The user creates a CS volume (this is just recorded in the
>> > > cloud.volumes
>> > > > table).
>> > > >
>> > > > * The user attaches the volume as a disk to a VM for the first time
>> (if
>> > > the
>> > > > storage allocator picks the SolidFire plug-in, the storage framework
>> > > invokes
>> > > > a method on the plug-in that creates a volume on the SAN...info like
>> > the
>> > > IQN
>> > > > of the SAN volume is recorded in the DB).
>> > > >
>> > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
>> > > > determines based on a flag passed in that the storage in question is
>> > > > "CloudStack-managed" storage (as opposed to "traditional"
>> preallocated
>> > > > storage). This tells it to discover the iSCSI target. Once
>> discovered
>> > it
>> > > > determines if the iSCSI target already contains a storage repository
>> > (it
>> > > > would if this were a re-attach situation). If it does contain an SR
>> > > already,
>> > > > then there should already be one VDI, as well. If there is no SR,
>> an SR
>> > > is
>> > > > created and a single VDI is created within it (that takes up about
>> as
>> > > much
>> > > > space as was requested for the CloudStack volume).
>> > > >
>> > > > * The normal attach-volume logic continues (it depends on the
>> existence
>> > > of
>> > > > an SR and a VDI).
>> > > >
>> > > > The VMware case is essentially the same (mainly just substitute
>> > datastore
>> > > > for SR and VMDK for VDI).
>> > > >
>> > > > In both cases, all hosts in the cluster have discovered the iSCSI
>> > target,
>> > > > but only the host that is currently running the VM that is using the
>> > VDI
>> > > (or
>> > > > VMDK) is actually using the disk.
>> > > >
>> > > > Live Migration should be OK because the hypervisors communicate with
>> > > > whatever metadata they have on the SR (or datastore).
>> > > >
>> > > > I see what you're saying with KVM, though.
>> > > >
>> > > > In that case, the hosts are clustered only in CloudStack's eyes. CS
>> > > controls
>> > > > Live Migration. You don't really need a clustered filesystem on the
>> > LUN.
>> > > The
>> > > > LUN could be handed over raw to the VM using it.
>> > > >
>> > > > If there is a way for me to update the ACL list on the SAN to have
>> > only a
>> > > > single KVM host have access to the volume, that would be ideal.
>> > > >
>> > > > Also, I agree I'll need to use iscsiadm to discover and log in to
>> the
>> > > iSCSI
>> > > > target. I'll also need to take the resultant new device and pass it
>> > into
>> > > the
>> > > > VM.
>> > > >
>> > > > Does this sound reasonable? Please call me out on anything I seem
>> > > incorrect
>> > > > about. :)
>> > > >
>> > > > Thanks for all the thought on this, Marcus!
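
A sketch of what that discovery/login step could look like on the host (the
IQN and portal values are just examples):

    import java.util.Arrays;

    // Illustrative only: shell out to iscsiadm for one CloudStack volume
    // (one iSCSI target containing a single LUN) and return the device to pass to the VM.
    public class IscsiConnectSketch {
        public static String connect(String portalIp, int port, String iqn) throws Exception {
            String portal = portalIp + ":" + port;
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
            // udev exposes the LUN at a stable by-path name; that is what the VM gets
            return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
        }

        private static void run(String... cmd) throws Exception {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new RuntimeException("command failed: " + Arrays.toString(cmd));
            }
        }
    }
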
>> > > >
>> > > >
>> > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
>> shadowsor@gmail.com>
>> > > > wrote:
>> > > >>
>> > > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
>> > attach
>> > > >> the disk def to the vm. You may need to do your own StorageAdaptor
>> and
>> > > run
>> > > >> iscsiadm commands to accomplish that, depending on how the libvirt
>> > iscsi
>> > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
>> > works
>> > > on
>> > > >> xen at the moment, nor is it ideal.
>> > > >>
>> > > >> Your plugin will handle acls as far as which host can see which
>> luns
>> > as
>> > > >> well, I remember discussing that months ago, so that a disk won't
>> be
>> > > >> connected until the hypervisor has exclusive access, so it will be
>> > safe
>> > > and
>> > > >> fence the disk from rogue nodes that cloudstack loses connectivity
>> > > with. It
>> > > >> should revoke access to everything but the target host... Except
>> for
>> > > during
>> > > >> migration but we can discuss that later, there's a migration prep
>> > > process
>> > > >> where the new host can be added to the acls, and the old host can
>> be
>> > > removed
>> > > >> post migration.
>> > > >>
>> > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>> > mike.tutkowski@solidfire.com
>> > > >
>> > > >> wrote:
>> > > >>>
>> > > >>> Yeah, that would be ideal.
>> > > >>>
>> > > >>> So, I would still need to discover the iSCSI target, log in to it,
>> > then
>> > > >>> figure out what /dev/sdX was created as a result (and leave it as
>> is
>> > -
>> > > do
>> > > >>> not format it with any file system...clustered or not). I would
>> pass
>> > > that
>> > > >>> device into the VM.
>> > > >>>
>> > > >>> Kind of accurate?
>> > > >>>
>> > > >>>
>> > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
>> > shadowsor@gmail.com>
>> > > >>> wrote:
>> > > >>>>
>> > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
>> There
>> > > are
>> > > >>>> ones that work for block devices rather than files. You can piggy
>> > > back off
>> > > >>>> of the existing disk definitions and attach it to the vm as a
>> block
>> > > device.
>> > > >>>> The definition is an XML string per libvirt XML format. You may
>> want
>> > > to use
>> > > >>>> an alternate path to the disk rather than just /dev/sdx like I
>> > > mentioned,
>> > > >>>> there are by-id paths to the block devices, as well as other ones
>> > > that will
>> > > >>>> be consistent and easier for management, not sure how familiar
>> you
>> > > are with
>> > > >>>> device naming on Linux.
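
For reference, the disk definition the agent generates would be roughly this
kind of libvirt XML (the by-path value is only an example):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-path/ip-192.168.1.60:3260-iscsi-iqn.2013-09.com.example:vol-1-lun-0'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
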
>> > > >>>>
>> > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com>
>> > > wrote:
>> > > >>>>>
>> > > >>>>> No, as that would rely on virtualized network/iscsi initiator
>> > inside
>> > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
>> > > hypervisor) as
>> > > >>>>> a disk to the VM, rather than attaching some image file that
>> > resides
>> > > on a
>> > > >>>>> filesystem, mounted on the host, living on a target.
>> > > >>>>>
>> > > >>>>> Actually, if you plan on the storage supporting live migration I
>> > > think
>> > > >>>>> this is the only way. You can't put a filesystem on it and
>> mount it
>> > > in two
>> > > >>>>> places to facilitate migration unless its a clustered
>> filesystem,
>> > in
>> > > which
>> > > >>>>> case you're back to shared mount point.
>> > > >>>>>
>> > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
>> with a
>> > > xen
>> > > >>>>> specific cluster management, a custom CLVM. They don't use a
>> > > filesystem
>> > > >>>>> either.
>> > > >>>>>
>> > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>> > > >>>>> <mi...@solidfire.com> wrote:
>> > > >>>>>>
>> > > >>>>>> When you say, "wire up the lun directly to the vm," do you mean
>> > > >>>>>> circumventing the hypervisor? I didn't think we could do that
>> in
>> > CS.
>> > > >>>>>> OpenStack, on the other hand, always circumvents the
>> hypervisor,
>> > as
>> > > far as I
>> > > >>>>>> know.
>> > > >>>>>>
>> > > >>>>>>
>> > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
>> > > shadowsor@gmail.com>
>> > > >>>>>> wrote:
>> > > >>>>>>>
>> > > >>>>>>> Better to wire up the lun directly to the vm unless there is a
>> > good
>> > > >>>>>>> reason not to.
>> > > >>>>>>>
>> > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
>> shadowsor@gmail.com>
>> > > >>>>>>> wrote:
>> > > >>>>>>>>
>> > > >>>>>>>> You could do that, but as mentioned I think it's a mistake to
>> go
>> > to
>> > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
>> and
>> > > then putting
>> > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
>> > even
>> > > RAW disk
>> > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along the
>> > > way, and have
>> > > >>>>>>>> more overhead with the filesystem and its journaling, etc.
>> > > >>>>>>>>
>> > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>> > > >>>>>>>> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>
>> > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
>> CS.
>> > > >>>>>>>>>
>> > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is by
>> > > >>>>>>>>> selecting SharedMountPoint and specifying the location of
>> the
>> > > share.
>> > > >>>>>>>>>
>> > > >>>>>>>>> They can set up their share using Open iSCSI by discovering
>> > their
>> > > >>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere
>> on
>> > > their file
>> > > >>>>>>>>> system.
>> > > >>>>>>>>>
>> > > >>>>>>>>> Would it make sense for me to just do that discovery,
>> logging
>> > in,
>> > > >>>>>>>>> and mounting behind the scenes for them and letting the
>> current
>> > > code manage
>> > > >>>>>>>>> the rest as it currently does?
>> > > >>>>>>>>>
>> > > >>>>>>>>>
>> > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>> > > >>>>>>>>> <sh...@gmail.com> wrote:
>> > > >>>>>>>>>>
>> > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
>> catch
>> > up
>> > > >>>>>>>>>> on the work done in KVM, but this is basically just disk
>> > > snapshots + memory
>> > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
>> handled
>> > > by the SAN,
>> > > >>>>>>>>>> and then memory dumps can go to secondary storage or
>> something
>> > > else. This is
>> > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want to
>> see
>> > > how others are
>> > > >>>>>>>>>> planning theirs.
>> > > >>>>>>>>>>
>> > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
>> > shadowsor@gmail.com
>> > > >
>> > > >>>>>>>>>> wrote:
>> > > >>>>>>>>>>>
>> > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
>> style on
>> > > an
>> > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format.
>> > > Otherwise you're
>> > > >>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
>> > > QCOW2 disk image,
>> > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
>> > > >>>>>>>>>>>
>> > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
>> VM,
>> > > and
>> > > >>>>>>>>>>> handling snapshots on the San side via the storage plugin
>> is
>> > > best. My
>> > > >>>>>>>>>>> impression from the storage plugin refactor was that there
>> > was
>> > > a snapshot
>> > > >>>>>>>>>>> service that would allow the San to handle snapshots.
>> > > >>>>>>>>>>>
>> > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
>> > > shadowsor@gmail.com>
>> > > >>>>>>>>>>> wrote:
>> > > >>>>>>>>>>>>
>> > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
>> end,
>> > > if
>> > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
>> call
>> > > your plugin for
>> > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
>> far
>> > > as space, that
>> > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
>> carve
>> > > out luns from a
>> > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
>> > > independent of the
>> > > >>>>>>>>>>>> LUN size the host sees.
>> > > >>>>>>>>>>>>
>> > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>> > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> Hey Marcus,
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
>> won't
>> > > work
>> > > >>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
>> VDI
>> > > for
>> > > >>>>>>>>>>>>> the snapshot is placed on the same storage repository as
>> > the
>> > > volume is on.
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> Same idea for VMware, I believe.
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> So, what would happen in my case (let's say for
>> XenServer
>> > and
>> > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>> snapshots
>> > > in 4.2) is I'd
>> > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the user
>> > > requested for the
>> > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
>> > > provisions volumes,
>> > > >>>>>>>>>>>>> so the space is not actually used unless it needs to
>> be).
>> > > The CloudStack
>> > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
>> until a
>> > > hypervisor
>> > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on
>> the
>> > > SAN volume.
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no
>> creation of
>> > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if
>> > > there were support
>> > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
>> iSCSI
>> > > target), then I
>> > > >>>>>>>>>>>>> don't see how using this model will work.
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
>> > works
>> > > >>>>>>>>>>>>> with DIR?
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> What do you think?
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> Thanks
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>> > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
>> > today.
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
>> well
>> > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>> > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> > > >>>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
>> > just
>> > > >>>>>>>>>>>>>>> acts like a
>> > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
>> > > end-user
>> > > >>>>>>>>>>>>>>> is
>> > > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
>> hosts
>> > > can
>> > > >>>>>>>>>>>>>>> access,
>> > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
>> > > storage.
>> > > >>>>>>>>>>>>>>> It could
>> > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
>> > > >>>>>>>>>>>>>>> cloudstack just
>> > > >>>>>>>>>>>>>>> knows that the provided directory path has VM images.
>> > > >>>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>> > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
>> same
>> > > >>>>>>>>>>>>>>> > time.
>> > > >>>>>>>>>>>>>>> > Multiples, in fact.
>> > > >>>>>>>>>>>>>>> >
>> > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>> > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
>> > > >>>>>>>>>>>>>>> >> -----------------------------------------
>> > > >>>>>>>>>>>>>>> >> default              active     yes
>> > > >>>>>>>>>>>>>>> >> iSCSI                active     no
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
>> based
>> > on
>> > > >>>>>>>>>>>>>>> >>> an iSCSI target.
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
>> LUN,
>> > > so
>> > > >>>>>>>>>>>>>>> >>> there would only
>> > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>> > (libvirt)
>> > > >>>>>>>>>>>>>>> >>> storage pool.
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>> > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
>> > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt
>> > does
>> > > >>>>>>>>>>>>>>> >>> not support
>> > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
>> > > libvirt
>> > > >>>>>>>>>>>>>>> >>> supports
>> > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
>> since
>> > > >>>>>>>>>>>>>>> >>> each one of its
>> > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>> > > targets/LUNs).
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>> > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>> > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         String _poolType;
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         }
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         @Override
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         public String toString() {
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>             return _poolType;
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>         }
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>     }
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
>> > being
>> > > >>>>>>>>>>>>>>> >>>> used, but I'm
>> > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
>> someone
>> > > >>>>>>>>>>>>>>> >>>> selects the
>> > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI,
>> is
>> > > that
>> > > >>>>>>>>>>>>>>> >>>> the "netfs" option
>> > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> Thanks!
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
>> > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>> > > >>>>>>>>>>>>>>> >>>> wrote:
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>>
>> > http://libvirt.org/storage.html#StorageBackendISCSI
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
>> server,
>> > > and
>> > > >>>>>>>>>>>>>>> >>>>> cannot be
>> > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe
>> > your
>> > > >>>>>>>>>>>>>>> >>>>> plugin will take
>> > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging
>> in
>> > and
>> > > >>>>>>>>>>>>>>> >>>>> hooking it up to
>> > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in
>> the
>> > > Xen
>> > > >>>>>>>>>>>>>>> >>>>> stuff).
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>> provides a
>> > > 1:1
>> > > >>>>>>>>>>>>>>> >>>>> mapping, or if
>> > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device
>> as a
>> > > >>>>>>>>>>>>>>> >>>>> pool. You may need
>> > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
>> about
>> > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
>> > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your
>> own
>> > > >>>>>>>>>>>>>>> >>>>> storage adaptor
>> > > >>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.
>> >  We
>> > > >>>>>>>>>>>>>>> >>>>> can cross that
>> > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
>> > > >>>>>>>>>>>>>>> >>>>> bindings doc.
>> > > >>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/ Normally,
>> > > >>>>>>>>>>>>>>> >>>>> you'll see a
>> > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
>> that
>> > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
>> > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how
>> > that
>> > > >>>>>>>>>>>>>>> >>>>> is done for
>> > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java
>> > code
>> > > >>>>>>>>>>>>>>> >>>>> to see if you
>> > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
>> > storage
>> > > >>>>>>>>>>>>>>> >>>>> pools before you
>> > > >>>>>>>>>>>>>>> >>>>> get started.
>> > > >>>>>>>>>>>>>>> >>>>>
>> > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
>> more,
>> > but
>> > > >>>>>>>>>>>>>>> >>>>> > you figure it
>> > > >>>>>>>>>>>>>>> >>>>> > supports
>> > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>> targets,
>> > > >>>>>>>>>>>>>>> >>>>> > right?
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>> Tutkowski
>> > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
>> > classes
>> > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
>> > > >>>>>>>>>>>>>>> >>>>> >> last
>> > > >>>>>>>>>>>>>>> >>>>> >> week or so.
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
>> Sorensen
>> > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>> > > >>>>>>>>>>>>>>> >>>>> >> wrote:
>> > > >>>>>>>>>>>>>>> >>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
>> iscsi
>> > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>> > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages
>> > for
>> > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>> > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
>> > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator
>> > > login.
>> > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
>> > > >>>>>>>>>>>>>>> >>>>> >>> sent
>> > > >>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java
>> and
>> > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>> > > >>>>>>>>>>>>>>> >>>>> >>> storage type
>> > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>> > > >>>>>>>>>>>>>>> >>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>> > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>> > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>> release I
>> > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>> > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
>> > > framework
>> > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>> > > >>>>>>>>>>>>>>> >>>>> >>>> times
>> > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
>> delete
>> > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>> > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>> > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
>> > > mapping
>> > > >>>>>>>>>>>>>>> >>>>> >>>> between a
>> > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>> > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the
>> > > admin
>> > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
>> > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
>> would
>> > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
>> > > >>>>>>>>>>>>>>> >>>>> >>>> root and
>> > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
>> needed
>> > > to
>> > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>> > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>> > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>> > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
>> KVM.
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
>> > work
>> > > on
>> > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>> > > >>>>>>>>>>>>>>> >>>>> >>>> still
>> > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
>> will
>> > > need
>> > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
>> > > >>>>>>>>>>>>>>> >>>>> >>>> the
>> > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
>> > expect
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>> > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
>> this
>> > to
>> > > >>>>>>>>>>>>>>> >>>>> >>>> work?
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
>> > > >>>>>>>>>>>>>>> >>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>> >>>> --
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >>
>> > > >>>>>>>>>>>>>>> >>>>> >> --
>> > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> >
>> > > >>>>>>>>>>>>>>> >>>>> > --
>> > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>>
>> > > >>>>>>>>>>>>>>> >>>> --
>> > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>>
>> > > >>>>>>>>>>>>>>> >>> --
>> > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >>
>> > > >>>>>>>>>>>>>>> >> --
>> > > >>>>>>>>>>>>>>> >> Mike Tutkowski
>> > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>>> >> o: 303.746.7302
>> > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>> --
>> > > >>>>>>>>>>>>>> Mike Tutkowski
>> > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>>> o: 303.746.7302
>> > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>>
>> > > >>>>>>>>>>>>> --
>> > > >>>>>>>>>>>>> Mike Tutkowski
>> > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>>>>>> o: 303.746.7302
>> > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
>> > > >>>>>>>>>
>> > > >>>>>>>>>
>> > > >>>>>>>>>
>> > > >>>>>>>>>
>> > > >>>>>>>>> --
>> > > >>>>>>>>> Mike Tutkowski
>> > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>>>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>>>>> o: 303.746.7302
>> > > >>>>>>>>> Advancing the way the world uses the cloud™
>> > > >>>>>>
>> > > >>>>>>
>> > > >>>>>>
>> > > >>>>>>
>> > > >>>>>> --
>> > > >>>>>> Mike Tutkowski
>> > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>>>>> e: mike.tutkowski@solidfire.com
>> > > >>>>>> o: 303.746.7302
>> > > >>>>>> Advancing the way the world uses the cloud™
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>> --
>> > > >>> Mike Tutkowski
>> > > >>> Senior CloudStack Developer, SolidFire Inc.
>> > > >>> e: mike.tutkowski@solidfire.com
>> > > >>> o: 303.746.7302
>> > > >>> Advancing the way the world uses the cloud™
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > Mike Tutkowski
>> > > > Senior CloudStack Developer, SolidFire Inc.
>> > > > e: mike.tutkowski@solidfire.com
>> > > > o: 303.746.7302
>> > > > Advancing the way the world uses the cloud™
>> > >
>> >
>> >
>> >
>> > --
>> > *Mike Tutkowski*
>> > *Senior CloudStack Developer, SolidFire Inc.*
>> > e: mike.tutkowski@solidfire.com
>> > o: 303.746.7302
>> > Advancing the way the world uses the
>> > cloud<http://solidfire.com/solution/overview/?video=play>
>> > *™*
>> >
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hi Marcus,

I never need to respond to a CreateStoragePool call for either XenServer or
VMware.

What happens is I respond only to the Attach- and Detach-volume commands.

Let's say an attach comes in:

In this case, I check to see if the storage is "managed." Talking XenServer
here, if it is, I log in to the LUN that is the disk we want to attach.
After that, if this is the first time attaching this disk, I create an SR and a
VDI within the SR. If it is not the first time attaching this disk, the LUN
already has the SR and VDI on it.

Once this is done, I let the normal "attach" logic run because this logic
expects an SR and a VDI, and now it has them.

It's the same thing for VMware: Just substitute datastore for SR and VMDK
for VDI.

Does that make sense?

Thanks!
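
In rough pseudocode, the XenServer-side branch looks something like this (the
helper names are invented, just to show the shape of it):

    // Illustrative only, not the actual CitrixResourceBase logic; helpers are invented.
    public class ManagedAttachSketch {
        public void attachManaged(String targetIqn, long volumeSizeBytes, boolean firstAttach) {
            loginToIscsiTarget(targetIqn);                    // host discovers/logs in to the LUN
            if (firstAttach) {
                String srUuid = createSrOnLun(targetIqn);     // first attach: create the SR...
                createVdiInSr(srUuid, volumeSizeBytes);       // ...and one VDI that fills it
            }
            // otherwise the LUN already carries the SR and VDI; either way the normal
            // attach logic now finds the SR/VDI it expects and proceeds as usual
        }

        private void loginToIscsiTarget(String iqn) { /* XenAPI SR/PBD plumbing */ }
        private String createSrOnLun(String iqn) { return "sr-uuid"; }
        private void createVdiInSr(String srUuid, long sizeBytes) { }
    }
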


On Tue, Sep 17, 2013 at 7:34 PM, Marcus Sorensen <sh...@gmail.com> wrote:

> What do you do with Xen? I imagine the user enters the SAN details when
> registering the pool? And the pool details are basically just instructions on
> how to log into a target, correct?
>
> You can choose to log in a KVM host to the target during createStoragePool
> and save the pool in a map, or just save the pool info in a map for future
> reference by uuid, for when you do need to log in. The createStoragePool
> then just becomes a way to save the pool info to the agent. Personally, I'd
> log in on the pool create and look/scan for specific luns when they're
> needed, but I haven't thought it through thoroughly. I just say that mainly
> because login only happens once, the first time the pool is used, and every
> other storage command is about discovering new luns or maybe
> deleting/disconnecting luns no longer needed. On the other hand, you could
> do all of the above: log in on pool create, then also check if you're
> logged in on other commands and log in if you've lost connection.
>
> With Xen, what does your registered pool show in the UI for avail/used
> capacity, and how does it get that info? I assume there is some sort of
> disk pool that the luns are carved from, and that your plugin is called to
> talk to the SAN and expose to the user how much of that pool has been
> allocated. Knowing how you already solve these problems with Xen will help
> figure out what to do with KVM.
>
> If this is the case, I think the plugin can continue to handle it rather
> than getting details from the agent. I'm not sure if that means nulls are
> OK for these on the agent side or what, I need to look at the storage
> plugin arch more closely.
> On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
> > Hey Marcus,
> >
> > I'm reviewing your e-mails as I implement the necessary methods in new
> > classes.
> >
> > "So, referencing StorageAdaptor.java, createStoragePool accepts all of
> > the pool data (host, port, name, path) which would be used to log the
> > host into the initiator."
> >
> > Can you tell me, in my case, since a storage pool (primary storage) is
> > actually the SAN, I wouldn't really be logging into anything at this
> point,
> > correct?
> >
> > Also, what kind of capacity, available, and used bytes make sense to
> report
> > for KVMStoragePool (since KVMStoragePool represents the SAN in my case
> and
> > not an individual LUN)?
> >
> > Thanks!
> >
> >
> > On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com
> > >wrote:
> >
> > > Ok, KVM will be close to that, of course, because only the hypervisor
> > > classes differ, the rest is all mgmt server. Creating a volume is just
> > > a db entry until it's deployed for the first time. AttachVolumeCommand
> > > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> > > StorageAdaptor) to log in the host to the target and then you have a
> > > block device.  Maybe libvirt will do that for you, but my quick read
> > > made it sound like the iscsi libvirt pool type is actually a pool, not
> > > a lun or volume, so you'll need to figure out if that works or if
> > > you'll have to use iscsiadm commands.
> > >
> > > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> > > doesn't really manage your pool the way you want), you're going to
> > > have to create a version of KVMStoragePool class and a StorageAdaptor
> > > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> > > implementing all of the methods, then in KVMStorageManager.java
> > > there's a "_storageMapper" map. This is used to select the correct
> > > adaptor, you can see in this file that every call first pulls the
> > > correct adaptor out of this map via getStorageAdaptor. So you can see
> > > a comment in this file that says "add other storage adaptors here",
> > > where it puts to this map, this is where you'd register your adaptor.
> > >
> > > So, referencing StorageAdaptor.java, createStoragePool accepts all of
> > > the pool data (host, port, name, path) which would be used to log the
> > > host into the initiator. I *believe* the method getPhysicalDisk will
> > > need to do the work of attaching the lun.  AttachVolumeCommand calls
> > > this and then creates the XML diskdef and attaches it to the VM. Now,
> > > one thing you need to know is that createStoragePool is called often,
> > > sometimes just to make sure the pool is there. You may want to create
> > > a map in your adaptor class and keep track of pools that have been
> > > created, LibvirtStorageAdaptor doesn't have to do this because it asks
> > > libvirt about which storage pools exist. There are also calls to
> > > refresh the pool stats, and all of the other calls can be seen in the
> > > StorageAdaptor as well. There's a createPhysical disk, clone, etc, but
> > > it's probably a hold-over from 4.1, as I have the vague idea that
> > > volumes are created on the mgmt server via the plugin now, so whatever
> > > doesn't apply can just be stubbed out (or optionally
> > > extended/reimplemented here, if you don't mind the hosts talking to
> > > the san api).
> > >
> > > There is a difference between attaching new volumes and launching a VM
> > > with existing volumes.  In the latter case, the VM definition that was
> > > passed to the KVM agent includes the disks, (StartCommand).
> > >
> > > I'd be interested in how your pool is defined for Xen, I imagine it
> > > would need to be kept the same. Is it just a definition to the SAN
> > > (ip address or some such, port number) and perhaps a volume pool name?
> > >
> > > > If there is a way for me to update the ACL list on the SAN to have
> > only a
> > > > single KVM host have access to the volume, that would be ideal.
> > >
> > > That depends on your SAN API.  I was under the impression that the
> > > storage plugin framework allowed for acls, or for you to do whatever
> > > you want for create/attach/delete/snapshot, etc. You'd just call your
> > > SAN API with the host info for the ACLs prior to when the disk is
> > > attached (or the VM is started).  I'd have to look more at the
> > > framework to know the details, in 4.1 I would do this in
> > > getPhysicalDisk just prior to connecting up the LUN.
> > >
> > >
> > > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > > <mi...@solidfire.com> wrote:
> > > > OK, yeah, the ACL part will be interesting. That is a bit different
> > from
> > > how
> > > > it works with XenServer and VMware.
> > > >
> > > > Just to give you an idea how it works in 4.2 with XenServer:
> > > >
> > > > * The user creates a CS volume (this is just recorded in the
> > > cloud.volumes
> > > > table).
> > > >
> > > > * The user attaches the volume as a disk to a VM for the first time
> (if
> > > the
> > > > storage allocator picks the SolidFire plug-in, the storage framework
> > > invokes
> > > > a method on the plug-in that creates a volume on the SAN...info like
> > the
> > > IQN
> > > > of the SAN volume is recorded in the DB).
> > > >
> > > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
> > > > determines based on a flag passed in that the storage in question is
> > > > "CloudStack-managed" storage (as opposed to "traditional"
> preallocated
> > > > storage). This tells it to discover the iSCSI target. Once discovered
> > it
> > > > determines if the iSCSI target already contains a storage repository
> > (it
> > > > would if this were a re-attach situation). If it does contain an SR
> > > already,
> > > > then there should already be one VDI, as well. If there is no SR, an
> SR
> > > is
> > > > created and a single VDI is created within it (that takes up about as
> > > much
> > > > space as was requested for the CloudStack volume).
> > > >
> > > > * The normal attach-volume logic continues (it depends on the
> existence
> > > of
> > > > an SR and a VDI).
> > > >
> > > > The VMware case is essentially the same (mainly just substitute
> > datastore
> > > > for SR and VMDK for VDI).
> > > >
> > > > In both cases, all hosts in the cluster have discovered the iSCSI
> > target,
> > > > but only the host that is currently running the VM that is using the
> > VDI
> > > (or
> > > VMDK) is actually using the disk.
> > > >
> > > > Live Migration should be OK because the hypervisors communicate with
> > > > whatever metadata they have on the SR (or datastore).
> > > >
> > > > I see what you're saying with KVM, though.
> > > >
> > > > In that case, the hosts are clustered only in CloudStack's eyes. CS
> > > controls
> > > > Live Migration. You don't really need a clustered filesystem on the
> > LUN.
> > > The
> > > > LUN could be handed over raw to the VM using it.
> > > >
> > > > If there is a way for me to update the ACL list on the SAN to have
> > only a
> > > > single KVM host have access to the volume, that would be ideal.
> > > >
> > > > Also, I agree I'll need to use iscsiadm to discover and log in to the
> > > iSCSI
> > > > target. I'll also need to take the resultant new device and pass it
> > into
> > > the
> > > > VM.
> > > >
> > > > Does this sound reasonable? Please call me out on anything I seem
> > > incorrect
> > > > about. :)
> > > >
> > > > Thanks for all the thought on this, Marcus!
> > > >
> > > >
> > > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <
> shadowsor@gmail.com>
> > > > wrote:
> > > >>
> > > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
> > attach
> > > >> the disk def to the vm. You may need to do your own StorageAdaptor
> and
> > > run
> > > >> iscsiadm commands to accomplish that, depending on how the libvirt
> > iscsi
> > > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
> > works
> > > on
> > > >> xen at the moment, nor is it ideal.
> > > >>
> > > >> Your plugin will handle acls as far as which host can see which luns
> > as
> > > >> well, I remember discussing that months ago, so that a disk won't be
> > > >> connected until the hypervisor has exclusive access, so it will be
> > safe
> > > and
> > > >> fence the disk from rogue nodes that cloudstack loses connectivity
> > > with. It
> > > >> should revoke access to everything but the target host... Except for
> > > during
> > > >> migration but we can discuss that later, there's a migration prep
> > > process
> > > >> where the new host can be added to the acls, and the old host can be
> > > removed
> > > >> post migration.
> > > >>
> > > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> > mike.tutkowski@solidfire.com
> > > >
> > > >> wrote:
> > > >>>
> > > >>> Yeah, that would be ideal.
> > > >>>
> > > >>> So, I would still need to discover the iSCSI target, log in to it,
> > then
> > > >>> figure out what /dev/sdX was created as a result (and leave it as
> is
> > -
> > > do
> > > >>> not format it with any file system...clustered or not). I would
> pass
> > > that
> > > >>> device into the VM.
> > > >>>
> > > >>> Kind of accurate?
> > > >>>
> > > >>>
> > > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> > shadowsor@gmail.com>
> > > >>> wrote:
> > > >>>>
> > > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
> There
> > > are
> > > >>>> ones that work for block devices rather than files. You can piggy
> > > back off
> > > >>>> of the existing disk definitions and attach it to the vm as a
> block
> > > device.
> > > >>>> The definition is an XML string per libvirt XML format. You may
> want
> > > to use
> > > >>>> an alternate path to the disk rather than just /dev/sdx like I
> > > mentioned,
> > > >>>> there are by-id paths to the block devices, as well as other ones
> > > that will
> > > >>>> be consistent and easier for management, not sure how familiar you
> > > are with
> > > >>>> device naming on Linux.
> > > >>>>
> > > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com>
> > > wrote:
> > > >>>>>
> > > >>>>> No, as that would rely on virtualized network/iscsi initiator
> > inside
> > > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
> > > hypervisor) as
> > > >>>>> a disk to the VM, rather than attaching some image file that
> > resides
> > > on a
> > > >>>>> filesystem, mounted on the host, living on a target.
> > > >>>>>
> > > >>>>> Actually, if you plan on the storage supporting live migration I
> > > think
> > > >>>>> this is the only way. You can't put a filesystem on it and mount
> it
> > > in two
> > > >>>>> places to facilitate migration unless its a clustered filesystem,
> > in
> > > which
> > > >>>>> case you're back to shared mount point.
> > > >>>>>
> > > >>>>> As far as I'm aware, the xenserver SR style is basically LVM
> with a
> > > xen
> > > >>>>> specific cluster management, a custom CLVM. They don't use a
> > > filesystem
> > > >>>>> either.
> > > >>>>>
> > > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > > >>>>> <mi...@solidfire.com> wrote:
> > > >>>>>>
> > > >>>>>> When you say, "wire up the lun directly to the vm," do you mean
> > > >>>>>> circumventing the hypervisor? I didn't think we could do that in
> > CS.
> > > >>>>>> OpenStack, on the other hand, always circumvents the hypervisor,
> > as
> > > far as I
> > > >>>>>> know.
> > > >>>>>>
> > > >>>>>>
> > > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > > shadowsor@gmail.com>
> > > >>>>>> wrote:
> > > >>>>>>>
> > > >>>>>>> Better to wire up the lun directly to the vm unless there is a
> > good
> > > >>>>>>> reason not to.
> > > >>>>>>>
> > > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <
> shadowsor@gmail.com>
> > > >>>>>>> wrote:
> > > >>>>>>>>
> > > >>>>>>>> You could do that, but as mentioned I think its a mistake to
> go
> > to
> > > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
> and
> > > then putting
> > > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
> > even
> > > RAW disk
> > > >>>>>>>> image on that filesystem. You'll lose a lot of iops along the
> > > way, and have
> > > >>>>>>>> more overhead with the filesystem and its journaling, etc.
> > > >>>>>>>>
> > > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > > >>>>>>>> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>
> > > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
> CS.
> > > >>>>>>>>>
> > > >>>>>>>>> So, the way people use our SAN with KVM and CS today is by
> > > >>>>>>>>> selecting SharedMountPoint and specifying the location of the
> > > share.
> > > >>>>>>>>>
> > > >>>>>>>>> They can set up their share using Open iSCSI by discovering
> > their
> > > >>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on
> > > their file
> > > >>>>>>>>> system.
> > > >>>>>>>>>
> > > >>>>>>>>> Would it make sense for me to just do that discovery, logging
> > in,
> > > >>>>>>>>> and mounting behind the scenes for them and letting the
> current
> > > code manage
> > > >>>>>>>>> the rest as it currently does?
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > > >>>>>>>>> <sh...@gmail.com> wrote:
> > > >>>>>>>>>>
> > > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
> catch
> > up
> > > >>>>>>>>>> on the work done in KVM, but this is basically just disk
> > > snapshots + memory
> > > >>>>>>>>>> dump. I still think disk snapshots would preferably be
> handled
> > > by the SAN,
> > > >>>>>>>>>> and then memory dumps can go to secondary storage or
> something
> > > else. This is
> > > >>>>>>>>>> relatively new ground with CS and KVM, so we will want to
> see
> > > how others are
> > > >>>>>>>>>> planning theirs.
> > > >>>>>>>>>>
> > > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> > shadowsor@gmail.com
> > > >
> > > >>>>>>>>>> wrote:
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi style
> on
> > > an
> > > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format.
> > > Otherwise you're
> > > >>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
> > > QCOW2 disk image,
> > > >>>>>>>>>>> and that seems unnecessary and a performance killer.
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
> VM,
> > > and
> > > >>>>>>>>>>> handling snapshots on the San side via the storage plugin
> is
> > > best. My
> > > >>>>>>>>>>> impression from the storage plugin refactor was that there
> > was
> > > a snapshot
> > > >>>>>>>>>>> service that would allow the San to handle snapshots.
> > > >>>>>>>>>>>
> > > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > > shadowsor@gmail.com>
> > > >>>>>>>>>>> wrote:
> > > >>>>>>>>>>>>
> > > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
> end,
> > > if
> > > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call
> > > your plugin for
> > > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
> far
> > > as space, that
> > > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
> carve
> > > out luns from a
> > > >>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
> > > independent of the
> > > >>>>>>>>>>>> LUN size the host sees.
> > > >>>>>>>>>>>>
> > > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> Hey Marcus,
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't
> > > work
> > > >>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
> VDI
> > > for
> > > >>>>>>>>>>>>> the snapshot is placed on the same storage repository as
> > the
> > > volume is on.
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> So, what would happen in my case (let's say for XenServer
> > and
> > > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> snapshots
> > > in 4.2) is I'd
> > > >>>>>>>>>>>>> make an iSCSI target that is larger than what the user
> > > requested for the
> > > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
> > > provisions volumes,
> > > >>>>>>>>>>>>> so the space is not actually used unless it needs to be).
> > > The CloudStack
> > > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume
> until a
> > > hypervisor
> > > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the
> > > SAN volume.
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> If this is also how KVM behaves and there is no creation
> of
> > > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if
> > > there were support
> > > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per iSCSI
> > > target), then I
> > > >>>>>>>>>>>>> don't see how using this model will work.
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
> > works
> > > >>>>>>>>>>>>> with DIR?
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> What do you think?
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> Thanks
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
> > today.
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
> well
> > > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> > > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > >>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
> > just
> > > >>>>>>>>>>>>>>> acts like a
> > > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
> > > end-user
> > > >>>>>>>>>>>>>>> is
> > > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
> hosts
> > > can
> > > >>>>>>>>>>>>>>> access,
> > > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
> > > storage.
> > > >>>>>>>>>>>>>>> It could
> > > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
> > > >>>>>>>>>>>>>>> cloudstack just
> > > >>>>>>>>>>>>>>> knows that the provided directory path has VM images.
> > > >>>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> > > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
> same
> > > >>>>>>>>>>>>>>> > time.
> > > >>>>>>>>>>>>>>> > Multiples, in fact.
> > > >>>>>>>>>>>>>>> >
> > > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> > > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> > > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> > > >>>>>>>>>>>>>>> >> -----------------------------------------
> > > >>>>>>>>>>>>>>> >> default              active     yes
> > > >>>>>>>>>>>>>>> >> iSCSI                active     no
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> > > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
> based
> > on
> > > >>>>>>>>>>>>>>> >>> an iSCSI target.
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
> LUN,
> > > so
> > > >>>>>>>>>>>>>>> >>> there would only
> > > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> > (libvirt)
> > > >>>>>>>>>>>>>>> >>> storage pool.
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
> > > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> > > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt
> > does
> > > >>>>>>>>>>>>>>> >>> not support
> > > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
> > > libvirt
> > > >>>>>>>>>>>>>>> >>> supports
> > > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
> since
> > > >>>>>>>>>>>>>>> >>> each one of its
> > > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> > > targets/LUNs).
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> > > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> > > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         String _poolType;
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         }
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         @Override
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         public String toString() {
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>             return _poolType;
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>         }
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>     }
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
> > being
> > > >>>>>>>>>>>>>>> >>>> used, but I'm
> > > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
> > > >>>>>>>>>>>>>>> >>>> selects the
> > > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is
> > > that
> > > >>>>>>>>>>>>>>> >>>> the "netfs" option
> > > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> Thanks!
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
> > > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> > > >>>>>>>>>>>>>>> >>>> wrote:
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>>
> > http://libvirt.org/storage.html#StorageBackendISCSI
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
> server,
> > > and
> > > >>>>>>>>>>>>>>> >>>>> cannot be
> > > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe
> > your
> > > >>>>>>>>>>>>>>> >>>>> plugin will take
> > > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in
> > and
> > > >>>>>>>>>>>>>>> >>>>> hooking it up to
> > > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in
> the
> > > Xen
> > > >>>>>>>>>>>>>>> >>>>> stuff).
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides
> a
> > > 1:1
> > > >>>>>>>>>>>>>>> >>>>> mapping, or if
> > > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as
> a
> > > >>>>>>>>>>>>>>> >>>>> pool. You may need
> > > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
> about
> > > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> > > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your
> own
> > > >>>>>>>>>>>>>>> >>>>> storage adaptor
> > > >>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.
> >  We
> > > >>>>>>>>>>>>>>> >>>>> can cross that
> > > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
> > > >>>>>>>>>>>>>>> >>>>> bindings doc.
> > > >>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/ Normally,
> > > >>>>>>>>>>>>>>> >>>>> you'll see a
> > > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
> that
> > > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> > > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how
> > that
> > > >>>>>>>>>>>>>>> >>>>> is done for
> > > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java
> > code
> > > >>>>>>>>>>>>>>> >>>>> to see if you
> > > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
> > storage
> > > >>>>>>>>>>>>>>> >>>>> pools before you
> > > >>>>>>>>>>>>>>> >>>>> get started.
> > > >>>>>>>>>>>>>>> >>>>>
> > > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more,
> > but
> > > >>>>>>>>>>>>>>> >>>>> > you figure it
> > > >>>>>>>>>>>>>>> >>>>> > supports
> > > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets,
> > > >>>>>>>>>>>>>>> >>>>> > right?
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
> > classes
> > > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> > > >>>>>>>>>>>>>>> >>>>> >> last
> > > >>>>>>>>>>>>>>> >>>>> >> week or so.
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
> Sorensen
> > > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> > > >>>>>>>>>>>>>>> >>>>> >> wrote:
> > > >>>>>>>>>>>>>>> >>>>> >>>
> > > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
> > > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> > > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages
> > for
> > > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> > > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> > > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator
> > > login.
> > > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> > > >>>>>>>>>>>>>>> >>>>> >>> sent
> > > >>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java
> and
> > > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> > > >>>>>>>>>>>>>>> >>>>> >>> storage type
> > > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> > > >>>>>>>>>>>>>>> >>>>> >>>
> > > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
> > > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> > > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release
> I
> > > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> > > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> > > framework
> > > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> > > >>>>>>>>>>>>>>> >>>>> >>>> times
> > > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
> delete
> > > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> > > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> > > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
> > > mapping
> > > >>>>>>>>>>>>>>> >>>>> >>>> between a
> > > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> > > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the
> > > admin
> > > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> > > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
> would
> > > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> > > >>>>>>>>>>>>>>> >>>>> >>>> root and
> > > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
> needed
> > > to
> > > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
> > > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> > > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
> > work
> > > on
> > > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> > > >>>>>>>>>>>>>>> >>>>> >>>> still
> > > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
> will
> > > need
> > > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> > > >>>>>>>>>>>>>>> >>>>> >>>> the
> > > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
> > expect
> > > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> > > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
> this
> > to
> > > >>>>>>>>>>>>>>> >>>>> >>>> work?
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> > > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> > > >>>>>>>>>>>>>>> >>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>> >>>> --
> > > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> > > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >>
> > > >>>>>>>>>>>>>>> >>>>> >> --
> > > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> > > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> >
> > > >>>>>>>>>>>>>>> >>>>> > --
> > > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> > > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>>
> > > >>>>>>>>>>>>>>> >>>> --
> > > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
> > > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>>
> > > >>>>>>>>>>>>>>> >>> --
> > > >>>>>>>>>>>>>>> >>> Mike Tutkowski
> > > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >>> o: 303.746.7302
> > > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >>
> > > >>>>>>>>>>>>>>> >> --
> > > >>>>>>>>>>>>>>> >> Mike Tutkowski
> > > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>>> >> o: 303.746.7302
> > > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>>
> > > >>>>>>>>>>>>>> --
> > > >>>>>>>>>>>>>> Mike Tutkowski
> > > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>>> o: 303.746.7302
> > > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>>
> > > >>>>>>>>>>>>> --
> > > >>>>>>>>>>>>> Mike Tutkowski
> > > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>>>>>> o: 303.746.7302
> > > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> --
> > > >>>>>>>>> Mike Tutkowski
> > > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>>>>> e: mike.tutkowski@solidfire.com
> > > >>>>>>>>> o: 303.746.7302
> > > >>>>>>>>> Advancing the way the world uses the cloud™
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>>
> > > >>>>>> --
> > > >>>>>> Mike Tutkowski
> > > >>>>>> Senior CloudStack Developer, SolidFire Inc.
> > > >>>>>> e: mike.tutkowski@solidfire.com
> > > >>>>>> o: 303.746.7302
> > > >>>>>> Advancing the way the world uses the cloud™
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>> --
> > > >>> Mike Tutkowski
> > > >>> Senior CloudStack Developer, SolidFire Inc.
> > > >>> e: mike.tutkowski@solidfire.com
> > > >>> o: 303.746.7302
> > > >>> Advancing the way the world uses the cloud™
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Mike Tutkowski
> > > > Senior CloudStack Developer, SolidFire Inc.
> > > > e: mike.tutkowski@solidfire.com
> > > > o: 303.746.7302
> > > > Advancing the way the world uses the cloud™
> > >
> >
> >
> >
> > --
> > *Mike Tutkowski*
> > *Senior CloudStack Developer, SolidFire Inc.*
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the
> > cloud<http://solidfire.com/solution/overview/?video=play>
> > *™*
> >
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
What do you do with Xen? I imagine the user enters the SAN details when
registering the pool? And the pool details are basically just instructions on
how to log into a target, correct?

You can choose to log in a KVM host to the target during createStoragePool
and save the pool in a map, or just save the pool info in a map for future
reference by uuid, for when you do need to log in. The createStoragePool
then just becomes a way to save the pool info to the agent. Personally, I'd
log in on the pool create and look/scan for specific luns when they're
needed, but I haven't thought it through thoroughly. I just say that mainly
because login only happens once, the first time the pool is used, and every
other storage command is about discovering new luns or maybe
deleting/disconnecting luns no longer needed. On the other hand, you could
do all of the above: log in on pool create, then also check if you're
logged in on other commands and log in if you've lost connection.
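
A minimal sketch of that idea (assumed names only, not existing CloudStack
code): createStoragePool just records the pool details keyed by uuid so later
calls can look them up, and the actual iscsiadm login can happen here or be
deferred until a specific LUN is needed:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class PoolMapSketch {
        static class PoolInfo {
            final String uuid, host, path;
            final int port;
            PoolInfo(String uuid, String host, int port, String path) {
                this.uuid = uuid; this.host = host;
                this.port = port; this.path = path;
            }
        }

        private final Map<String, PoolInfo> pools = new ConcurrentHashMap<>();

        // Called often by the agent, so it must be idempotent.
        public PoolInfo createStoragePool(String uuid, String host,
                                          int port, String path) {
            PoolInfo pool = pools.computeIfAbsent(uuid,
                    u -> new PoolInfo(u, host, port, path));
            // Optionally log in to the target here (e.g. shell out to
            // "iscsiadm -m node -T <iqn> -p <host>:<port> --login"),
            // or wait until a specific LUN is actually requested.
            return pool;
        }

        public PoolInfo getStoragePool(String uuid) {
            return pools.get(uuid);
        }
    }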

With Xen, what does your registered pool show in the UI for avail/used
capacity, and how does it get that info? I assume there is some sort of
disk pool that the luns are carved from, and that your plugin is called to
talk to the SAN and expose to the user how much of that pool has been
allocated. Knowing how you already solved these problems with Xen will help
figure out what to do with KVM.

If this is the case, I think the plugin can continue to handle it rather
than getting details from the agent. I'm not sure if that means nulls are
OK for these on the agent side or what, I need to look at the storage
plugin arch more closely.
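
If the agent does end up owning those numbers, one hedged possibility (the
SanApiClient interface and its methods below are made-up placeholders for
whatever the SAN API actually exposes) is for the custom KVM pool to report
SAN-wide totals, so the UI figures come straight from the SAN:

    public class SanBackedPoolSketch {
        interface SanApiClient {
            long totalBytes();      // raw capacity of the SAN's disk pool
            long allocatedBytes();  // bytes already carved out into LUNs
        }

        private final SanApiClient san;

        public SanBackedPoolSketch(SanApiClient san) {
            this.san = san;
        }

        public long getCapacity()  { return san.totalBytes(); }
        public long getUsed()      { return san.allocatedBytes(); }
        public long getAvailable() {
            return san.totalBytes() - san.allocatedBytes();
        }
    }
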
On Sep 17, 2013 7:08 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Hey Marcus,
>
> I'm reviewing your e-mails as I implement the necessary methods in new
> classes.
>
> "So, referencing StorageAdaptor.java, createStoragePool accepts all of
> the pool data (host, port, name, path) which would be used to log the
> host into the initiator."
>
> Can you tell me, in my case, since a storage pool (primary storage) is
> actually the SAN, I wouldn't really be logging into anything at this point,
> correct?
>
> Also, what kind of capacity, available, and used bytes make sense to report
> for KVMStoragePool (since KVMStoragePool represents the SAN in my case and
> not an individual LUN)?
>
> Thanks!
>
>
> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <shadowsor@gmail.com
> >wrote:
>
> > Ok, KVM will be close to that, of course, because only the hypervisor
> > classes differ, the rest is all mgmt server. Creating a volume is just
> > a db entry until it's deployed for the first time. AttachVolumeCommand
> > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> > StorageAdaptor) to log in the host to the target and then you have a
> > block device.  Maybe libvirt will do that for you, but my quick read
> > made it sound like the iscsi libvirt pool type is actually a pool, not
> > a lun or volume, so you'll need to figure out if that works or if
> > you'll have to use iscsiadm commands.
> >
> > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> > doesn't really manage your pool the way you want), you're going to
> > have to create a version of KVMStoragePool class and a StorageAdaptor
> > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> > implementing all of the methods, then in KVMStorageManager.java
> > there's a "_storageMapper" map. This is used to select the correct
> > adaptor, you can see in this file that every call first pulls the
> > correct adaptor out of this map via getStorageAdaptor. So you can see
> > a comment in this file that says "add other storage adaptors here",
> > where it puts to this map, this is where you'd register your adaptor.
> >
> > So, referencing StorageAdaptor.java, createStoragePool accepts all of
> > the pool data (host, port, name, path) which would be used to log the
> > host into the initiator. I *believe* the method getPhysicalDisk will
> > need to do the work of attaching the lun.  AttachVolumeCommand calls
> > this and then creates the XML diskdef and attaches it to the VM. Now,
> > one thing you need to know is that createStoragePool is called often,
> > sometimes just to make sure the pool is there. You may want to create
> > a map in your adaptor class and keep track of pools that have been
> > created, LibvirtStorageAdaptor doesn't have to do this because it asks
> > libvirt about which storage pools exist. There are also calls to
> > refresh the pool stats, and all of the other calls can be seen in the
> > StorageAdaptor as well. There's a createPhysical disk, clone, etc, but
> > it's probably a hold-over from 4.1, as I have the vague idea that
> > volumes are created on the mgmt server via the plugin now, so whatever
> > doesn't apply can just be stubbed out (or optionally
> > extended/reimplemented here, if you don't mind the hosts talking to
> > the san api).
> >
> > There is a difference between attaching new volumes and launching a VM
> > with existing volumes.  In the latter case, the VM definition that was
> > passed to the KVM agent includes the disks, (StartCommand).
> >
> > I'd be interested in how your pool is defined for Xen, I imagine it
> > would need to be kept the same. Is it just a definition to the SAN
> > (ip address or some such, port number) and perhaps a volume pool name?
> >
> > > If there is a way for me to update the ACL list on the SAN to have
> only a
> > > single KVM host have access to the volume, that would be ideal.
> >
> > That depends on your SAN API.  I was under the impression that the
> > storage plugin framework allowed for acls, or for you to do whatever
> > you want for create/attach/delete/snapshot, etc. You'd just call your
> > SAN API with the host info for the ACLs prior to when the disk is
> > attached (or the VM is started).  I'd have to look more at the
> > framework to know the details, in 4.1 I would do this in
> > getPhysicalDisk just prior to connecting up the LUN.
> >
> >
> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > <mi...@solidfire.com> wrote:
> > > OK, yeah, the ACL part will be interesting. That is a bit different
> from
> > how
> > > it works with XenServer and VMware.
> > >
> > > Just to give you an idea how it works in 4.2 with XenServer:
> > >
> > > * The user creates a CS volume (this is just recorded in the
> > cloud.volumes
> > > table).
> > >
> > > * The user attaches the volume as a disk to a VM for the first time (if
> > the
> > > storage allocator picks the SolidFire plug-in, the storage framework
> > invokes
> > > a method on the plug-in that creates a volume on the SAN...info like
> the
> > IQN
> > > of the SAN volume is recorded in the DB).
> > >
> > > * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
> > > determines based on a flag passed in that the storage in question is
> > > "CloudStack-managed" storage (as opposed to "traditional" preallocated
> > > storage). This tells it to discover the iSCSI target. Once discovered
> it
> > > determines if the iSCSI target already contains a storage repository
> (it
> > > would if this were a re-attach situation). If it does contain an SR
> > already,
> > > then there should already be one VDI, as well. If there is no SR, an SR
> > is
> > > created and a single VDI is created within it (that takes up about as
> > much
> > > space as was requested for the CloudStack volume).
> > >
> > > * The normal attach-volume logic continues (it depends on the existence
> > of
> > > an SR and a VDI).
> > >
> > > The VMware case is essentially the same (mainly just substitute
> datastore
> > > for SR and VMDK for VDI).
> > >
> > > In both cases, all hosts in the cluster have discovered the iSCSI
> target,
> > > but only the host that is currently running the VM that is using the
> VDI
> > (or
> > > VMDK) is actually using the disk.
> > >
> > > Live Migration should be OK because the hypervisors communicate with
> > > whatever metadata they have on the SR (or datastore).
> > >
> > > I see what you're saying with KVM, though.
> > >
> > > In that case, the hosts are clustered only in CloudStack's eyes. CS
> > controls
> > > Live Migration. You don't really need a clustered filesystem on the
> LUN.
> > The
> > > LUN could be handed over raw to the VM using it.
> > >
> > > If there is a way for me to update the ACL list on the SAN to have
> only a
> > > single KVM host have access to the volume, that would be ideal.
> > >
> > > Also, I agree I'll need to use iscsiadm to discover and log in to the
> > iSCSI
> > > target. I'll also need to take the resultant new device and pass it
> into
> > the
> > > VM.
> > >
> > > Does this sound reasonable? Please call me out on anything I seem
> > incorrect
> > > about. :)
> > >
> > > Thanks for all the thought on this, Marcus!
> > >
> > >
> > > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <sh...@gmail.com>
> > > wrote:
> > >>
> > >> Perfect. You'll have a domain def ( the VM), a disk def, and the
> attach
> > >> the disk def to the vm. You may need to do your own StorageAdaptor and
> > run
> > >> iscsiadm commands to accomplish that, depending on how the libvirt
> iscsi
> > >> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
> works
> > on
> > >> xen at the moment, nor is it ideal.
> > >>
> > >> Your plugin will handle acls as far as which host can see which luns
> as
> > >> well, I remember discussing that months ago, so that a disk won't be
> > >> connected until the hypervisor has exclusive access, so it will be
> safe
> > and
> > >> fence the disk from rogue nodes that cloudstack loses connectivity
> > with. It
> > >> should revoke access to everything but the target host... Except for
> > during
> > >> migration but we can discuss that later, there's a migration prep
> > process
> > >> where the new host can be added to the acls, and the old host can be
> > removed
> > >> post migration.
> > >>
> > >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com
> > >
> > >> wrote:
> > >>>
> > >>> Yeah, that would be ideal.
> > >>>
> > >>> So, I would still need to discover the iSCSI target, log in to it,
> then
> > >>> figure out what /dev/sdX was created as a result (and leave it as is
> -
> > do
> > >>> not format it with any file system...clustered or not). I would pass
> > that
> > >>> device into the VM.
> > >>>
> > >>> Kind of accurate?
> > >>>
> > >>>
> > >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
> shadowsor@gmail.com>
> > >>> wrote:
> > >>>>
> > >>>> Look in LibvirtVMDef.java (I think) for the disk definitions. There
> > are
> > >>>> ones that work for block devices rather than files. You can piggy
> > back off
> > >>>> of the existing disk definitions and attach it to the vm as a block
> > device.
> > >>>> The definition is an XML string per libvirt XML format. You may want
> > to use
> > >>>> an alternate path to the disk rather than just /dev/sdx like I
> > mentioned,
> > >>>> there are by-id paths to the block devices, as well as other ones
> > that will
> > >>>> be consistent and easier for management, not sure how familiar you
> > are with
> > >>>> device naming on Linux.
> > >>>>
> > >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com>
> > wrote:
> > >>>>>
> > >>>>> No, as that would rely on virtualized network/iscsi initiator
> inside
> > >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
> > hypervisor) as
> > >>>>> a disk to the VM, rather than attaching some image file that
> resides
> > on a
> > >>>>> filesystem, mounted on the host, living on a target.
> > >>>>>
> > >>>>> Actually, if you plan on the storage supporting live migration I
> > think
> > >>>>> this is the only way. You can't put a filesystem on it and mount it
> > in two
> > >>>>> places to facilitate migration unless its a clustered filesystem,
> in
> > which
> > >>>>> case you're back to shared mount point.
> > >>>>>
> > >>>>> As far as I'm aware, the xenserver SR style is basically LVM with a
> > xen
> > >>>>> specific cluster management, a custom CLVM. They don't use a
> > filesystem
> > >>>>> either.
> > >>>>>
> > >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> > >>>>> <mi...@solidfire.com> wrote:
> > >>>>>>
> > >>>>>> When you say, "wire up the lun directly to the vm," do you mean
> > >>>>>> circumventing the hypervisor? I didn't think we could do that in
> CS.
> > >>>>>> OpenStack, on the other hand, always circumvents the hypervisor,
> as
> > far as I
> > >>>>>> know.
> > >>>>>>
> > >>>>>>
> > >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> > shadowsor@gmail.com>
> > >>>>>> wrote:
> > >>>>>>>
> > >>>>>>> Better to wire up the lun directly to the vm unless there is a
> good
> > >>>>>>> reason not to.
> > >>>>>>>
> > >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com>
> > >>>>>>> wrote:
> > >>>>>>>>
> > >>>>>>>> You could do that, but as mentioned I think its a mistake to go
> to
> > >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns and
> > then putting
> > >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
> even
> > RAW disk
> > >>>>>>>> image on that filesystem. You'll lose a lot of iops along the
> > way, and have
> > >>>>>>>> more overhead with the filesystem and its journaling, etc.
> > >>>>>>>>
> > >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> > >>>>>>>> <mi...@solidfire.com> wrote:
> > >>>>>>>>>
> > >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
> > >>>>>>>>>
> > >>>>>>>>> So, the way people use our SAN with KVM and CS today is by
> > >>>>>>>>> selecting SharedMountPoint and specifying the location of the
> > share.
> > >>>>>>>>>
> > >>>>>>>>> They can set up their share using Open iSCSI by discovering
> their
> > >>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on
> > their file
> > >>>>>>>>> system.
> > >>>>>>>>>
> > >>>>>>>>> Would it make sense for me to just do that discovery, logging
> in,
> > >>>>>>>>> and mounting behind the scenes for them and letting the current
> > code manage
> > >>>>>>>>> the rest as it currently does?
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> > >>>>>>>>> <sh...@gmail.com> wrote:
> > >>>>>>>>>>
> > >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch
> up
> > >>>>>>>>>> on the work done in KVM, but this is basically just disk
> > snapshots + memory
> > >>>>>>>>>> dump. I still think disk snapshots would preferably be handled
> > by the SAN,
> > >>>>>>>>>> and then memory dumps can go to secondary storage or something
> > else. This is
> > >>>>>>>>>> relatively new ground with CS and KVM, so we will want to see
> > how others are
> > >>>>>>>>>> planning theirs.
> > >>>>>>>>>>
> > >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> shadowsor@gmail.com
> > >
> > >>>>>>>>>> wrote:
> > >>>>>>>>>>>
> > >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi style on
> > an
> > >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format.
> > Otherwise you're
> > >>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
> > QCOW2 disk image,
> > >>>>>>>>>>> and that seems unnecessary and a performance killer.
> > >>>>>>>>>>>
> > >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM,
> > and
> > >>>>>>>>>>> handling snapshots on the San side via the storage plugin is
> > best. My
> > >>>>>>>>>>> impression from the storage plugin refactor was that there
> was
> > a snapshot
> > >>>>>>>>>>> service that would allow the San to handle snapshots.
> > >>>>>>>>>>>
> > >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> > shadowsor@gmail.com>
> > >>>>>>>>>>> wrote:
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back end,
> > if
> > >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call
> > your plugin for
> > >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far
> > as space, that
> > >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we carve
> > out luns from a
> > >>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
> > independent of the
> > >>>>>>>>>>>> LUN size the host sees.
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> > >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> Hey Marcus,
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't
> > work
> > >>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI
> > for
> > >>>>>>>>>>>>> the snapshot is placed on the same storage repository as
> the
> > volume is on.
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> Same idea for VMware, I believe.
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> So, what would happen in my case (let's say for XenServer
> and
> > >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots
> > in 4.2) is I'd
> > >>>>>>>>>>>>> make an iSCSI target that is larger than what the user
> > requested for the
> > >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
> > provisions volumes,
> > >>>>>>>>>>>>> so the space is not actually used unless it needs to be).
> > The CloudStack
> > >>>>>>>>>>>>> volume would be the only "object" on the SAN volume until a
> > hypervisor
> > >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the
> > SAN volume.
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> If this is also how KVM behaves and there is no creation of
> > >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if
> > there were support
> > >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per iSCSI
> > target), then I
> > >>>>>>>>>>>>> don't see how using this model will work.
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
> works
> > >>>>>>>>>>>>> with DIR?
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> What do you think?
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> Thanks
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> > >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
> today.
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as well
> > >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> > >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
> just
> > >>>>>>>>>>>>>>> acts like a
> > >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
> > end-user
> > >>>>>>>>>>>>>>> is
> > >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM hosts
> > can
> > >>>>>>>>>>>>>>> access,
> > >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
> > storage.
> > >>>>>>>>>>>>>>> It could
> > >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
> > >>>>>>>>>>>>>>> cloudstack just
> > >>>>>>>>>>>>>>> knows that the provided directory path has VM images.
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> > >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> > >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same
> > >>>>>>>>>>>>>>> > time.
> > >>>>>>>>>>>>>>> > Multiples, in fact.
> > >>>>>>>>>>>>>>> >
> > >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> > >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> > >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> > >>>>>>>>>>>>>>> >>
> > >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> > >>>>>>>>>>>>>>> >> Name                 State      Autostart
> > >>>>>>>>>>>>>>> >> -----------------------------------------
> > >>>>>>>>>>>>>>> >> default              active     yes
> > >>>>>>>>>>>>>>> >> iSCSI                active     no
> > >>>>>>>>>>>>>>> >>
> > >>>>>>>>>>>>>>> >>
> > >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> > >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>> I see what you're saying now.
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based
> on
> > >>>>>>>>>>>>>>> >>> an iSCSI target.
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN,
> > so
> > >>>>>>>>>>>>>>> >>> there would only
> > >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> (libvirt)
> > >>>>>>>>>>>>>>> >>> storage pool.
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
> > >>>>>>>>>>>>>>> >>> targets/LUNs on the
> > >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt
> does
> > >>>>>>>>>>>>>>> >>> not support
> > >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
> > libvirt
> > >>>>>>>>>>>>>>> >>> supports
> > >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since
> > >>>>>>>>>>>>>>> >>> each one of its
> > >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> > targets/LUNs).
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
> > >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>     public enum poolType {
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> > >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> > >>>>>>>>>>>>>>> >>>> RBD("rbd");
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>         String _poolType;
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>         }
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>         @Override
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>         public String toString() {
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>             return _poolType;
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>         }
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>     }
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
> being
> > >>>>>>>>>>>>>>> >>>> used, but I'm
> > >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
> > >>>>>>>>>>>>>>> >>>> selects the
> > >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is
> > that
> > >>>>>>>>>>>>>>> >>>> the "netfs" option
> > >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>> Thanks!
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
> > >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> > >>>>>>>>>>>>>>> >>>> wrote:
> > >>>>>>>>>>>>>>> >>>>>
> > >>>>>>>>>>>>>>> >>>>> Take a look at this:
> > >>>>>>>>>>>>>>> >>>>>
> > >>>>>>>>>>>>>>> >>>>>
> http://libvirt.org/storage.html#StorageBackendISCSI
> > >>>>>>>>>>>>>>> >>>>>
> > >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server,
> > and
> > >>>>>>>>>>>>>>> >>>>> cannot be
> > >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe
> your
> > >>>>>>>>>>>>>>> >>>>> plugin will take
> > >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in
> and
> > >>>>>>>>>>>>>>> >>>>> hooking it up to
> > >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the
> > Xen
> > >>>>>>>>>>>>>>> >>>>> stuff).
> > >>>>>>>>>>>>>>> >>>>>
> > >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a
> > 1:1
> > >>>>>>>>>>>>>>> >>>>> mapping, or if
> > >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a
> > >>>>>>>>>>>>>>> >>>>> pool. You may need
> > >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more about
> > >>>>>>>>>>>>>>> >>>>> this. Let us know.
> > >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own
> > >>>>>>>>>>>>>>> >>>>> storage adaptor
> > >>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.
>  We
> > >>>>>>>>>>>>>>> >>>>> can cross that
> > >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> > >>>>>>>>>>>>>>> >>>>>
> > >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
> > >>>>>>>>>>>>>>> >>>>> bindings doc.
> > >>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/ Normally,
> > >>>>>>>>>>>>>>> >>>>> you'll see a
> > >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to that
> > >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> > >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how
> that
> > >>>>>>>>>>>>>>> >>>>> is done for
> > >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java
> code
> > >>>>>>>>>>>>>>> >>>>> to see if you
> > >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
> storage
> > >>>>>>>>>>>>>>> >>>>> pools before you
> > >>>>>>>>>>>>>>> >>>>> get started.
> > >>>>>>>>>>>>>>> >>>>>
> > >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
> > >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> > >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more,
> but
> > >>>>>>>>>>>>>>> >>>>> > you figure it
> > >>>>>>>>>>>>>>> >>>>> > supports
> > >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets,
> > >>>>>>>>>>>>>>> >>>>> > right?
> > >>>>>>>>>>>>>>> >>>>> >
> > >>>>>>>>>>>>>>> >>>>> >
> > >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
> > >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> > >>>>>>>>>>>>>>> >>>>> >>
> > >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> > >>>>>>>>>>>>>>> >>>>> >>
> > >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
> classes
> > >>>>>>>>>>>>>>> >>>>> >> you pointed out
> > >>>>>>>>>>>>>>> >>>>> >> last
> > >>>>>>>>>>>>>>> >>>>> >> week or so.
> > >>>>>>>>>>>>>>> >>>>> >>
> > >>>>>>>>>>>>>>> >>>>> >>
> > >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
> > >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> > >>>>>>>>>>>>>>> >>>>> >> wrote:
> > >>>>>>>>>>>>>>> >>>>> >>>
> > >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
> > >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> > >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages
> for
> > >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> > >>>>>>>>>>>>>>> >>>>> >>> you'd call
> > >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator
> > login.
> > >>>>>>>>>>>>>>> >>>>> >>> See the info I
> > >>>>>>>>>>>>>>> >>>>> >>> sent
> > >>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
> > >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> > >>>>>>>>>>>>>>> >>>>> >>> storage type
> > >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> > >>>>>>>>>>>>>>> >>>>> >>>
> > >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
> > >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> > >>>>>>>>>>>>>>> >>>>> >>> wrote:
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
> > >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> > >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> > framework
> > >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> > >>>>>>>>>>>>>>> >>>>> >>>> times
> > >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete
> > >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> > >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> > >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
> > mapping
> > >>>>>>>>>>>>>>> >>>>> >>>> between a
> > >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> > >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the
> > admin
> > >>>>>>>>>>>>>>> >>>>> >>>> to create large
> > >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would
> > >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> > >>>>>>>>>>>>>>> >>>>> >>>> root and
> > >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed
> > to
> > >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> > >>>>>>>>>>>>>>> >>>>> >>>> the
> > >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
> > >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> > >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
> work
> > on
> > >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> > >>>>>>>>>>>>>>> >>>>> >>>> still
> > >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will
> > need
> > >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> > >>>>>>>>>>>>>>> >>>>> >>>> the
> > >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
> expect
> > >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> > >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this
> to
> > >>>>>>>>>>>>>>> >>>>> >>>> work?
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> > >>>>>>>>>>>>>>> >>>>> >>>> Mike
> > >>>>>>>>>>>>>>> >>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>> >>>> --
> > >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> > >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> > >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> > >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> > >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
> > >>>>>>>>>>>>>>> >>>>> >>
> > >>>>>>>>>>>>>>> >>>>> >>
> > >>>>>>>>>>>>>>> >>>>> >>
> > >>>>>>>>>>>>>>> >>>>> >>
> > >>>>>>>>>>>>>>> >>>>> >> --
> > >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> > >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> > >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> > >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
> > >>>>>>>>>>>>>>> >>>>> >
> > >>>>>>>>>>>>>>> >>>>> >
> > >>>>>>>>>>>>>>> >>>>> >
> > >>>>>>>>>>>>>>> >>>>> >
> > >>>>>>>>>>>>>>> >>>>> > --
> > >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> > >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
> > >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> > >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> > >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>>
> > >>>>>>>>>>>>>>> >>>> --
> > >>>>>>>>>>>>>>> >>>> Mike Tutkowski
> > >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> > >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> > >>>>>>>>>>>>>>> >>>> o: 303.746.7302
> > >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>>
> > >>>>>>>>>>>>>>> >>> --
> > >>>>>>>>>>>>>>> >>> Mike Tutkowski
> > >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> > >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> > >>>>>>>>>>>>>>> >>> o: 303.746.7302
> > >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> > >>>>>>>>>>>>>>> >>
> > >>>>>>>>>>>>>>> >>
> > >>>>>>>>>>>>>>> >>
> > >>>>>>>>>>>>>>> >>
> > >>>>>>>>>>>>>>> >> --
> > >>>>>>>>>>>>>>> >> Mike Tutkowski
> > >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> > >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> > >>>>>>>>>>>>>>> >> o: 303.746.7302
> > >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> --
> > >>>>>>>>>>>>>> Mike Tutkowski
> > >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > >>>>>>>>>>>>>> o: 303.746.7302
> > >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> --
> > >>>>>>>>>>>>> Mike Tutkowski
> > >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> > >>>>>>>>>>>>> o: 303.746.7302
> > >>>>>>>>>>>>> Advancing the way the world uses the cloud™
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>> --
> > >>>>>>>>> Mike Tutkowski
> > >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> > >>>>>>>>> e: mike.tutkowski@solidfire.com
> > >>>>>>>>> o: 303.746.7302
> > >>>>>>>>> Advancing the way the world uses the cloud™
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>> --
> > >>>>>> Mike Tutkowski
> > >>>>>> Senior CloudStack Developer, SolidFire Inc.
> > >>>>>> e: mike.tutkowski@solidfire.com
> > >>>>>> o: 303.746.7302
> > >>>>>> Advancing the way the world uses the cloud™
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> --
> > >>> Mike Tutkowski
> > >>> Senior CloudStack Developer, SolidFire Inc.
> > >>> e: mike.tutkowski@solidfire.com
> > >>> o: 303.746.7302
> > >>> Advancing the way the world uses the cloud™
> > >
> > >
> > >
> > >
> > > --
> > > Mike Tutkowski
> > > Senior CloudStack Developer, SolidFire Inc.
> > > e: mike.tutkowski@solidfire.com
> > > o: 303.746.7302
> > > Advancing the way the world uses the cloud™
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hey Marcus,

I'm reviewing your e-mails as I implement the necessary methods in new
classes.

"So, referencing StorageAdaptor.java, createStoragePool accepts all of
the pool data (host, port, name, path) which would be used to log the
host (initiator) into the target."

Can you tell me: in my case, since a storage pool (primary storage) is
actually the SAN itself, I wouldn't really be logging in to anything at this
point, correct?

Also, what kind of capacity, available, and used bytes make sense to report
for KVMStoragePool (since KVMStoragePool represents the SAN in my case and
not an individual LUN)?
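
For what it's worth, here is roughly what I was picturing (purely
illustrative; it assumes the SolidFire API can report cluster-wide totals,
and that the KVMStoragePool getters are along the lines of
getCapacity()/getUsed()/getAvailable()):

    // Hypothetical sketch: back the pool's stats with SAN-cluster-wide
    // numbers, since one "pool" here is the whole SAN, not an individual LUN.
    public class SolidFireSanStats {
        private final long capacityBytes; // total cluster capacity per SAN API
        private final long usedBytes;     // bytes currently consumed on the SAN

        public SolidFireSanStats(long capacityBytes, long usedBytes) {
            this.capacityBytes = capacityBytes;
            this.usedBytes = usedBytes;
        }

        // These would feed the corresponding KVMStoragePool getters.
        public long getCapacity()  { return capacityBytes; }
        public long getUsed()      { return usedBytes; }
        public long getAvailable() { return capacityBytes - usedBytes; }
    }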

Thanks!


On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Ok, KVM will be close to that, of course, because only the hypervisor
> classes differ, the rest is all mgmt server. Creating a volume is just
> a db entry until it's deployed for the first time. AttachVolumeCommand
> on the agent side (LibvirtStorageAdaptor.java is analogous to
> CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> StorageAdaptor) to log in the host to the target and then you have a
> block device.  Maybe libvirt will do that for you, but my quick read
> made it sound like the iscsi libvirt pool type is actually a pool, not
> a lun or volume, so you'll need to figure out if that works or if
> you'll have to use iscsiadm commands.
>
> If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> doesn't really manage your pool the way you want), you're going to
> have to create a version of KVMStoragePool class and a StorageAdaptor
> class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> implementing all of the methods, then in KVMStorageManager.java
> there's a "_storageMapper" map. This is used to select the correct
> adaptor, you can see in this file that every call first pulls the
> correct adaptor out of this map via getStorageAdaptor. So you can see
> a comment in this file that says "add other storage adaptors here",
> where it puts to this map, this is where you'd register your adaptor.
>
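> Roughly something like this, just to make the wiring concrete (the key and
> class name below are made up; use whatever your pool type actually is):
>
>     // In KVMStorageManager, next to the "add other storage adaptors here"
>     // comment, so getStorageAdaptor hands back your adaptor for your pools:
>     StorageAdaptor solidFireAdaptor = new SolidFireStorageAdaptor();
>     _storageMapper.put("solidfire", solidFireAdaptor);
>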
> So, referencing StorageAdaptor.java, createStoragePool accepts all of
> the pool data (host, port, name, path) which would be used to log the
> host (initiator) into the target. I *believe* the method getPhysicalDisk will
> need to do the work of attaching the lun.  AttachVolumeCommand calls
> this and then creates the XML diskdef and attaches it to the VM. Now,
> one thing you need to know is that createStoragePool is called often,
> sometimes just to make sure the pool is there. You may want to create
> a map in your adaptor class and keep track of pools that have been
> created, LibvirtStorageAdaptor doesn't have to do this because it asks
> libvirt about which storage pools exist. There are also calls to
> refresh the pool stats, and all of the other calls can be seen in the
> StorageAdaptor as well. There's a createPhysicalDisk, clone, etc., but
> it's probably a hold-over from 4.1, as I have the vague idea that
> volumes are created on the mgmt server via the plugin now, so whatever
> doesn't apply can just be stubbed out (or optionally
> extended/reimplemented here, if you don't mind the hosts talking to
> the san api).
>
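> A bare-bones sketch of that pool-tracking piece (method shapes are
> approximate and the helper is made up, but it shows the idea):
>
>     public class SolidFireStorageAdaptor /* would implement StorageAdaptor */ {
>         // pool uuid -> pool object; unlike the libvirt adaptor, nothing else
>         // tracks these pools for us, so cache them across calls.
>         private static final Map<String, KVMStoragePool> _pools =
>                 new ConcurrentHashMap<String, KVMStoragePool>();
>
>         public KVMStoragePool createStoragePool(String uuid, String host,
>                 int port, String path, String userInfo) {
>             KVMStoragePool pool = _pools.get(uuid);
>             if (pool == null) {
>                 // hypothetical helper that builds the pool object from SAN info
>                 pool = buildPoolFromSanInfo(uuid, host, port, path, userInfo);
>                 _pools.put(uuid, pool);
>             }
>             return pool; // safe to call repeatedly
>         }
>
>         public KVMStoragePool getStoragePool(String uuid) {
>             return _pools.get(uuid);
>         }
>
>         // getPhysicalDisk would do the iscsiadm login (or rescan) for the one
>         // LUN backing the requested volume, then hand the block device back
>         // as a RAW KVMPhysicalDisk.
>     }
>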
> There is a difference between attaching new volumes and launching a VM
> with existing volumes.  In the latter case, the VM definition that was
> passed to the KVM agent includes the disks (StartCommand).
>
> I'd be interested in how your pool is defined for Xen; I imagine it
> would need to be kept the same. Is it just a definition to the SAN
> (ip address or some such, port number) and perhaps a volume pool name?
>
> > If there is a way for me to update the ACL list on the SAN to have only a
> > single KVM host have access to the volume, that would be ideal.
>
> That depends on your SAN API.  I was under the impression that the
> storage plugin framework allowed for acls, or for you to do whatever
> you want for create/attach/delete/snapshot, etc. You'd just call your
> SAN API with the host info for the ACLs prior to when the disk is
> attached (or the VM is started).  I'd have to look more at the
> framework to know the details, in 4.1 I would do this in
> getPhysicalDisk just prior to connecting up the LUN.
>
>
> On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > OK, yeah, the ACL part will be interesting. That is a bit different from
> how
> > it works with XenServer and VMware.
> >
> > Just to give you an idea how it works in 4.2 with XenServer:
> >
> > * The user creates a CS volume (this is just recorded in the
> cloud.volumes
> > table).
> >
> > * The user attaches the volume as a disk to a VM for the first time (if
> the
> > storage allocator picks the SolidFire plug-in, the storage framework
> invokes
> > a method on the plug-in that creates a volume on the SAN...info like the
> IQN
> > of the SAN volume is recorded in the DB).
> >
> > * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
> > determines based on a flag passed in that the storage in question is
> > "CloudStack-managed" storage (as opposed to "traditional" preallocated
> > storage). This tells it to discover the iSCSI target. Once discovered it
> > determines if the iSCSI target already contains a storage repository (it
> > would if this were a re-attach situation). If it does contain an SR
> already,
> > then there should already be one VDI, as well. If there is no SR, an SR
> is
> > created and a single VDI is created within it (that takes up about as
> much
> > space as was requested for the CloudStack volume).
> >
> > * The normal attach-volume logic continues (it depends on the existence
> of
> > an SR and a VDI).
> >
> > The VMware case is essentially the same (mainly just substitute datastore
> > for SR and VMDK for VDI).
> >
> > In both cases, all hosts in the cluster have discovered the iSCSI target,
> > but only the host that is currently running the VM that is using the VDI
> (or
> > VMKD) is actually using the disk.
> >
> > Live Migration should be OK because the hypervisors communicate with
> > whatever metadata they have on the SR (or datastore).
> >
> > I see what you're saying with KVM, though.
> >
> > In that case, the hosts are clustered only in CloudStack's eyes. CS
> controls
> > Live Migration. You don't really need a clustered filesystem on the LUN.
> The
> > LUN could be handed over raw to the VM using it.
> >
> > If there is a way for me to update the ACL list on the SAN to have only a
> > single KVM host have access to the volume, that would be ideal.
> >
> > Also, I agree I'll need to use iscsiadm to discover and log in to the
> iSCSI
> > target. I'll also need to take the resultant new device and pass it into
> the
> > VM.
> >
> > Does this sound reasonable? Please call me out on anything I seem
> incorrect
> > about. :)
> >
> > Thanks for all the thought on this, Marcus!
> >
> >
> > On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <sh...@gmail.com>
> > wrote:
> >>
> >> Perfect. You'll have a domain def ( the VM), a disk def, and the attach
> >> the disk def to the vm. You may need to do your own StorageAdaptor and
> run
> >> iscsiadm commands to accomplish that, depending on how the libvirt iscsi
> >> works. My impression is that a 1:1:1 pool/lun/volume isn't how it works
> on
> >> xen at the moment, nor is it ideal.
> >>
> >> Your plugin will handle acls as far as which host can see which luns as
> >> well, I remember discussing that months ago, so that a disk won't be
> >> connected until the hypervisor has exclusive access, so it will be safe
> and
> >> fence the disk from rogue nodes that cloudstack loses connectivity
> with. It
> >> should revoke access to everything but the target host... Except for
> during
> >> migration but we can discuss that later, there's a migration prep
> process
> >> where the new host can be added to the acls, and the old host can be
> removed
> >> post migration.
> >>
> >> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com
> >
> >> wrote:
> >>>
> >>> Yeah, that would be ideal.
> >>>
> >>> So, I would still need to discover the iSCSI target, log in to it, then
> >>> figure out what /dev/sdX was created as a result (and leave it as is -
> do
> >>> not format it with any file system...clustered or not). I would pass
> that
> >>> device into the VM.
> >>>
> >>> Kind of accurate?
> >>>
> >>>
> >>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <sh...@gmail.com>
> >>> wrote:
> >>>>
> >>>> Look in LibvirtVMDef.java (I think) for the disk definitions. There
> are
> >>>> ones that work for block devices rather than files. You can piggy
> back off
> >>>> of the existing disk definitions and attach it to the vm as a block
> device.
> >>>> The definition is an XML string per libvirt XML format. You may want
> to use
> >>>> an alternate path to the disk rather than just /dev/sdx like I
> mentioned,
> >>>> there are by-id paths to the block devices, as well as other ones
> that will
> >>>> be consistent and easier for management, not sure how familiar you
> are with
> >>>> device naming on Linux.
> >>>>
> >>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com>
> wrote:
> >>>>>
> >>>>> No, as that would rely on virtualized network/iscsi initiator inside
> >>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
> hypervisor) as
> >>>>> a disk to the VM, rather than attaching some image file that resides
> on a
> >>>>> filesystem, mounted on the host, living on a target.
> >>>>>
> >>>>> Actually, if you plan on the storage supporting live migration I
> think
> >>>>> this is the only way. You can't put a filesystem on it and mount it
> in two
> >>>>> places to facilitate migration unless its a clustered filesystem, in
> which
> >>>>> case you're back to shared mount point.
> >>>>>
> >>>>> As far as I'm aware, the xenserver SR style is basically LVM with a
> xen
> >>>>> specific cluster management, a custom CLVM. They don't use a
> filesystem
> >>>>> either.
> >>>>>
> >>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> >>>>> <mi...@solidfire.com> wrote:
> >>>>>>
> >>>>>> When you say, "wire up the lun directly to the vm," do you mean
> >>>>>> circumventing the hypervisor? I didn't think we could do that in CS.
> >>>>>> OpenStack, on the other hand, always circumvents the hypervisor, as
> far as I
> >>>>>> know.
> >>>>>>
> >>>>>>
> >>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> shadowsor@gmail.com>
> >>>>>> wrote:
> >>>>>>>
> >>>>>>> Better to wire up the lun directly to the vm unless there is a good
> >>>>>>> reason not to.
> >>>>>>>
> >>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com>
> >>>>>>> wrote:
> >>>>>>>>
> >>>>>>>> You could do that, but as mentioned I think its a mistake to go to
> >>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns and
> then putting
> >>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or even
> RAW disk
> >>>>>>>> image on that filesystem. You'll lose a lot of iops along the
> way, and have
> >>>>>>>> more overhead with the filesystem and its journaling, etc.
> >>>>>>>>
> >>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> >>>>>>>> <mi...@solidfire.com> wrote:
> >>>>>>>>>
> >>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
> >>>>>>>>>
> >>>>>>>>> So, the way people use our SAN with KVM and CS today is by
> >>>>>>>>> selecting SharedMountPoint and specifying the location of the
> share.
> >>>>>>>>>
> >>>>>>>>> They can set up their share using Open iSCSI by discovering their
> >>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on
> their file
> >>>>>>>>> system.
> >>>>>>>>>
> >>>>>>>>> Would it make sense for me to just do that discovery, logging in,
> >>>>>>>>> and mounting behind the scenes for them and letting the current
> code manage
> >>>>>>>>> the rest as it currently does?
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> >>>>>>>>> <sh...@gmail.com> wrote:
> >>>>>>>>>>
> >>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch up
> >>>>>>>>>> on the work done in KVM, but this is basically just disk
> snapshots + memory
> >>>>>>>>>> dump. I still think disk snapshots would preferably be handled
> by the SAN,
> >>>>>>>>>> and then memory dumps can go to secondary storage or something
> else. This is
> >>>>>>>>>> relatively new ground with CS and KVM, so we will want to see
> how others are
> >>>>>>>>>> planning theirs.
> >>>>>>>>>>
> >>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <shadowsor@gmail.com
> >
> >>>>>>>>>> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> Let me back up and say I don't think you'd use a vdi style on
> an
> >>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format.
> Otherwise you're
> >>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
> QCOW2 disk image,
> >>>>>>>>>>> and that seems unnecessary and a performance killer.
> >>>>>>>>>>>
> >>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM,
> and
> >>>>>>>>>>> handling snapshots on the San side via the storage plugin is
> best. My
> >>>>>>>>>>> impression from the storage plugin refactor was that there was
> a snapshot
> >>>>>>>>>>> service that would allow the San to handle snapshots.
> >>>>>>>>>>>
> >>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> shadowsor@gmail.com>
> >>>>>>>>>>> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back end,
> if
> >>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call
> your plugin for
> >>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far
> as space, that
> >>>>>>>>>>>> would depend on how your SAN handles it. With ours, we carve
> out luns from a
> >>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
> independent of the
> >>>>>>>>>>>> LUN size the host sees.
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> >>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Hey Marcus,
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't
> work
> >>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI
> for
> >>>>>>>>>>>>> the snapshot is placed on the same storage repository as the
> volume is on.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Same idea for VMware, I believe.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> So, what would happen in my case (let's say for XenServer and
> >>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots
> in 4.2) is I'd
> >>>>>>>>>>>>> make an iSCSI target that is larger than what the user
> requested for the
> >>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
> provisions volumes,
> >>>>>>>>>>>>> so the space is not actually used unless it needs to be).
> The CloudStack
> >>>>>>>>>>>>> volume would be the only "object" on the SAN volume until a
> hypervisor
> >>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the
> SAN volume.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> If this is also how KVM behaves and there is no creation of
> >>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if
> there were support
> >>>>>>>>>>>>> for this, our SAN currently only allows one LUN per iSCSI
> target), then I
> >>>>>>>>>>>>> don't see how using this model will work.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Perhaps I will have to go enhance the current way this works
> >>>>>>>>>>>>> with DIR?
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> What do you think?
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Thanks
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access today.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> I suppose I could go that route, too, but I might as well
> >>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> >>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it just
> >>>>>>>>>>>>>>> acts like a
> >>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
> end-user
> >>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>> responsible for mounting a file system that all KVM hosts
> can
> >>>>>>>>>>>>>>> access,
> >>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
> storage.
> >>>>>>>>>>>>>>> It could
> >>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
> >>>>>>>>>>>>>>> cloudstack just
> >>>>>>>>>>>>>>> knows that the provided directory path has VM images.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same
> >>>>>>>>>>>>>>> > time.
> >>>>>>>>>>>>>>> > Multiples, in fact.
> >>>>>>>>>>>>>>> >
> >>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> >>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> >>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> >>>>>>>>>>>>>>> >> Name                 State      Autostart
> >>>>>>>>>>>>>>> >> -----------------------------------------
> >>>>>>>>>>>>>>> >> default              active     yes
> >>>>>>>>>>>>>>> >> iSCSI                active     no
> >>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> >>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>> I see what you're saying now.
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on
> >>>>>>>>>>>>>>> >>> an iSCSI target.
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN,
> so
> >>>>>>>>>>>>>>> >>> there would only
> >>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt)
> >>>>>>>>>>>>>>> >>> storage pool.
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
> >>>>>>>>>>>>>>> >>> targets/LUNs on the
> >>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt does
> >>>>>>>>>>>>>>> >>> not support
> >>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
> libvirt
> >>>>>>>>>>>>>>> >>> supports
> >>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since
> >>>>>>>>>>>>>>> >>> each one of its
> >>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> targets/LUNs).
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
> >>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>     public enum poolType {
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> >>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> >>>>>>>>>>>>>>> >>>> RBD("rbd");
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>         String _poolType;
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>         }
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>         @Override
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>         public String toString() {
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>             return _poolType;
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>         }
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>     }
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently being
> >>>>>>>>>>>>>>> >>>> used, but I'm
> >>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
> >>>>>>>>>>>>>>> >>>> selects the
> >>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is
> that
> >>>>>>>>>>>>>>> >>>> the "netfs" option
> >>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>> Thanks!
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
> >>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> >>>>>>>>>>>>>>> >>>> wrote:
> >>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>> >>>>> Take a look at this:
> >>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
> >>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server,
> and
> >>>>>>>>>>>>>>> >>>>> cannot be
> >>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe your
> >>>>>>>>>>>>>>> >>>>> plugin will take
> >>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in and
> >>>>>>>>>>>>>>> >>>>> hooking it up to
> >>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the
> Xen
> >>>>>>>>>>>>>>> >>>>> stuff).
> >>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a
> 1:1
> >>>>>>>>>>>>>>> >>>>> mapping, or if
> >>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a
> >>>>>>>>>>>>>>> >>>>> pool. You may need
> >>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more about
> >>>>>>>>>>>>>>> >>>>> this. Let us know.
> >>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own
> >>>>>>>>>>>>>>> >>>>> storage adaptor
> >>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We
> >>>>>>>>>>>>>>> >>>>> can cross that
> >>>>>>>>>>>>>>> >>>>> bridge when we get there.
> >>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
> >>>>>>>>>>>>>>> >>>>> bindings doc.
> >>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally,
> >>>>>>>>>>>>>>> >>>>> you'll see a
> >>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to that
> >>>>>>>>>>>>>>> >>>>> 'conn' object. You
> >>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how that
> >>>>>>>>>>>>>>> >>>>> is done for
> >>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java code
> >>>>>>>>>>>>>>> >>>>> to see if you
> >>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi storage
> >>>>>>>>>>>>>>> >>>>> pools before you
> >>>>>>>>>>>>>>> >>>>> get started.
> >>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
> >>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but
> >>>>>>>>>>>>>>> >>>>> > you figure it
> >>>>>>>>>>>>>>> >>>>> > supports
> >>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets,
> >>>>>>>>>>>>>>> >>>>> > right?
> >>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
> >>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> >>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the classes
> >>>>>>>>>>>>>>> >>>>> >> you pointed out
> >>>>>>>>>>>>>>> >>>>> >> last
> >>>>>>>>>>>>>>> >>>>> >> week or so.
> >>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
> >>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> >>>>>>>>>>>>>>> >>>>> >> wrote:
> >>>>>>>>>>>>>>> >>>>> >>>
> >>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
> >>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> >>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages for
> >>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> >>>>>>>>>>>>>>> >>>>> >>> you'd call
> >>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator
> login.
> >>>>>>>>>>>>>>> >>>>> >>> See the info I
> >>>>>>>>>>>>>>> >>>>> >>> sent
> >>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
> >>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> >>>>>>>>>>>>>>> >>>>> >>> storage type
> >>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> >>>>>>>>>>>>>>> >>>>> >>>
> >>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
> >>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> >>>>>>>>>>>>>>> >>>>> >>> wrote:
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> Hi,
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
> >>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> >>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> framework
> >>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> >>>>>>>>>>>>>>> >>>>> >>>> times
> >>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete
> >>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> >>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> >>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
> mapping
> >>>>>>>>>>>>>>> >>>>> >>>> between a
> >>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> >>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the
> admin
> >>>>>>>>>>>>>>> >>>>> >>>> to create large
> >>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would
> >>>>>>>>>>>>>>> >>>>> >>>> likely house many
> >>>>>>>>>>>>>>> >>>>> >>>> root and
> >>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed
> to
> >>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> >>>>>>>>>>>>>>> >>>>> >>>> the
> >>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
> >>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> >>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might work
> on
> >>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> >>>>>>>>>>>>>>> >>>>> >>>> still
> >>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will
> need
> >>>>>>>>>>>>>>> >>>>> >>>> to interact with
> >>>>>>>>>>>>>>> >>>>> >>>> the
> >>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to expect
> >>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> >>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this to
> >>>>>>>>>>>>>>> >>>>> >>>> work?
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> >>>>>>>>>>>>>>> >>>>> >>>> Mike
> >>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>> >>>>> >>>> --
> >>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> >>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> >>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>> >>>>> >> --
> >>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> >>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> >>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>> >>>>> > --
> >>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> >>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> >>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>> >>>> --
> >>>>>>>>>>>>>>> >>>> Mike Tutkowski
> >>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>> >>>> o: 303.746.7302
> >>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>> >>> --
> >>>>>>>>>>>>>>> >>> Mike Tutkowski
> >>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>> >>> o: 303.746.7302
> >>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>> >> --
> >>>>>>>>>>>>>>> >> Mike Tutkowski
> >>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>> >> o: 303.746.7302
> >>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> --
> >>>>>>>>>>>>>> Mike Tutkowski
> >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> --
> >>>>>>>>>>>>> Mike Tutkowski
> >>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> --
> >>>>>>>>> Mike Tutkowski
> >>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>> o: 303.746.7302
> >>>>>>>>> Advancing the way the world uses the cloud™
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> Mike Tutkowski
> >>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>> e: mike.tutkowski@solidfire.com
> >>>>>> o: 303.746.7302
> >>>>>> Advancing the way the world uses the cloud™
> >>>
> >>>
> >>>
> >>>
> >>> --
> >>> Mike Tutkowski
> >>> Senior CloudStack Developer, SolidFire Inc.
> >>> e: mike.tutkowski@solidfire.com
> >>> o: 303.746.7302
> >>> Advancing the way the world uses the cloud™
> >
> >
> >
> >
> > --
> > Mike Tutkowski
> > Senior CloudStack Developer, SolidFire Inc.
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the cloud™
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Yes, the by-path stuff will work fine, and is preferred because the
device names will be more likely to match across hosts. I'm not 100%
sure those commands are everything, but they look close enough and you
can probably test them rather easily; I don't play with the commands
daily. You may need to require the user to install the iSCSI initiator
utils in any provided documentation...
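
If it helps, the by-path name can be built from the same pieces the iscsiadm
commands already use; something like this (illustrative, assuming one LUN per
target so the lun index is 0):

    // e.g. /dev/disk/by-path/
    //   ip-192.168.233.10:3260-iscsi-iqn.2012-03.com.solidfire:storagepool1-lun-0
    static String byPathDevice(String portalIp, int port, String iqn, int lun) {
        return "/dev/disk/by-path/ip-" + portalIp + ":" + port
                + "-iscsi-" + iqn + "-lun-" + lun;
    }

That's the path you'd put in the disk XML you attach to the VM, same idea as
the OpenStack snippet you pasted.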

On Tue, Sep 17, 2013 at 12:17 AM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> It looks like these are the main iscsiadm commands I would need (unless you
> see an error below):
>
> // discover iSCSI targets (is there a way to just discover the one IQN I
> want to discover?)
>
> sudo iscsiadm -m discovery -t sendtargets -p 192.168.233.10
>
> // log in to iSCSI target
>
> sudo iscsiadm -m node -T iqn.2012-03.com.solidfire:storagepool1 -p
> 192.168.233.10 -l
>
> // log out of iSCSI target
>
> sudo iscsiadm -m node -T iqn.2012-03.com.solidfire:storagepool1 -p
> 192.168.233.10 -u
>
> // remove the node record for this target/portal (un-discover it)
>
> sudo iscsiadm -m node -T iqn.2012-03.com.solidfire:storagepool1 -p
> 192.168.233.10:3260 -o delete
>
> Do you know if it is OK for me to use the by-path way of identifying the
> newly added SCSI disk when I pass in that XML to attach the disk?
>
> OpenStack is using Libvirt and they appear to use by-path:
>
>         host_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s" %
>                        (iscsi_properties['target_portal'],
>                         iscsi_properties['target_iqn'],
>                         iscsi_properties.get('target_lun', 0)))
>
>
>
>
> On Mon, Sep 16, 2013 at 12:32 PM, Marcus Sorensen <sh...@gmail.com>
> wrote:
>>
>> That's right
>>
>> On Sep 16, 2013 12:31 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>>
>>> I understand what you're saying now, Marcus.
>>>
>>> I wasn't sure if the Libvirt iSCSI Storage Pool was still an option
>>> (looking into that still), but I see what you mean: If it is, we don't need
>>> a new adaptor; otherwise, we do.
>>>
>>> If Libivirt's iSCSI Storage Pool does work, I could update the current
>>> adaptor, if need be, to make use of it.
>>>
>>>
>>> On Mon, Sep 16, 2013 at 12:24 PM, Marcus Sorensen <sh...@gmail.com>
>>> wrote:
>>>>
>>>> Well, you'd use neither of the two pool types, because you are not
>>>> letting libvirt handle the pool, you are doing it with your own pool and
>>>> adaptor class. Libvirt will be unaware of everything but the disk XML you
>>>> attach to a vm. You'd only use those if libvirt's functions were
>>>> advantageous, i.e. if it already did everything you want. Since neither of
>>>> those seem to provide both iscsi and the 1:1 mapping you want that's why we
>>>> are talking about your own pool/adaptor.
>>>>
>>>> You can log into the target via your implementation of getPhysicalDisk
>>>> as you mention in AttachVolumeCommand, or log in during your implementation
>>>> of createStoragePool and simply rescan for luns in getPhysicalDisk.
>>>> Presumably in most cases the host will be logged in already and new luns
>>>> have been created in the meantime.
>>>>
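>>>> In getPhysicalDisk that can stay pretty small; a sketch (assumes open-iscsi
>>>> on the host, and that the volume/pool info gives you the target IQN and
>>>> portal; run() is a made-up exec helper):
>>>>
>>>>     // Option 1: log in to the volume's target on demand
>>>>     run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
>>>>     // Option 2: already logged in from createStoragePool, just pick up
>>>>     // any LUNs created since then
>>>>     run("iscsiadm", "-m", "session", "--rescan");
>>>>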
>>>> On Sep 16, 2013 12:09 PM, "Mike Tutkowski"
>>>> <mi...@solidfire.com> wrote:
>>>>>
>>>>> Hey Marcus,
>>>>>
>>>>> Thanks for that clarification.
>>>>>
>>>>> Sorry if this is a redundant question:
>>>>>
>>>>> When the AttachVolumeCommand comes in, it sounds like we thought the
>>>>> best approach would be for me to discover and log in to the iSCSI target
>>>>> using iscsiadm.
>>>>>
>>>>> This will create a new device: /dev/sdX.
>>>>>
>>>>> We would then pass this new device into the VM (passing XML into the
>>>>> appropriate Libvirt API).
>>>>>
>>>>> If this is an accurate understanding, can you tell me: Do you think we
>>>>> should be using a Disk Storage Pool or an iSCSI Storage Pool?
>>>>>
>>>>> I believe I recall you leaning toward a Disk Storage Pool because we
>>>>> will have already discovered the iSCSI target and, as such, will already
>>>>> have a device to pass into the VM.
>>>>>
>>>>> It seems like either way would work.
>>>>>
>>>>> Maybe I need to study Libvirt's iSCSI Storage Pools more to understand
>>>>> if they would do the work of discovering the iSCSI target for me (and maybe
>>>>> avoid me having to use iscsiadm).
>>>>>
>>>>> Thanks for the clarification! :)
>>>>>
>>>>>
>>>>> On Mon, Sep 16, 2013 at 11:08 AM, Marcus Sorensen <sh...@gmail.com>
>>>>> wrote:
>>>>>>
>>>>>> It will still register the pool.  You still have a primary storage
>>>>>> pool that you registered, whether it's local, cluster or zone wide.
>>>>>> NFS is optionally zone wide as well (I'm assuming customers can launch
>>>>>> your storage only cluster-wide if they choose for resource
>>>>>> partitioning), but it registers the pool in Libvirt prior to use.
>>>>>>
>>>>>> Here's a better explanation of what I meant.  AttachVolumeCommand gets
>>>>>> both pool and volume info. It first looks up the pool:
>>>>>>
>>>>>>     KVMStoragePool primary = _storagePoolMgr.getStoragePool(
>>>>>>                     cmd.getPooltype(),
>>>>>>                     cmd.getPoolUuid());
>>>>>>
>>>>>> Then it looks up the disk from that pool:
>>>>>>
>>>>>>     KVMPhysicalDisk disk =
>>>>>> primary.getPhysicalDisk(cmd.getVolumePath());
>>>>>>
>>>>>> Most of the commands only pass volume info like this (getVolumePath
>>>>>> generally means the uuid of the volume), since it looks up the pool
>>>>>> separately. If you don't save the pool info in a map in your custom
>>>>>> class when createStoragePool is called, then getStoragePool won't be
>>>>>> able to find it. This is a simple thing in your implementation of
>>>>>> createStoragePool, just thought I'd mention it because it is key. Just
>>>>>> create a map of pool uuid and pool object and save them so they're
>>>>>> available across all implementations of that class.
>>>>>>
>>>>>> On Mon, Sep 16, 2013 at 10:43 AM, Mike Tutkowski
>>>>>> <mi...@solidfire.com> wrote:
>>>>>> > Thanks, Marcus
>>>>>> >
>>>>>> > About this:
>>>>>> >
>>>>>> > "When the agent connects to the
>>>>>> > management server, it registers all pools in the cluster with the
>>>>>> > agent."
>>>>>> >
>>>>>> > So, my plug-in allows you to create zone-wide primary storage. This
>>>>>> > just
>>>>>> > means that any cluster can use the SAN (the SAN was registered as
>>>>>> > primary
>>>>>> > storage as opposed to a preallocated volume from the SAN). Once you
>>>>>> > create a
>>>>>> > primary storage based on this plug-in, the storage framework will
>>>>>> > invoke the
>>>>>> > plug-in, as needed, to create and delete volumes on the SAN. For
>>>>>> > example,
>>>>>> > you could have one SolidFire primary storage (zone wide) and
>>>>>> > currently have
>>>>>> > 100 volumes created on the SAN to support it.
>>>>>> >
>>>>>> > In this case, what will the management server be registering with
>>>>>> > the agent
>>>>>> > in ModifyStoragePool? If only the storage pool (primary storage) is
>>>>>> > passed
>>>>>> > in, that will be too vague as it does not contain information on
>>>>>> > what
>>>>>> > volumes have been created for the agent.
>>>>>> >
>>>>>> > Thanks
>>>>>> >
>>>>>> >
>>>>>> > On Sun, Sep 15, 2013 at 11:53 PM, Marcus Sorensen
>>>>>> > <sh...@gmail.com>
>>>>>> > wrote:
>>>>>> >>
>>>>>> >> Yes, see my previous email from the 13th. You can create your own
>>>>>> >> KVMStoragePool class, and StorageAdaptor class, like the libvirt
>>>>>> >> ones
>>>>>> >> have. The previous email outlines how to add your own
>>>>>> >> StorageAdaptor
>>>>>> >> alongside LibvirtStorageAdaptor to take over all of the calls
>>>>>> >> (createStoragePool, getStoragePool, etc). As mentioned,
>>>>>> >> getPhysicalDisk I believe will be the one you use to actually
>>>>>> >> attach a
>>>>>> >> lun.
>>>>>> >>
>>>>>> >> Ignore CreateStoragePoolCommand. When the agent connects to the
>>>>>> >> management server, it registers all pools in the cluster with the
>>>>>> >> agent. It will call ModifyStoragePoolCommand, passing your storage
>>>>>> >> pool object (with all of the settings for your SAN). This in turn
>>>>>> >> calls _storagePoolMgr.createStoragePool, which will route through
>>>>>> >> KVMStoragePoolManager to your storage adapter that you've
>>>>>> >> registered.
>>>>>> >> The last argument to createStoragePool is the pool type, which is
>>>>>> >> used
>>>>>> >> to select a StorageAdaptor.
>>>>>> >>
>>>>>> >> From then on, most calls will only pass the volume info, and the
>>>>>> >> volume will have the uuid of the storage pool. For this reason,
>>>>>> >> your
>>>>>> >> adaptor class needs to have a static Map variable that contains
>>>>>> >> pool
>>>>>> >> uuid and pool object. Whenever they call createStoragePool on your
>>>>>> >> adaptor you add that pool to the map so that subsequent volume
>>>>>> >> calls
>>>>>> >> can look up the pool details for the volume by pool uuid. With the
>>>>>> >> Libvirt adaptor, libvirt keeps track of that for you.
>>>>>> >>
>>>>>> >> When createStoragePool is called, you can log into the iscsi target
>>>>>> >> (or make sure you are already logged in, as it can be called over
>>>>>> >> again at any time), and when attach volume commands are fired off,
>>>>>> >> you
>>>>>> >> can attach individual LUNs that are asked for, or rescan (say that
>>>>>> >> the
>>>>>> >> plugin created a new ACL just prior to calling attach), or whatever
>>>>>> >> is
>>>>>> >> necessary.
>>>>>> >>
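>>>>>> >> The "make sure you are already logged in" part can be a cheap check;
>>>>>> >> sketch only, helper names made up:
>>>>>> >>
>>>>>> >>     // createStoragePool may be re-run at any time, so keep it idempotent:
>>>>>> >>     // only log in if "iscsiadm -m session" doesn't already list the target.
>>>>>> >>     if (!runAndCapture("iscsiadm", "-m", "session").contains(targetIqn)) {
>>>>>> >>         run("iscsiadm", "-m", "node", "-T", targetIqn, "-p", portal, "-l");
>>>>>> >>     }
>>>>>> >>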
>>>>>> >> KVM is a bit more work, but you can do anything you want. Actually,
>>>>>> >> I
>>>>>> >> think you can call host scripts with Xen, but having the agent
>>>>>> >> there
>>>>>> >> that runs your own code gives you the flexibility to do whatever.
>>>>>> >>
>>>>>> >> On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski
>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>> >> > I see right now LibvirtComputingResource.java has the following
>>>>>> >> > method
>>>>>> >> > that
>>>>>> >> > I might be able to leverage (it's probably not called at present
>>>>>> >> > and
>>>>>> >> > would
>>>>>> >> > need to be implemented in my case to discover my iSCSI target and
>>>>>> >> > log in
>>>>>> >> > to
>>>>>> >> > it):
>>>>>> >> >
>>>>>> >> >     protected Answer execute(CreateStoragePoolCommand cmd) {
>>>>>> >> >
>>>>>> >> >         return new Answer(cmd, true, "success");
>>>>>> >> >
>>>>>> >> >     }
>>>>>> >> >
>>>>>> >> > I would probably be able to call the KVMStorageManager to have it
>>>>>> >> > use my
>>>>>> >> > StorageAdaptor to do what's necessary here.
>>>>>> >> >
>>>>>> >> >
>>>>>> >> >
>>>>>> >> >
>>>>>> >> > On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski
>>>>>> >> > <mi...@solidfire.com> wrote:
>>>>>> >> >>
>>>>>> >> >> Hey Marcus,
>>>>>> >> >>
>>>>>> >> >> When I implemented support in the XenServer and VMware plug-ins
>>>>>> >> >> for
>>>>>> >> >> "managed" storage, I started at the execute(AttachVolumeCommand)
>>>>>> >> >> methods in
>>>>>> >> >> both plug-ins.
>>>>>> >> >>
>>>>>> >> >> The code there was changed to check the AttachVolumeCommand
>>>>>> >> >> instance
>>>>>> >> >> for a
>>>>>> >> >> "managed" property.
>>>>>> >> >>
>>>>>> >> >> If managed was false, the normal attach/detach logic would just
>>>>>> >> >> run and
>>>>>> >> >> the volume would be attached or detached.
>>>>>> >> >>
>>>>>> >> >> If managed was true, new 4.2 logic would run to create (let's
>>>>>> >> >> talk
>>>>>> >> >> XenServer here) a new SR and a new VDI inside of that SR (or to
>>>>>> >> >> reattach an
>>>>>> >> >> existing VDI inside an existing SR, if this wasn't the first
>>>>>> >> >> time the
>>>>>> >> >> volume
>>>>>> >> >> was attached). If managed was true and we were detaching the
>>>>>> >> >> volume,
>>>>>> >> >> the SR
>>>>>> >> >> would be detached from the XenServer hosts.
>>>>>> >> >>
>>>>>> >> >> I am currently walking through the execute(AttachVolumeCommand)
>>>>>> >> >> in
>>>>>> >> >> LibvirtComputingResource.java.
>>>>>> >> >>
>>>>>> >> >> I see how the XML is constructed to describe whether a disk
>>>>>> >> >> should be
>>>>>> >> >> attached or detached. I also see how we call in to get a
>>>>>> >> >> StorageAdapter
>>>>>> >> >> (and
>>>>>> >> >> how I will likely need to write a new one of these).
>>>>>> >> >>
>>>>>> >> >> So, talking in XenServer terminology again, I was wondering if
>>>>>> >> >> you
>>>>>> >> >> think
>>>>>> >> >> the approach we took in 4.2 with creating and deleting SRs in
>>>>>> >> >> the
>>>>>> >> >> execute(AttachVolumeCommand) method would work here or if there
>>>>>> >> >> is some
>>>>>> >> >> other way I should be looking at this for KVM?
>>>>>> >> >>
>>>>>> >> >> As it is right now for KVM, storage has to be set up ahead of
>>>>>> >> >> time.
>>>>>> >> >> Assuming this is the case, there probably isn't currently a
>>>>>> >> >> place I can
>>>>>> >> >> easily inject my logic to discover and log in to iSCSI targets.
>>>>>> >> >> This is
>>>>>> >> >> why
>>>>>> >> >> we did it as needed in the execute(AttachVolumeCommand) for
>>>>>> >> >> XenServer
>>>>>> >> >> and
>>>>>> >> >> VMware, but I wanted to see if you have an alternative way that
>>>>>> >> >> might
>>>>>> >> >> be
>>>>>> >> >> better for KVM.
>>>>>> >> >>
>>>>>> >> >> One possible way to do this would be to modify VolumeManagerImpl
>>>>>> >> >> (or
>>>>>> >> >> whatever its equivalent is in 4.3) before it issues an
>>>>>> >> >> attach-volume
>>>>>> >> >> command
>>>>>> >> >> to KVM to check to see if the volume is to be attached to
>>>>>> >> >> managed
>>>>>> >> >> storage.
>>>>>> >> >> If it is, then (before calling the attach-volume command in KVM)
>>>>>> >> >> call
>>>>>> >> >> the
>>>>>> >> >> create-storage-pool command in KVM (or whatever it might be
>>>>>> >> >> called).
>>>>>> >> >>
>>>>>> >> >> Just wanted to get some of your thoughts on this.
>>>>>> >> >>
>>>>>> >> >> Thanks!
>>>>>> >> >>
>>>>>> >> >>
>>>>>> >> >> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski
>>>>>> >> >> <mi...@solidfire.com> wrote:
>>>>>> >> >>>
>>>>>> >> >>> Yeah, I remember that StorageProcessor stuff being put in the
>>>>>> >> >>> codebase
>>>>>> >> >>> and having to merge my code into it in 4.2.
>>>>>> >> >>>
>>>>>> >> >>> Thanks for all the details, Marcus! :)
>>>>>> >> >>>
>>>>>> >> >>> I can start digging into what you were talking about now.
>>>>>> >> >>>
>>>>>> >> >>>
>>>>>> >> >>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen
>>>>>> >> >>> <sh...@gmail.com>
>>>>>> >> >>> wrote:
>>>>>> >> >>>>
>>>>>> >> >>>> Looks like things might be slightly different now in 4.2, with
>>>>>> >> >>>> KVMStorageProcessor.java in the mix. This looks more or less
>>>>>> >> >>>> like some
>>>>>> >> >>>> of the commands were ripped out verbatim from
>>>>>> >> >>>> LibvirtComputingResource
>>>>>> >> >>>> and placed here, so in general what I've said is probably
>>>>>> >> >>>> still true,
>>>>>> >> >>>> just that the location of things like AttachVolumeCommand
>>>>>> >> >>>> might be
>>>>>> >> >>>> different, in this file rather than
>>>>>> >> >>>> LibvirtComputingResource.java.
>>>>>> >> >>>>
>>>>>> >> >>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen
>>>>>> >> >>>> <sh...@gmail.com>
>>>>>> >> >>>> wrote:
>>>>>> >> >>>> > Ok, KVM will be close to that, of course, because only the
>>>>>> >> >>>> > hypervisor
>>>>>> >> >>>> > classes differ, the rest is all mgmt server. Creating a
>>>>>> >> >>>> > volume is
>>>>>> >> >>>> > just
>>>>>> >> >>>> > a db entry until it's deployed for the first time.
>>>>>> >> >>>> > AttachVolumeCommand
>>>>>> >> >>>> > on the agent side (LibvirtStorageAdaptor.java is analogous
>>>>>> >> >>>> > to
>>>>>> >> >>>> > CitrixResourceBase.java) will do the iscsiadm commands (via
>>>>>> >> >>>> > a KVM
>>>>>> >> >>>> > StorageAdaptor) to log in the host to the target and then
>>>>>> >> >>>> > you have
>>>>>> >> >>>> > a
>>>>>> >> >>>> > block device.  Maybe libvirt will do that for you, but my
>>>>>> >> >>>> > quick
>>>>>> >> >>>> > read
>>>>>> >> >>>> > made it sound like the iscsi libvirt pool type is actually a
>>>>>> >> >>>> > pool,
>>>>>> >> >>>> > not
>>>>>> >> >>>> > a lun or volume, so you'll need to figure out if that works
>>>>>> >> >>>> > or if
>>>>>> >> >>>> > you'll have to use iscsiadm commands.
>>>>>> >> >>>> >
>>>>>> >> >>>> > If you're NOT going to use LibvirtStorageAdaptor (because
>>>>>> >> >>>> > Libvirt
>>>>>> >> >>>> > doesn't really manage your pool the way you want), you're
>>>>>> >> >>>> > going to
>>>>>> >> >>>> > have to create a version of KVMStoragePool class and a
>>>>>> >> >>>> > StorageAdaptor
>>>>>> >> >>>> > class (see LibvirtStoragePool.java and
>>>>>> >> >>>> > LibvirtStorageAdaptor.java),
>>>>>> >> >>>> > implementing all of the methods, then in
>>>>>> >> >>>> > KVMStorageManager.java
>>>>>> >> >>>> > there's a "_storageMapper" map. This is used to select the
>>>>>> >> >>>> > correct
>>>>>> >> >>>> > adaptor, you can see in this file that every call first
>>>>>> >> >>>> > pulls the
>>>>>> >> >>>> > correct adaptor out of this map via getStorageAdaptor. So
>>>>>> >> >>>> > you can
>>>>>> >> >>>> > see
>>>>>> >> >>>> > a comment in this file that says "add other storage adaptors
>>>>>> >> >>>> > here",
>>>>>> >> >>>> > where it puts to this map, this is where you'd register your
>>>>>> >> >>>> > adaptor.
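>>>>>> >> >>>> >
>>>>>> >> >>>> > Roughly like this (just a sketch; the adaptor class name and the
>>>>>> >> >>>> > pool-type key below are made up, not existing code):
>>>>>> >> >>>> >
>>>>>> >> >>>> >     // in KVMStorageManager, next to the "add other storage
>>>>>> >> >>>> >     // adaptors here" comment
>>>>>> >> >>>> >     _storageMapper.put("solidfire", new SolidFireStorageAdaptor());
>>>>>> >> >>>> >     // getStorageAdaptor then routes every call for that pool type
>>>>>> >> >>>> >     // to your adaptor instead of LibvirtStorageAdaptor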
>>>>>> >> >>>> >
>>>>>> >> >>>> > So, referencing StorageAdaptor.java, createStoragePool
>>>>>> >> >>>> > accepts all
>>>>>> >> >>>> > of
>>>>>> >> >>>> > the pool data (host, port, name, path) which would be used
>>>>>> >> >>>> > to log
>>>>>> >> >>>> > the
>>>>>> >> >>>> > host into the initiator. I *believe* the method
>>>>>> >> >>>> > getPhysicalDisk
>>>>>> >> >>>> > will
>>>>>> >> >>>> > need to do the work of attaching the lun.
>>>>>> >> >>>> > AttachVolumeCommand
>>>>>> >> >>>> > calls
>>>>>> >> >>>> > this and then creates the XML diskdef and attaches it to the
>>>>>> >> >>>> > VM.
>>>>>> >> >>>> > Now,
>>>>>> >> >>>> > one thing you need to know is that createStoragePool is
>>>>>> >> >>>> > called
>>>>>> >> >>>> > often,
>>>>>> >> >>>> > sometimes just to make sure the pool is there. You may want
>>>>>> >> >>>> > to
>>>>>> >> >>>> > create
>>>>>> >> >>>> > a map in your adaptor class and keep track of pools that
>>>>>> >> >>>> > have been
>>>>>> >> >>>> > created, LibvirtStorageAdaptor doesn't have to do this
>>>>>> >> >>>> > because it
>>>>>> >> >>>> > asks
>>>>>> >> >>>> > libvirt about which storage pools exist. There are also
>>>>>> >> >>>> > calls to
>>>>>> >> >>>> > refresh the pool stats, and all of the other calls can be
>>>>>> >> >>>> > seen in
>>>>>> >> >>>> > the
>>>>>> >> >>>> > StorageAdaptor as well. There's a createPhysicalDisk,
>>>>>> >> >>>> > clone, etc,
>>>>>> >> >>>> > but
>>>>>> >> >>>> > it's probably a hold-over from 4.1, as I have the vague idea
>>>>>> >> >>>> > that
>>>>>> >> >>>> > volumes are created on the mgmt server via the plugin now,
>>>>>> >> >>>> > so
>>>>>> >> >>>> > whatever
>>>>>> >> >>>> > doesn't apply can just be stubbed out (or optionally
>>>>>> >> >>>> > extended/reimplemented here, if you don't mind the hosts
>>>>>> >> >>>> > talking to
>>>>>> >> >>>> > the san api).
>>>>>> >> >>>> >
>>>>>> >> >>>> > There is a difference between attaching new volumes and
>>>>>> >> >>>> > launching a
>>>>>> >> >>>> > VM
>>>>>> >> >>>> > with existing volumes.  In the latter case, the VM
>>>>>> >> >>>> > definition that
>>>>>> >> >>>> > was
>>>>>> >> >>>> > passed to the KVM agent includes the disks, (StartCommand).
>>>>>> >> >>>> >
>>>>>> >> >>>> > I'd be interested in how your pool is defined for Xen, I
>>>>>> >> >>>> > imagine it
>>>>>> >> >>>> > would need to be kept the same. Is it just a definition to
>>>>>> >> >>>> > the SAN
>>>>>> >> >>>> > (ip address or some such, port number) and perhaps a volume
>>>>>> >> >>>> > pool
>>>>>> >> >>>> > name?
>>>>>> >> >>>> >
>>>>>> >> >>>> >> If there is a way for me to update the ACL list on the SAN
>>>>>> >> >>>> >> to have
>>>>>> >> >>>> >> only a
>>>>>> >> >>>> >> single KVM host have access to the volume, that would be
>>>>>> >> >>>> >> ideal.
>>>>>> >> >>>> >
>>>>>> >> >>>> > That depends on your SAN API.  I was under the impression
>>>>>> >> >>>> > that the
>>>>>> >> >>>> > storage plugin framework allowed for acls, or for you to do
>>>>>> >> >>>> > whatever
>>>>>> >> >>>> > you want for create/attach/delete/snapshot, etc. You'd just
>>>>>> >> >>>> > call
>>>>>> >> >>>> > your
>>>>>> >> >>>> > SAN API with the host info for the ACLs prior to when the
>>>>>> >> >>>> > disk is
>>>>>> >> >>>> > attached (or the VM is started).  I'd have to look more at
>>>>>> >> >>>> > the
>>>>>> >> >>>> > framework to know the details, in 4.1 I would do this in
>>>>>> >> >>>> > getPhysicalDisk just prior to connecting up the LUN.
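>>>>>> >> >>>> >
>>>>>> >> >>>> > In pseudocode the ordering would be something like this (the SAN
>>>>>> >> >>>> > client call is purely hypothetical, whatever your API exposes):
>>>>>> >> >>>> >
>>>>>> >> >>>> >     // inside getPhysicalDisk, before connecting up the LUN:
>>>>>> >> >>>> >     sanClient.allowAccess(hostInitiatorIqn, volumeIqn); // hypothetical SAN call
>>>>>> >> >>>> >     loginOrRescan(targetPortal, volumeIqn);             // then iscsiadm login/rescan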
>>>>>> >> >>>> >
>>>>>> >> >>>> >
>>>>>> >> >>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>>>>> >> >>>> > <mi...@solidfire.com> wrote:
>>>>>> >> >>>> >> OK, yeah, the ACL part will be interesting. That is a bit
>>>>>> >> >>>> >> different
>>>>>> >> >>>> >> from how
>>>>>> >> >>>> >> it works with XenServer and VMware.
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> Just to give you an idea how it works in 4.2 with
>>>>>> >> >>>> >> XenServer:
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> * The user creates a CS volume (this is just recorded in
>>>>>> >> >>>> >> the
>>>>>> >> >>>> >> cloud.volumes
>>>>>> >> >>>> >> table).
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> * The user attaches the volume as a disk to a VM for the
>>>>>> >> >>>> >> first
>>>>>> >> >>>> >> time
>>>>>> >> >>>> >> (if the
>>>>>> >> >>>> >> storage allocator picks the SolidFire plug-in, the storage
>>>>>> >> >>>> >> framework
>>>>>> >> >>>> >> invokes
>>>>>> >> >>>> >> a method on the plug-in that creates a volume on the
>>>>>> >> >>>> >> SAN...info
>>>>>> >> >>>> >> like
>>>>>> >> >>>> >> the IQN
>>>>>> >> >>>> >> of the SAN volume is recorded in the DB).
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is
>>>>>> >> >>>> >> executed.
>>>>>> >> >>>> >> It
>>>>>> >> >>>> >> determines based on a flag passed in that the storage in
>>>>>> >> >>>> >> question
>>>>>> >> >>>> >> is
>>>>>> >> >>>> >> "CloudStack-managed" storage (as opposed to "traditional"
>>>>>> >> >>>> >> preallocated
>>>>>> >> >>>> >> storage). This tells it to discover the iSCSI target. Once
>>>>>> >> >>>> >> discovered
>>>>>> >> >>>> >> it
>>>>>> >> >>>> >> determines if the iSCSI target already contains a storage
>>>>>> >> >>>> >> repository
>>>>>> >> >>>> >> (it
>>>>>> >> >>>> >> would if this were a re-attach situation). If it does
>>>>>> >> >>>> >> contain an
>>>>>> >> >>>> >> SR
>>>>>> >> >>>> >> already,
>>>>>> >> >>>> >> then there should already be one VDI, as well. If there is
>>>>>> >> >>>> >> no SR,
>>>>>> >> >>>> >> an
>>>>>> >> >>>> >> SR is
>>>>>> >> >>>> >> created and a single VDI is created within it (that takes
>>>>>> >> >>>> >> up about
>>>>>> >> >>>> >> as
>>>>>> >> >>>> >> much
>>>>>> >> >>>> >> space as was requested for the CloudStack volume).
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> * The normal attach-volume logic continues (it depends on
>>>>>> >> >>>> >> the
>>>>>> >> >>>> >> existence of
>>>>>> >> >>>> >> an SR and a VDI).
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> The VMware case is essentially the same (mainly just
>>>>>> >> >>>> >> substitute
>>>>>> >> >>>> >> datastore
>>>>>> >> >>>> >> for SR and VMDK for VDI).
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> In both cases, all hosts in the cluster have discovered the
>>>>>> >> >>>> >> iSCSI
>>>>>> >> >>>> >> target,
>>>>>> >> >>>> >> but only the host that is currently running the VM that is
>>>>>> >> >>>> >> using
>>>>>> >> >>>> >> the
>>>>>> >> >>>> >> VDI (or
>>>>>> >> >>>> >> VMDK) is actually using the disk.
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> Live Migration should be OK because the hypervisors
>>>>>> >> >>>> >> communicate
>>>>>> >> >>>> >> with
>>>>>> >> >>>> >> whatever metadata they have on the SR (or datastore).
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> I see what you're saying with KVM, though.
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> In that case, the hosts are clustered only in CloudStack's
>>>>>> >> >>>> >> eyes.
>>>>>> >> >>>> >> CS
>>>>>> >> >>>> >> controls
>>>>>> >> >>>> >> Live Migration. You don't really need a clustered
>>>>>> >> >>>> >> filesystem on
>>>>>> >> >>>> >> the
>>>>>> >> >>>> >> LUN. The
>>>>>> >> >>>> >> LUN could be handed over raw to the VM using it.
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> If there is a way for me to update the ACL list on the SAN
>>>>>> >> >>>> >> to have
>>>>>> >> >>>> >> only a
>>>>>> >> >>>> >> single KVM host have access to the volume, that would be
>>>>>> >> >>>> >> ideal.
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> Also, I agree I'll need to use iscsiadm to discover and log
>>>>>> >> >>>> >> in to
>>>>>> >> >>>> >> the
>>>>>> >> >>>> >> iSCSI
>>>>>> >> >>>> >> target. I'll also need to take the resultant new device and
>>>>>> >> >>>> >> pass
>>>>>> >> >>>> >> it
>>>>>> >> >>>> >> into the
>>>>>> >> >>>> >> VM.
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> Does this sound reasonable? Please call me out on anything
>>>>>> >> >>>> >> I seem
>>>>>> >> >>>> >> incorrect
>>>>>> >> >>>> >> about. :)
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> Thanks for all the thought on this, Marcus!
>>>>>> >> >>>> >>
>>>>>> >> >>>> >>
>>>>>> >> >>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen
>>>>>> >> >>>> >> <sh...@gmail.com>
>>>>>> >> >>>> >> wrote:
>>>>>> >> >>>> >>>
>>>>>> >> >>>> >>> Perfect. You'll have a domain def (the VM), a disk def,
>>>>>> >> >>>> >>> and then
>>>>>> >> >>>> >>> attach
>>>>>> >> >>>> >>> the disk def to the vm. You may need to do your own
>>>>>> >> >>>> >>> StorageAdaptor
>>>>>> >> >>>> >>> and run
>>>>>> >> >>>> >>> iscsiadm commands to accomplish that, depending on how the
>>>>>> >> >>>> >>> libvirt
>>>>>> >> >>>> >>> iscsi
>>>>>> >> >>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't
>>>>>> >> >>>> >>> how it
>>>>>> >> >>>> >>> works on
>>>>>> >> >>>> >>> xen at the moment, nor is it ideal.
>>>>>> >> >>>> >>>
>>>>>> >> >>>> >>> Your plugin will handle acls as far as which host can see
>>>>>> >> >>>> >>> which
>>>>>> >> >>>> >>> luns
>>>>>> >> >>>> >>> as
>>>>>> >> >>>> >>> well, I remember discussing that months ago, so that a
>>>>>> >> >>>> >>> disk won't
>>>>>> >> >>>> >>> be
>>>>>> >> >>>> >>> connected until the hypervisor has exclusive access, so it
>>>>>> >> >>>> >>> will
>>>>>> >> >>>> >>> be
>>>>>> >> >>>> >>> safe and
>>>>>> >> >>>> >>> fence the disk from rogue nodes that cloudstack loses
>>>>>> >> >>>> >>> connectivity
>>>>>> >> >>>> >>> with. It
>>>>>> >> >>>> >>> should revoke access to everything but the target host...
>>>>>> >> >>>> >>> Except
>>>>>> >> >>>> >>> for
>>>>>> >> >>>> >>> during
>>>>>> >> >>>> >>> migration but we can discuss that later, there's a
>>>>>> >> >>>> >>> migration prep
>>>>>> >> >>>> >>> process
>>>>>> >> >>>> >>> where the new host can be added to the acls, and the old
>>>>>> >> >>>> >>> host can
>>>>>> >> >>>> >>> be
>>>>>> >> >>>> >>> removed
>>>>>> >> >>>> >>> post migration.
>>>>>> >> >>>> >>>
>>>>>> >> >>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski"
>>>>>> >> >>>> >>> <mi...@solidfire.com>
>>>>>> >> >>>> >>> wrote:
>>>>>> >> >>>> >>>>
>>>>>> >> >>>> >>>> Yeah, that would be ideal.
>>>>>> >> >>>> >>>>
>>>>>> >> >>>> >>>> So, I would still need to discover the iSCSI target, log
>>>>>> >> >>>> >>>> in to
>>>>>> >> >>>> >>>> it,
>>>>>> >> >>>> >>>> then
>>>>>> >> >>>> >>>> figure out what /dev/sdX was created as a result (and
>>>>>> >> >>>> >>>> leave it
>>>>>> >> >>>> >>>> as
>>>>>> >> >>>> >>>> is - do
>>>>>> >> >>>> >>>> not format it with any file system...clustered or not). I
>>>>>> >> >>>> >>>> would
>>>>>> >> >>>> >>>> pass that
>>>>>> >> >>>> >>>> device into the VM.
>>>>>> >> >>>> >>>>
>>>>>> >> >>>> >>>> Kind of accurate?
>>>>>> >> >>>> >>>>
>>>>>> >> >>>> >>>>
>>>>>> >> >>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
>>>>>> >> >>>> >>>> <sh...@gmail.com>
>>>>>> >> >>>> >>>> wrote:
>>>>>> >> >>>> >>>>>
>>>>>> >> >>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk
>>>>>> >> >>>> >>>>> definitions.
>>>>>> >> >>>> >>>>> There are
>>>>>> >> >>>> >>>>> ones that work for block devices rather than files. You
>>>>>> >> >>>> >>>>> can
>>>>>> >> >>>> >>>>> piggy
>>>>>> >> >>>> >>>>> back off
>>>>>> >> >>>> >>>>> of the existing disk definitions and attach it to the vm
>>>>>> >> >>>> >>>>> as a
>>>>>> >> >>>> >>>>> block device.
>>>>>> >> >>>> >>>>> The definition is an XML string per libvirt XML format.
>>>>>> >> >>>> >>>>> You may
>>>>>> >> >>>> >>>>> want to use
>>>>>> >> >>>> >>>>> an alternate path to the disk rather than just /dev/sdx
>>>>>> >> >>>> >>>>> like I
>>>>>> >> >>>> >>>>> mentioned,
>>>>>> >> >>>> >>>>> there are by-id paths to the block devices, as well as
>>>>>> >> >>>> >>>>> other
>>>>>> >> >>>> >>>>> ones
>>>>>> >> >>>> >>>>> that will
>>>>>> >> >>>> >>>>> be consistent and easier for management, not sure how
>>>>>> >> >>>> >>>>> familiar
>>>>>> >> >>>> >>>>> you
>>>>>> >> >>>> >>>>> are with
>>>>>> >> >>>> >>>>> device naming on Linux.
>>>>>> >> >>>> >>>>>
>>>>>> >> >>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>>>>>> >> >>>> >>>>> <sh...@gmail.com>
>>>>>> >> >>>> >>>>> wrote:
>>>>>> >> >>>> >>>>>>
>>>>>> >> >>>> >>>>>> No, as that would rely on virtualized network/iscsi
>>>>>> >> >>>> >>>>>> initiator
>>>>>> >> >>>> >>>>>> inside
>>>>>> >> >>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your
>>>>>> >> >>>> >>>>>> lun on
>>>>>> >> >>>> >>>>>> hypervisor) as
>>>>>> >> >>>> >>>>>> a disk to the VM, rather than attaching some image file
>>>>>> >> >>>> >>>>>> that
>>>>>> >> >>>> >>>>>> resides on a
>>>>>> >> >>>> >>>>>> filesystem, mounted on the host, living on a target.
>>>>>> >> >>>> >>>>>>
>>>>>> >> >>>> >>>>>> Actually, if you plan on the storage supporting live
>>>>>> >> >>>> >>>>>> migration
>>>>>> >> >>>> >>>>>> I
>>>>>> >> >>>> >>>>>> think
>>>>>> >> >>>> >>>>>> this is the only way. You can't put a filesystem on it
>>>>>> >> >>>> >>>>>> and
>>>>>> >> >>>> >>>>>> mount
>>>>>> >> >>>> >>>>>> it in two
>>>>>> >> >>>> >>>>>> places to facilitate migration unless its a clustered
>>>>>> >> >>>> >>>>>> filesystem,
>>>>>> >> >>>> >>>>>> in which
>>>>>> >> >>>> >>>>>> case you're back to shared mount point.
>>>>>> >> >>>> >>>>>>
>>>>>> >> >>>> >>>>>> As far as I'm aware, the xenserver SR style is
>>>>>> >> >>>> >>>>>> basically LVM
>>>>>> >> >>>> >>>>>> with
>>>>>> >> >>>> >>>>>> a xen
>>>>>> >> >>>> >>>>>> specific cluster management, a custom CLVM. They don't
>>>>>> >> >>>> >>>>>> use a
>>>>>> >> >>>> >>>>>> filesystem
>>>>>> >> >>>> >>>>>> either.
>>>>>> >> >>>> >>>>>>
>>>>>> >> >>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>>>>> >> >>>> >>>>>> <mi...@solidfire.com> wrote:
>>>>>> >> >>>> >>>>>>>
>>>>>> >> >>>> >>>>>>> When you say, "wire up the lun directly to the vm," do
>>>>>> >> >>>> >>>>>>> you
>>>>>> >> >>>> >>>>>>> mean
>>>>>> >> >>>> >>>>>>> circumventing the hypervisor? I didn't think we could
>>>>>> >> >>>> >>>>>>> do that
>>>>>> >> >>>> >>>>>>> in
>>>>>> >> >>>> >>>>>>> CS.
>>>>>> >> >>>> >>>>>>> OpenStack, on the other hand, always circumvents the
>>>>>> >> >>>> >>>>>>> hypervisor,
>>>>>> >> >>>> >>>>>>> as far as I
>>>>>> >> >>>> >>>>>>> know.
>>>>>> >> >>>> >>>>>>>
>>>>>> >> >>>> >>>>>>>
>>>>>> >> >>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
>>>>>> >> >>>> >>>>>>> <sh...@gmail.com>
>>>>>> >> >>>> >>>>>>> wrote:
>>>>>> >> >>>> >>>>>>>>
>>>>>> >> >>>> >>>>>>>> Better to wire up the lun directly to the vm unless
>>>>>> >> >>>> >>>>>>>> there is
>>>>>> >> >>>> >>>>>>>> a
>>>>>> >> >>>> >>>>>>>> good
>>>>>> >> >>>> >>>>>>>> reason not to.
>>>>>> >> >>>> >>>>>>>>
>>>>>> >> >>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen"
>>>>>> >> >>>> >>>>>>>> <sh...@gmail.com>
>>>>>> >> >>>> >>>>>>>> wrote:
>>>>>> >> >>>> >>>>>>>>>
>>>>>> >> >>>> >>>>>>>>> You could do that, but as mentioned I think it's a
>>>>>> >> >>>> >>>>>>>>> mistake
>>>>>> >> >>>> >>>>>>>>> to
>>>>>> >> >>>> >>>>>>>>> go to
>>>>>> >> >>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes
>>>>>> >> >>>> >>>>>>>>> to luns
>>>>>> >> >>>> >>>>>>>>> and then putting
>>>>>> >> >>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a
>>>>>> >> >>>> >>>>>>>>> QCOW2
>>>>>> >> >>>> >>>>>>>>> or
>>>>>> >> >>>> >>>>>>>>> even RAW disk
>>>>>> >> >>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops
>>>>>> >> >>>> >>>>>>>>> along
>>>>>> >> >>>> >>>>>>>>> the
>>>>>> >> >>>> >>>>>>>>> way, and have
>>>>>> >> >>>> >>>>>>>>> more overhead with the filesystem and its
>>>>>> >> >>>> >>>>>>>>> journaling, etc.
>>>>>> >> >>>> >>>>>>>>>
>>>>>> >> >>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>>>>> >> >>>> >>>>>>>>> <mi...@solidfire.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in
>>>>>> >> >>>> >>>>>>>>>> KVM with
>>>>>> >> >>>> >>>>>>>>>> CS.
>>>>>> >> >>>> >>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS
>>>>>> >> >>>> >>>>>>>>>> today is by
>>>>>> >> >>>> >>>>>>>>>> selecting SharedMountPoint and specifying the
>>>>>> >> >>>> >>>>>>>>>> location of
>>>>>> >> >>>> >>>>>>>>>> the
>>>>>> >> >>>> >>>>>>>>>> share.
>>>>>> >> >>>> >>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>> They can set up their share using Open iSCSI by
>>>>>> >> >>>> >>>>>>>>>> discovering
>>>>>> >> >>>> >>>>>>>>>> their
>>>>>> >> >>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it
>>>>>> >> >>>> >>>>>>>>>> somewhere
>>>>>> >> >>>> >>>>>>>>>> on
>>>>>> >> >>>> >>>>>>>>>> their file
>>>>>> >> >>>> >>>>>>>>>> system.
>>>>>> >> >>>> >>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>> Would it make sense for me to just do that
>>>>>> >> >>>> >>>>>>>>>> discovery,
>>>>>> >> >>>> >>>>>>>>>> logging
>>>>>> >> >>>> >>>>>>>>>> in,
>>>>>> >> >>>> >>>>>>>>>> and mounting behind the scenes for them and letting
>>>>>> >> >>>> >>>>>>>>>> the
>>>>>> >> >>>> >>>>>>>>>> current code manage
>>>>>> >> >>>> >>>>>>>>>> the rest as it currently does?
>>>>>> >> >>>> >>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>>>>>> >> >>>> >>>>>>>>>> <sh...@gmail.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I
>>>>>> >> >>>> >>>>>>>>>>> need to
>>>>>> >> >>>> >>>>>>>>>>> catch up
>>>>>> >> >>>> >>>>>>>>>>> on the work done in KVM, but this is basically
>>>>>> >> >>>> >>>>>>>>>>> just disk
>>>>>> >> >>>> >>>>>>>>>>> snapshots + memory
>>>>>> >> >>>> >>>>>>>>>>> dump. I still think disk snapshots would
>>>>>> >> >>>> >>>>>>>>>>> preferably be
>>>>>> >> >>>> >>>>>>>>>>> handled by the SAN,
>>>>>> >> >>>> >>>>>>>>>>> and then memory dumps can go to secondary storage
>>>>>> >> >>>> >>>>>>>>>>> or
>>>>>> >> >>>> >>>>>>>>>>> something else. This is
>>>>>> >> >>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will
>>>>>> >> >>>> >>>>>>>>>>> want to
>>>>>> >> >>>> >>>>>>>>>>> see how others are
>>>>>> >> >>>> >>>>>>>>>>> planning theirs.
>>>>>> >> >>>> >>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen"
>>>>>> >> >>>> >>>>>>>>>>> <sh...@gmail.com>
>>>>>> >> >>>> >>>>>>>>>>> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a
>>>>>> >> >>>> >>>>>>>>>>>> vdi
>>>>>> >> >>>> >>>>>>>>>>>> style
>>>>>> >> >>>> >>>>>>>>>>>> on an
>>>>>> >> >>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a
>>>>>> >> >>>> >>>>>>>>>>>> RAW
>>>>>> >> >>>> >>>>>>>>>>>> format.
>>>>>> >> >>>> >>>>>>>>>>>> Otherwise you're
>>>>>> >> >>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it,
>>>>>> >> >>>> >>>>>>>>>>>> creating
>>>>>> >> >>>> >>>>>>>>>>>> a
>>>>>> >> >>>> >>>>>>>>>>>> QCOW2 disk image,
>>>>>> >> >>>> >>>>>>>>>>>> and that seems unnecessary and a performance
>>>>>> >> >>>> >>>>>>>>>>>> killer.
>>>>>> >> >>>> >>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk
>>>>>> >> >>>> >>>>>>>>>>>> to the
>>>>>> >> >>>> >>>>>>>>>>>> VM, and
>>>>>> >> >>>> >>>>>>>>>>>> handling snapshots on the San side via the
>>>>>> >> >>>> >>>>>>>>>>>> storage
>>>>>> >> >>>> >>>>>>>>>>>> plugin
>>>>>> >> >>>> >>>>>>>>>>>> is best. My
>>>>>> >> >>>> >>>>>>>>>>>> impression from the storage plugin refactor was
>>>>>> >> >>>> >>>>>>>>>>>> that
>>>>>> >> >>>> >>>>>>>>>>>> there
>>>>>> >> >>>> >>>>>>>>>>>> was a snapshot
>>>>>> >> >>>> >>>>>>>>>>>> service that would allow the San to handle
>>>>>> >> >>>> >>>>>>>>>>>> snapshots.
>>>>>> >> >>>> >>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen"
>>>>>> >> >>>> >>>>>>>>>>>> <sh...@gmail.com>
>>>>>> >> >>>> >>>>>>>>>>>> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the
>>>>>> >> >>>> >>>>>>>>>>>>> SAN back
>>>>>> >> >>>> >>>>>>>>>>>>> end, if
>>>>>> >> >>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
>>>>>> >> >>>> >>>>>>>>>>>>> could
>>>>>> >> >>>> >>>>>>>>>>>>> call
>>>>>> >> >>>> >>>>>>>>>>>>> your plugin for
>>>>>> >> >>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor
>>>>>> >> >>>> >>>>>>>>>>>>> agnostic. As
>>>>>> >> >>>> >>>>>>>>>>>>> far as space, that
>>>>>> >> >>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With
>>>>>> >> >>>> >>>>>>>>>>>>> ours, we
>>>>>> >> >>>> >>>>>>>>>>>>> carve out luns from a
>>>>>> >> >>>> >>>>>>>>>>>>> pool, and the snapshot spave comes from the pool
>>>>>> >> >>>> >>>>>>>>>>>>> and is
>>>>>> >> >>>> >>>>>>>>>>>>> independent of the
>>>>>> >> >>>> >>>>>>>>>>>>> LUN size the host sees.
>>>>>> >> >>>> >>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>>>>>> >> >>>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>> Hey Marcus,
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
>>>>>> >> >>>> >>>>>>>>>>>>>> libvirt
>>>>>> >> >>>> >>>>>>>>>>>>>> won't
>>>>>> >> >>>> >>>>>>>>>>>>>> work
>>>>>> >> >>>> >>>>>>>>>>>>>> when you take into consideration hypervisor
>>>>>> >> >>>> >>>>>>>>>>>>>> snapshots?
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor
>>>>>> >> >>>> >>>>>>>>>>>>>> snapshot, the
>>>>>> >> >>>> >>>>>>>>>>>>>> VDI for
>>>>>> >> >>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage
>>>>>> >> >>>> >>>>>>>>>>>>>> repository
>>>>>> >> >>>> >>>>>>>>>>>>>> as
>>>>>> >> >>>> >>>>>>>>>>>>>> the volume is on.
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for
>>>>>> >> >>>> >>>>>>>>>>>>>> XenServer
>>>>>> >> >>>> >>>>>>>>>>>>>> and
>>>>>> >> >>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support
>>>>>> >> >>>> >>>>>>>>>>>>>> hypervisor
>>>>>> >> >>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd
>>>>>> >> >>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what
>>>>>> >> >>>> >>>>>>>>>>>>>> the user
>>>>>> >> >>>> >>>>>>>>>>>>>> requested for the
>>>>>> >> >>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our
>>>>>> >> >>>> >>>>>>>>>>>>>> SAN
>>>>>> >> >>>> >>>>>>>>>>>>>> thinly
>>>>>> >> >>>> >>>>>>>>>>>>>> provisions volumes,
>>>>>> >> >>>> >>>>>>>>>>>>>> so the space is not actually used unless it
>>>>>> >> >>>> >>>>>>>>>>>>>> needs to
>>>>>> >> >>>> >>>>>>>>>>>>>> be).
>>>>>> >> >>>> >>>>>>>>>>>>>> The CloudStack
>>>>>> >> >>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN
>>>>>> >> >>>> >>>>>>>>>>>>>> volume
>>>>>> >> >>>> >>>>>>>>>>>>>> until
>>>>>> >> >>>> >>>>>>>>>>>>>> a hypervisor
>>>>>> >> >>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also
>>>>>> >> >>>> >>>>>>>>>>>>>> reside on
>>>>>> >> >>>> >>>>>>>>>>>>>> the
>>>>>> >> >>>> >>>>>>>>>>>>>> SAN volume.
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no
>>>>>> >> >>>> >>>>>>>>>>>>>> creation
>>>>>> >> >>>> >>>>>>>>>>>>>> of
>>>>>> >> >>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt
>>>>>> >> >>>> >>>>>>>>>>>>>> (which, even
>>>>>> >> >>>> >>>>>>>>>>>>>> if
>>>>>> >> >>>> >>>>>>>>>>>>>> there were support
>>>>>> >> >>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN
>>>>>> >> >>>> >>>>>>>>>>>>>> per
>>>>>> >> >>>> >>>>>>>>>>>>>> iSCSI
>>>>>> >> >>>> >>>>>>>>>>>>>> target), then I
>>>>>> >> >>>> >>>>>>>>>>>>>> don't see how using this model will work.
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current
>>>>>> >> >>>> >>>>>>>>>>>>>> way this
>>>>>> >> >>>> >>>>>>>>>>>>>> works
>>>>>> >> >>>> >>>>>>>>>>>>>> with DIR?
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>> What do you think?
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>> Thanks
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>>>>>> >> >>>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
>>>>>> >> >>>> >>>>>>>>>>>>>>> access
>>>>>> >> >>>> >>>>>>>>>>>>>>> today.
>>>>>> >> >>>> >>>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I
>>>>>> >> >>>> >>>>>>>>>>>>>>> might as
>>>>>> >> >>>> >>>>>>>>>>>>>>> well
>>>>>> >> >>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>>>> >> >>>> >>>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus
>>>>>> >> >>>> >>>>>>>>>>>>>>> Sorensen
>>>>>> >> >>>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I
>>>>>> >> >>>> >>>>>>>>>>>>>>>> believe
>>>>>> >> >>>> >>>>>>>>>>>>>>>> it
>>>>>> >> >>>> >>>>>>>>>>>>>>>> just
>>>>>> >> >>>> >>>>>>>>>>>>>>>> acts like a
>>>>>> >> >>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to
>>>>>> >> >>>> >>>>>>>>>>>>>>>> that. The
>>>>>> >> >>>> >>>>>>>>>>>>>>>> end-user
>>>>>> >> >>>> >>>>>>>>>>>>>>>> is
>>>>>> >> >>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that
>>>>>> >> >>>> >>>>>>>>>>>>>>>> all KVM
>>>>>> >> >>>> >>>>>>>>>>>>>>>> hosts can
>>>>>> >> >>>> >>>>>>>>>>>>>>>> access,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is
>>>>>> >> >>>> >>>>>>>>>>>>>>>> providing the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> storage.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> It could
>>>>>> >> >>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>>>>>> >> >>>> >>>>>>>>>>>>>>>> filesystem,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> cloudstack just
>>>>>> >> >>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM
>>>>>> >> >>>> >>>>>>>>>>>>>>>> images.
>>>>>> >> >>>> >>>>>>>>>>>>>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
>>>>>> >> >>>> >>>>>>>>>>>>>>>> Sorensen
>>>>>> >> >>>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all
>>>>>> >> >>>> >>>>>>>>>>>>>>>> > at the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> > same
>>>>>> >> >>>> >>>>>>>>>>>>>>>> > time.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >
>>>>>> >> >>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
>>>>>> >> >>>> >>>>>>>>>>>>>>>> > Tutkowski
>>>>>> >> >>>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >> pools:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >> default              active     yes
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >> Tutkowski
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> pool
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> based on
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> have one
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> LUN, so
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> there would only
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> (libvirt)
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pool.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> destroys
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> iSCSI
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> that
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> does
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> not support
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> to see
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> if
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> supports
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> mentioned,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> since
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> each one of its
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> iSCSI
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs).
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> Tutkowski
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         @Override
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>     }
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> currently
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> being
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> at.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2),
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> when
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> someone
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> selects the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> iSCSI,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> is
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> that
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Thanks!
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Sorensen
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> iSCSI
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> server, and
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> cannot be
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> believe
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> your
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> logging
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> and
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> work
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the Xen
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> stuff).
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> provides
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a 1:1
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> device
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> as
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bit more
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> about
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> write your
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> own
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> rather than changing
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> We
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can cross that
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Normally,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> made to
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to see
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> how
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> is done for
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> test
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> code
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to see if you
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> iscsi
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pools before you
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> get started.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Tutkowski
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > libvirt
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > more,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > but
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > supports
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > iSCSI
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > targets,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > right?
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > Tutkowski
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> of the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> classes
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> last
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Marcus
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> iscsi
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> packages
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> for
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> login.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> and
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Tutkowski"
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> 4.2
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> release
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> storage
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> and
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> establish a
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> 1:1
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> QoS.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expected
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> would
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> friendly).
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work, I
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> they
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> could
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> with
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> might
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> how I
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> have to
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> it for
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer,
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire
>>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Inc.
>>>>>> >> >>>> >>>>>>
>>>
>>> ...
>
>
>
>
> --
> Mike Tutkowski
> Senior CloudStack Developer, SolidFire Inc.
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud™

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
It looks like these are the main iscsiadm commands I would need (unless you
see an error below):

// discover iSCSI targets (is there a way to just discover the one IQN I
want to discover?)

sudo iscsiadm -m discovery -t sendtargets -p 192.168.233.10

// log in to iSCSI target

sudo iscsiadm -m node -T iqn.2012-03.com.solidfire:storagepool1 -p
192.168.233.10 -l

// log out of iSCSI target

sudo iscsiadm -m node -T iqn.2012-03.com.solidfire:storagepool1 -p
192.168.233.10 -u

// un-discover the IP address

sudo iscsiadm -m node -T iqn.2012-03.com.solidfire:storagepool1 -p
192.168.233.10:3260 -o delete
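
Here is a rough, untested sketch of how I might wrap those calls in a
custom StorageAdaptor on the agent (the class and method names below are
just placeholders, not existing CloudStack code):

    import java.io.IOException;
    import java.util.Arrays;

    // Placeholder helper: logs a KVM host in to (and out of) a single
    // iSCSI target by shelling out to iscsiadm, as above.
    public class IscsiAdmConnection {

        private final String targetIqn; // e.g. iqn.2012-03.com.solidfire:storagepool1
        private final String portalIp;  // e.g. 192.168.233.10

        public IscsiAdmConnection(String targetIqn, String portalIp) {
            this.targetIqn = targetIqn;
            this.portalIp = portalIp;
        }

        public void login() throws IOException, InterruptedException {
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portalIp);
            run("iscsiadm", "-m", "node", "-T", targetIqn, "-p", portalIp, "-l");
        }

        public void logout() throws IOException, InterruptedException {
            run("iscsiadm", "-m", "node", "-T", targetIqn, "-p", portalIp, "-u");
            run("iscsiadm", "-m", "node", "-T", targetIqn, "-p", portalIp + ":3260", "-o", "delete");
        }

        private void run(String... cmd) throws IOException, InterruptedException {
            // the agent runs as root, so no sudo needed here
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + Arrays.toString(cmd));
            }
        }
    }

The idea would be that createStoragePool calls login() (or just confirms
the session already exists) and the detach/delete path calls logout() once
nothing on the host is using the target.
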
Do you know if it is OK for me to use the by-path way of identifying the
newly added SCSI disk when I pass in that XML to attach the disk?

OpenStack is using Libvirt and they appear to use by-path:

        host_device = ("/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s" %
                       (iscsi_properties['target_portal'],
                        iscsi_properties['target_iqn'],
                        iscsi_properties.get('target_lun', 0)))
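
Assuming by-path is acceptable, I imagine the attach itself would then just
be a block-type disk definition pointing at that device. A rough sketch
using the libvirt Java bindings (the device path and VM name are only
examples, and real agent code would build the XML through LibvirtVMDef
rather than by hand):

    import org.libvirt.Connect;
    import org.libvirt.Domain;
    import org.libvirt.LibvirtException;

    public class AttachByPathExample {
        public static void main(String[] args) throws LibvirtException {
            String device = "/dev/disk/by-path/ip-192.168.233.10:3260-"
                    + "iscsi-iqn.2012-03.com.solidfire:storagepool1-lun-0";

            // raw block device handed straight to the guest
            String diskXml =
                  "<disk type='block' device='disk'>\n"
                + "  <driver name='qemu' type='raw' cache='none'/>\n"
                + "  <source dev='" + device + "'/>\n"
                + "  <target dev='vdb' bus='virtio'/>\n"
                + "</disk>";

            Connect conn = new Connect("qemu:///system");
            Domain vm = conn.domainLookupByName("i-2-3-VM"); // example name
            vm.attachDevice(diskXml);
        }
    }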




On Mon, Sep 16, 2013 at 12:32 PM, Marcus Sorensen <sh...@gmail.com> wrote:

> That's right
> On Sep 16, 2013 12:31 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
>> I understand what you're saying now, Marcus.
>>
>> I wasn't sure if the Libvirt iSCSI Storage Pool was still an option
>> (looking into that still), but I see what you mean: If it is, we don't need
>> a new adaptor; otherwise, we do.
>>
>> If Libvirt's iSCSI Storage Pool does work, I could update the current
>> adaptor, if need be, to make use of it.
>>
>>
>> On Mon, Sep 16, 2013 at 12:24 PM, Marcus Sorensen <sh...@gmail.com> wrote:
>>
>>> Well, you'd use neither of the two pool types, because you are not
>>> letting libvirt handle the pool, you are doing it with your own pool and
>>> adaptor class. Libvirt will be unaware of everything but the disk XML you
>>> attach to a vm. You'd only use those if libvirt's functions were
>>> advantageous, i.e. if it already did everything you want. Since neither of
>>> those seem to provide both iscsi and the 1:1 mapping you want that's why we
>>> are talking about your own pool/adaptor.
>>>
>>> You can log into the target via your implementation of getPhysicalDisk
>>> as you mention in AttachVolumeCommand, or log in during your implementation
>>> of createStoragePool and simply rescan for luns in getPhysicalDisk.
>>> Presumably in most cases the host will be logged in already and new luns
>>> have been created in the meantime.
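>>>
>>> Very roughly, the split would be something like this (sketch only, not
>>> the exact StorageAdaptor signatures, and the helper calls are
>>> placeholders):
>>>
>>>     public KVMStoragePool createStoragePool(String uuid, String host,
>>>             int port, String path) {
>>>         // iscsiadm discovery + login against the SAN portal; a no-op
>>>         // if the host is already logged in to that target
>>>         iscsiLogin(host, port, path);
>>>         KVMStoragePool pool = new SolidFireStoragePool(uuid, host, port, path);
>>>         _pools.put(uuid, pool); // cache for later getStoragePool calls
>>>         return pool;
>>>     }
>>>
>>>     public KVMPhysicalDisk getPhysicalDisk(String volumePath, KVMStoragePool pool) {
>>>         rescanSession(pool); // pick up any LUNs created since login
>>>         // hand back the block device backing this volume, e.g. its
>>>         // /dev/disk/by-path (or by-id) entry
>>>         return new KVMPhysicalDisk(devicePathFor(volumePath, pool), volumePath, pool);
>>>     }
>>>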
>>> On Sep 16, 2013 12:09 PM, "Mike Tutkowski" <mi...@solidfire.com>
>>> wrote:
>>>
>>>> Hey Marcus,
>>>>
>>>> Thanks for that clarification.
>>>>
>>>> Sorry if this is a redundant question:
>>>>
>>>> When the AttachVolumeCommand comes in, it sounds like we thought the
>>>> best approach would be for me to discover and log in to the iSCSI target
>>>> using iscsiadm.
>>>>
>>>> This will create a new device: /dev/sdX.
>>>>
>>>> We would then pass this new device into the VM (passing XML into the
>>>> appropriate Libvirt API).
>>>>
>>>> If this is an accurate understanding, can you tell me: Do you think we
>>>> should be using a Disk Storage Pool or an iSCSI Storage Pool?
>>>>
>>>> I believe I recall you leaning toward a Disk Storage Pool because we
>>>> will have already discovered the iSCSI target and, as such, will already
>>>> have a device to pass into the VM.
>>>>
>>>> It seems like either way would work.
>>>>
>>>> Maybe I need to study Libvirt's iSCSI Storage Pools more to understand
>>>> if they would do the work of discovering the iSCSI target for me (and maybe
>>>> avoid me having to use iscsiadm).
>>>>
>>>> Thanks for the clarification! :)
>>>>
>>>>
>>>> On Mon, Sep 16, 2013 at 11:08 AM, Marcus Sorensen <sh...@gmail.com> wrote:
>>>>
>>>>> It will still register the pool.  You still have a primary storage
>>>>> pool that you registered, whether it's local, cluster or zone wide.
>>>>> NFS is optionally zone wide as well (I'm assuming customers can launch
>>>>> your storage only cluster-wide if they choose for resource
>>>>> partitioning), but it registers the pool in Libvirt prior to use.
>>>>>
>>>>> Here's a better explanation of what I meant.  AttachVolumeCommand gets
>>>>> both pool and volume info. It first looks up the pool:
>>>>>
>>>>>     KVMStoragePool primary = _storagePoolMgr.getStoragePool(
>>>>>                     cmd.getPooltype(),
>>>>>                     cmd.getPoolUuid());
>>>>>
>>>>> Then it looks up the disk from that pool:
>>>>>
>>>>>     KVMPhysicalDisk disk =
>>>>> primary.getPhysicalDisk(cmd.getVolumePath());
>>>>>
>>>>> Most of the commands only pass volume info like this (getVolumePath
>>>>> generally means the uuid of the volume), since it looks up the pool
>>>>> separately. If you don't save the pool info in a map in your custom
>>>>> class when createStoragePool is called, then getStoragePool won't be
>>>>> able to find it. This is a simple thing in your implementation of
>>>>> createStoragePool, just thought I'd mention it because it is key. Just
>>>>> create a map of pool uuid and pool object and save them so they're
>>>>> available across all implementations of that class.
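>>>>>
>>>>> For example, something along these lines in the adaptor (sketch only;
>>>>> the field name is made up):
>>>>>
>>>>>     // shared across every call into the adaptor, since later commands
>>>>>     // only hand back the pool/volume uuids
>>>>>     private static final Map<String, KVMStoragePool> _pools =
>>>>>             new ConcurrentHashMap<String, KVMStoragePool>();
>>>>>
>>>>>     // createStoragePool(...):  _pools.put(uuid, pool);
>>>>>     // getStoragePool(uuid):    return _pools.get(uuid);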
>>>>>
>>>>> On Mon, Sep 16, 2013 at 10:43 AM, Mike Tutkowski
>>>>> <mi...@solidfire.com> wrote:
>>>>> > Thanks, Marcus
>>>>> >
>>>>> > About this:
>>>>> >
>>>>> > "When the agent connects to the
>>>>> > management server, it registers all pools in the cluster with the
>>>>> > agent."
>>>>> >
>>>>> > So, my plug-in allows you to create zone-wide primary storage. This
>>>>> just
>>>>> > means that any cluster can use the SAN (the SAN was registered as
>>>>> primary
>>>>> > storage as opposed to a preallocated volume from the SAN). Once you
>>>>> create a
>>>>> > primary storage based on this plug-in, the storage framework will
>>>>> invoke the
>>>>> > plug-in, as needed, to create and delete volumes on the SAN. For
>>>>> example,
>>>>> > you could have one SolidFire primary storage (zone wide) and
>>>>> currently have
>>>>> > 100 volumes created on the SAN to support it.
>>>>> >
>>>>> > In this case, what will the management server be registering with
>>>>> the agent
>>>>> > in ModifyStoragePool? If only the storage pool (primary storage) is
>>>>> passed
>>>>> > in, that will be too vague as it does not contain information on what
>>>>> > volumes have been created for the agent.
>>>>> >
>>>>> > Thanks
>>>>> >
>>>>> >
>>>>> > On Sun, Sep 15, 2013 at 11:53 PM, Marcus Sorensen <
>>>>> shadowsor@gmail.com>
>>>>> > wrote:
>>>>> >>
>>>>> >> Yes, see my previous email from the 13th. You can create your own
>>>>> >> KVMStoragePool class, and StorageAdaptor class, like the libvirt
>>>>> ones
>>>>> >> have. The previous email outlines how to add your own StorageAdaptor
>>>>> >> alongside LibvirtStorageAdaptor to take over all of the calls
>>>>> >> (createStoragePool, getStoragePool, etc). As mentioned,
>>>>> >> getPhysicalDisk I believe will be the one you use to actually
>>>>> attach a
>>>>> >> lun.
>>>>> >>
>>>>> >> Ignore CreateStoragePoolCommand. When the agent connects to the
>>>>> >> management server, it registers all pools in the cluster with the
>>>>> >> agent. It will call ModifyStoragePoolCommand, passing your storage
>>>>> >> pool object (with all of the settings for your SAN). This in turn
>>>>> >> calls _storagePoolMgr.createStoragePool, which will route through
>>>>> >> KVMStoragePoolManager to your storage adapter that you've
>>>>> registered.
>>>>> >> The last argument to createStoragePool is the pool type, which is
>>>>> used
>>>>> >> to select a StorageAdaptor.
>>>>> >>
>>>>> >> From then on, most calls will only pass the volume info, and the
>>>>> >> volume will have the uuid of the storage pool. For this reason, your
>>>>> >> adaptor class needs to have a static Map variable that contains pool
>>>>> >> uuid and pool object. Whenever they call createStoragePool on your
>>>>> >> adaptor you add that pool to the map so that subsequent volume calls
>>>>> >> can look up the pool details for the volume by pool uuid. With the
>>>>> >> Libvirt adaptor, libvirt keeps track of that for you.
>>>>> >>
>>>>> >> When createStoragePool is called, you can log into the iscsi target
>>>>> >> (or make sure you are already logged in, as it can be called over
>>>>> >> again at any time), and when attach volume commands are fired off,
>>>>> you
>>>>> >> can attach individual LUNs that are asked for, or rescan (say that
>>>>> the
>>>>> >> plugin created a new ACL just prior to calling attach), or whatever
>>>>> is
>>>>> >> necessary.
>>>>> >>
>>>>> >> KVM is a bit more work, but you can do anything you want. Actually,
>>>>> I
>>>>> >> think you can call host scripts with Xen, but having the agent there
>>>>> >> that runs your own code gives you the flexibility to do whatever.
>>>>> >>
>>>>> >> On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski
>>>>> >> <mi...@solidfire.com> wrote:
>>>>> >> > I see right now LibvirtComputingResource.java has the following
>>>>> method
>>>>> >> > that
>>>>> >> > I might be able to leverage (it's probably not called at present
>>>>> and
>>>>> >> > would
>>>>> >> > need to be implemented in my case to discover my iSCSI target and
>>>>> log in
>>>>> >> > to
>>>>> >> > it):
>>>>> >> >
>>>>> >> >     protected Answer execute(CreateStoragePoolCommand cmd) {
>>>>> >> >
>>>>> >> >         return new Answer(cmd, true, "success");
>>>>> >> >
>>>>> >> >     }
>>>>> >> >
>>>>> >> > I would probably be able to call the KVMStorageManager to have it
>>>>> use my
>>>>> >> > StorageAdaptor to do what's necessary here.
>>>>> >> >
>>>>> >> >
>>>>> >> >
>>>>> >> >
>>>>> >> > On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski
>>>>> >> > <mi...@solidfire.com> wrote:
>>>>> >> >>
>>>>> >> >> Hey Marcus,
>>>>> >> >>
>>>>> >> >> When I implemented support in the XenServer and VMware plug-ins
>>>>> for
>>>>> >> >> "managed" storage, I started at the execute(AttachVolumeCommand)
>>>>> >> >> methods in
>>>>> >> >> both plug-ins.
>>>>> >> >>
>>>>> >> >> The code there was changed to check the AttachVolumeCommand
>>>>> instance
>>>>> >> >> for a
>>>>> >> >> "managed" property.
>>>>> >> >>
>>>>> >> >> If managed was false, the normal attach/detach logic would just
>>>>> run and
>>>>> >> >> the volume would be attached or detached.
>>>>> >> >>
>>>>> >> >> If managed was true, new 4.2 logic would run to create (let's
>>>>> talk
>>>>> >> >> XenServer here) a new SR and a new VDI inside of that SR (or to
>>>>> >> >> reattach an
>>>>> >> >> existing VDI inside an existing SR, if this wasn't the first
>>>>> time the
>>>>> >> >> volume
>>>>> >> >> was attached). If managed was true and we were detaching the
>>>>> volume,
>>>>> >> >> the SR
>>>>> >> >> would be detached from the XenServer hosts.
>>>>> >> >>
>>>>> >> >> I am currently walking through the execute(AttachVolumeCommand)
>>>>> in
>>>>> >> >> LibvirtComputingResource.java.
>>>>> >> >>
>>>>> >> >> I see how the XML is constructed to describe whether a disk
>>>>> should be
>>>>> >> >> attached or detached. I also see how we call in to get a
>>>>> StorageAdapter
>>>>> >> >> (and
>>>>> >> >> how I will likely need to write a new one of these).
>>>>> >> >>
>>>>> >> >> So, talking in XenServer terminology again, I was wondering if
>>>>> you
>>>>> >> >> think
>>>>> >> >> the approach we took in 4.2 with creating and deleting SRs in the
>>>>> >> >> execute(AttachVolumeCommand) method would work here or if there
>>>>> is some
>>>>> >> >> other way I should be looking at this for KVM?
>>>>> >> >>
>>>>> >> >> As it is right now for KVM, storage has to be set up ahead of
>>>>> time.
>>>>> >> >> Assuming this is the case, there probably isn't currently a
>>>>> place I can
>>>>> >> >> easily inject my logic to discover and log in to iSCSI targets.
>>>>> This is
>>>>> >> >> why
>>>>> >> >> we did it as needed in the execute(AttachVolumeCommand) for
>>>>> XenServer
>>>>> >> >> and
>>>>> >> >> VMware, but I wanted to see if you have an alternative way that
>>>>> might
>>>>> >> >> be
>>>>> >> >> better for KVM.
>>>>> >> >>
>>>>> >> >> One possible way to do this would be to modify VolumeManagerImpl
>>>>> (or
>>>>> >> >> whatever its equivalent is in 4.3) before it issues an
>>>>> attach-volume
>>>>> >> >> command
>>>>> >> >> to KVM to check to see if the volume is to be attached to managed
>>>>> >> >> storage.
>>>>> >> >> If it is, then (before calling the attach-volume command in KVM)
>>>>> call
>>>>> >> >> the
>>>>> >> >> create-storage-pool command in KVM (or whatever it might be
>>>>> called).
>>>>> >> >>
>>>>> >> >> Just wanted to get some of your thoughts on this.
>>>>> >> >>
>>>>> >> >> Thanks!
>>>>> >> >>
>>>>> >> >>
>>>>> >> >> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski
>>>>> >> >> <mi...@solidfire.com> wrote:
>>>>> >> >>>
>>>>> >> >>> Yeah, I remember that StorageProcessor stuff being put in the
>>>>> codebase
>>>>> >> >>> and having to merge my code into it in 4.2.
>>>>> >> >>>
>>>>> >> >>> Thanks for all the details, Marcus! :)
>>>>> >> >>>
>>>>> >> >>> I can start digging into what you were talking about now.
>>>>> >> >>>
>>>>> >> >>>
>>>>> >> >>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen
>>>>> >> >>> <sh...@gmail.com>
>>>>> >> >>> wrote:
>>>>> >> >>>>
>>>>> >> >>>> Looks like things might be slightly different now in 4.2, with
>>>>> >> >>>> KVMStorageProcessor.java in the mix. This looks more or less
>>>>> like some
>>>>> >> >>>> of the commands were ripped out verbatim from
>>>>> >> >>>> LibvirtComputingResource
>>>>> >> >>>> and placed here, so in general what I've said is probably
>>>>> still true,
>>>>> >> >>>> just that the location of things like AttachVolumeCommand
>>>>> might be
>>>>> >> >>>> different, in this file rather than
>>>>> LibvirtComputingResource.java.
>>>>> >> >>>>
>>>>> >> >>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen
>>>>> >> >>>> <sh...@gmail.com>
>>>>> >> >>>> wrote:
>>>>> >> >>>> > Ok, KVM will be close to that, of course, because only the
>>>>> >> >>>> > hypervisor
>>>>> >> >>>> > classes differ, the rest is all mgmt server. Creating a
>>>>> volume is
>>>>> >> >>>> > just
>>>>> >> >>>> > a db entry until it's deployed for the first time.
>>>>> >> >>>> > AttachVolumeCommand
>>>>> >> >>>> > on the agent side (LibvirtComputingResource.java is analogous to
>>>>> >> >>>> > CitrixResourceBase.java) will do the iscsiadm commands (via
>>>>> a KVM
>>>>> >> >>>> > StorageAdaptor) to log in the host to the target and then
>>>>> you have
>>>>> >> >>>> > a
>>>>> >> >>>> > block device.  Maybe libvirt will do that for you, but my
>>>>> quick
>>>>> >> >>>> > read
>>>>> >> >>>> > made it sound like the iscsi libvirt pool type is actually a
>>>>> pool,
>>>>> >> >>>> > not
>>>>> >> >>>> > a lun or volume, so you'll need to figure out if that works
>>>>> or if
>>>>> >> >>>> > you'll have to use iscsiadm commands.
>>>>> >> >>>> >
>>>>> >> >>>> > If you're NOT going to use LibvirtStorageAdaptor (because
>>>>> Libvirt
>>>>> >> >>>> > doesn't really manage your pool the way you want), you're
>>>>> going to
>>>>> >> >>>> > have to create a version of KVMStoragePool class and a
>>>>> >> >>>> > StorageAdaptor
>>>>> >> >>>> > class (see LibvirtStoragePool.java and
>>>>> LibvirtStorageAdaptor.java),
>>>>> >> >>>> > implementing all of the methods, then in
>>>>> KVMStorageManager.java
>>>>> >> >>>> > there's a "_storageMapper" map. This is used to select the
>>>>> correct
>>>>> >> >>>> > adaptor, you can see in this file that every call first
>>>>> pulls the
>>>>> >> >>>> > correct adaptor out of this map via getStorageAdaptor. So
>>>>> you can
>>>>> >> >>>> > see
>>>>> >> >>>> > a comment in this file that says "add other storage adaptors
>>>>> here",
>>>>> >> >>>> > where it puts to this map, this is where you'd register your
>>>>> >> >>>> > adaptor.
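>>>>> >> >>>> >
>>>>> >> >>>> > For illustration, the registration itself is just a put into that
>>>>> >> >>>> > map, along these lines (the key string and adaptor class name are
>>>>> >> >>>> > placeholders - key it however the existing entries are keyed):
>>>>> >> >>>> >
>>>>> >> >>>> >     // next to the existing LibvirtStorageAdaptor registration
>>>>> >> >>>> >     _storageMapper.put("your-pool-type", new SolidFireStorageAdaptor());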
>>>>> >> >>>> >
>>>>> >> >>>> > So, referencing StorageAdaptor.java, createStoragePool
>>>>> accepts all
>>>>> >> >>>> > of
>>>>> >> >>>> > the pool data (host, port, name, path) which would be used
>>>>> to log
>>>>> >> >>>> > the
>>>>> >> >>>> > host into the initiator. I *believe* the method
>>>>> getPhysicalDisk
>>>>> >> >>>> > will
>>>>> >> >>>> > need to do the work of attaching the lun.
>>>>>  AttachVolumeCommand
>>>>> >> >>>> > calls
>>>>> >> >>>> > this and then creates the XML diskdef and attaches it to the
>>>>> VM.
>>>>> >> >>>> > Now,
>>>>> >> >>>> > one thing you need to know is that createStoragePool is
>>>>> called
>>>>> >> >>>> > often,
>>>>> >> >>>> > sometimes just to make sure the pool is there. You may want
>>>>> to
>>>>> >> >>>> > create
>>>>> >> >>>> > a map in your adaptor class and keep track of pools that
>>>>> have been
>>>>> >> >>>> > created, LibvirtStorageAdaptor doesn't have to do this
>>>>> because it
>>>>> >> >>>> > asks
>>>>> >> >>>> > libvirt about which storage pools exist. There are also
>>>>> calls to
>>>>> >> >>>> > refresh the pool stats, and all of the other calls can be
>>>>> seen in
>>>>> >> >>>> > the
>>>>> >> >>>> > StorageAdaptor as well. There's a createPhysical disk,
>>>>> clone, etc,
>>>>> >> >>>> > but
>>>>> >> >>>> > it's probably a hold-over from 4.1, as I have the vague idea
>>>>> that
>>>>> >> >>>> > volumes are created on the mgmt server via the plugin now, so
>>>>> >> >>>> > whatever
>>>>> >> >>>> > doesn't apply can just be stubbed out (or optionally
>>>>> >> >>>> > extended/reimplemented here, if you don't mind the hosts
>>>>> talking to
>>>>> >> >>>> > the san api).
>>>>> >> >>>> >
>>>>> >> >>>> > There is a difference between attaching new volumes and
>>>>> launching a
>>>>> >> >>>> > VM
>>>>> >> >>>> > with existing volumes.  In the latter case, the VM
>>>>> definition that
>>>>> >> >>>> > was
>>>>> >> >>>> > passed to the KVM agent includes the disks, (StartCommand).
>>>>> >> >>>> >
>>>>> >> >>>> > I'd be interested in how your pool is defined for Xen, I
>>>>> imagine it
>>>>> >> >>>> > would need to be kept the same. Is it just a definition to
>>>>> the SAN
>>>>> >> >>>> > (ip address or some such, port number) and perhaps a volume
>>>>> pool
>>>>> >> >>>> > name?
>>>>> >> >>>> >
>>>>> >> >>>> >> If there is a way for me to update the ACL list on the SAN
>>>>> to have
>>>>> >> >>>> >> only a
>>>>> >> >>>> >> single KVM host have access to the volume, that would be
>>>>> ideal.
>>>>> >> >>>> >
>>>>> >> >>>> > That depends on your SAN API.  I was under the impression
>>>>> that the
>>>>> >> >>>> > storage plugin framework allowed for acls, or for you to do
>>>>> >> >>>> > whatever
>>>>> >> >>>> > you want for create/attach/delete/snapshot, etc. You'd just
>>>>> call
>>>>> >> >>>> > your
>>>>> >> >>>> > SAN API with the host info for the ACLs prior to when the
>>>>> disk is
>>>>> >> >>>> > attached (or the VM is started).  I'd have to look more at
>>>>> the
>>>>> >> >>>> > framework to know the details, in 4.1 I would do this in
>>>>> >> >>>> > getPhysicalDisk just prior to connecting up the LUN.
>>>>> >> >>>> >
>>>>> >> >>>> >
>>>>> >> >>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>>>> >> >>>> > <mi...@solidfire.com> wrote:
>>>>> >> >>>> >> OK, yeah, the ACL part will be interesting. That is a bit
>>>>> >> >>>> >> different
>>>>> >> >>>> >> from how
>>>>> >> >>>> >> it works with XenServer and VMware.
>>>>> >> >>>> >>
>>>>> >> >>>> >> Just to give you an idea how it works in 4.2 with XenServer:
>>>>> >> >>>> >>
>>>>> >> >>>> >> * The user creates a CS volume (this is just recorded in the
>>>>> >> >>>> >> cloud.volumes
>>>>> >> >>>> >> table).
>>>>> >> >>>> >>
>>>>> >> >>>> >> * The user attaches the volume as a disk to a VM for the
>>>>> first
>>>>> >> >>>> >> time
>>>>> >> >>>> >> (if the
>>>>> >> >>>> >> storage allocator picks the SolidFire plug-in, the storage
>>>>> >> >>>> >> framework
>>>>> >> >>>> >> invokes
>>>>> >> >>>> >> a method on the plug-in that creates a volume on the
>>>>> SAN...info
>>>>> >> >>>> >> like
>>>>> >> >>>> >> the IQN
>>>>> >> >>>> >> of the SAN volume is recorded in the DB).
>>>>> >> >>>> >>
>>>>> >> >>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is
>>>>> executed.
>>>>> >> >>>> >> It
>>>>> >> >>>> >> determines based on a flag passed in that the storage in
>>>>> question
>>>>> >> >>>> >> is
>>>>> >> >>>> >> "CloudStack-managed" storage (as opposed to "traditional"
>>>>> >> >>>> >> preallocated
>>>>> >> >>>> >> storage). This tells it to discover the iSCSI target. Once
>>>>> >> >>>> >> discovered
>>>>> >> >>>> >> it
>>>>> >> >>>> >> determines if the iSCSI target already contains a storage
>>>>> >> >>>> >> repository
>>>>> >> >>>> >> (it
>>>>> >> >>>> >> would if this were a re-attach situation). If it does
>>>>> contain an
>>>>> >> >>>> >> SR
>>>>> >> >>>> >> already,
>>>>> >> >>>> >> then there should already be one VDI, as well. If there is
>>>>> no SR,
>>>>> >> >>>> >> an
>>>>> >> >>>> >> SR is
>>>>> >> >>>> >> created and a single VDI is created within it (that takes
>>>>> up about
>>>>> >> >>>> >> as
>>>>> >> >>>> >> much
>>>>> >> >>>> >> space as was requested for the CloudStack volume).
>>>>> >> >>>> >>
>>>>> >> >>>> >> * The normal attach-volume logic continues (it depends on
>>>>> the
>>>>> >> >>>> >> existence of
>>>>> >> >>>> >> an SR and a VDI).
>>>>> >> >>>> >>
>>>>> >> >>>> >> The VMware case is essentially the same (mainly just
>>>>> substitute
>>>>> >> >>>> >> datastore
>>>>> >> >>>> >> for SR and VMDK for VDI).
>>>>> >> >>>> >>
>>>>> >> >>>> >> In both cases, all hosts in the cluster have discovered the
>>>>> iSCSI
>>>>> >> >>>> >> target,
>>>>> >> >>>> >> but only the host that is currently running the VM that is
>>>>> using
>>>>> >> >>>> >> the
>>>>> >> >>>> >> VDI (or
>>>>> >> >>>> >> VMDK) is actually using the disk.
>>>>> >> >>>> >>
>>>>> >> >>>> >> Live Migration should be OK because the hypervisors
>>>>> communicate
>>>>> >> >>>> >> with
>>>>> >> >>>> >> whatever metadata they have on the SR (or datastore).
>>>>> >> >>>> >>
>>>>> >> >>>> >> I see what you're saying with KVM, though.
>>>>> >> >>>> >>
>>>>> >> >>>> >> In that case, the hosts are clustered only in CloudStack's
>>>>> eyes.
>>>>> >> >>>> >> CS
>>>>> >> >>>> >> controls
>>>>> >> >>>> >> Live Migration. You don't really need a clustered
>>>>> filesystem on
>>>>> >> >>>> >> the
>>>>> >> >>>> >> LUN. The
>>>>> >> >>>> >> LUN could be handed over raw to the VM using it.
>>>>> >> >>>> >>
>>>>> >> >>>> >> If there is a way for me to update the ACL list on the SAN
>>>>> to have
>>>>> >> >>>> >> only a
>>>>> >> >>>> >> single KVM host have access to the volume, that would be
>>>>> ideal.
>>>>> >> >>>> >>
>>>>> >> >>>> >> Also, I agree I'll need to use iscsiadm to discover and log
>>>>> in to
>>>>> >> >>>> >> the
>>>>> >> >>>> >> iSCSI
>>>>> >> >>>> >> target. I'll also need to take the resultant new device and
>>>>> pass
>>>>> >> >>>> >> it
>>>>> >> >>>> >> into the
>>>>> >> >>>> >> VM.
>>>>> >> >>>> >>
>>>>> >> >>>> >> Does this sound reasonable? Please call me out on anything
>>>>> I seem
>>>>> >> >>>> >> incorrect
>>>>> >> >>>> >> about. :)
>>>>> >> >>>> >>
>>>>> >> >>>> >> Thanks for all the thought on this, Marcus!
>>>>> >> >>>> >>
>>>>> >> >>>> >>
>>>>> >> >>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen
>>>>> >> >>>> >> <sh...@gmail.com>
>>>>> >> >>>> >> wrote:
>>>>> >> >>>> >>>
>>>>> >> >>>> >>> Perfect. You'll have a domain def ( the VM), a disk def,
>>>>> and the
>>>>> >> >>>> >>> attach
>>>>> >> >>>> >>> the disk def to the vm. You may need to do your own
>>>>> >> >>>> >>> StorageAdaptor
>>>>> >> >>>> >>> and run
>>>>> >> >>>> >>> iscsiadm commands to accomplish that, depending on how the
>>>>> >> >>>> >>> libvirt
>>>>> >> >>>> >>> iscsi
>>>>> >> >>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't
>>>>> how it
>>>>> >> >>>> >>> works on
>>>>> >> >>>> >>> Xen at the moment, nor is it ideal.
>>>>> >> >>>> >>>
>>>>> >> >>>> >>> Your plugin will handle acls as far as which host can see
>>>>> which
>>>>> >> >>>> >>> luns
>>>>> >> >>>> >>> as
>>>>> >> >>>> >>> well, I remember discussing that months ago, so that a
>>>>> disk won't
>>>>> >> >>>> >>> be
>>>>> >> >>>> >>> connected until the hypervisor has exclusive access, so it
>>>>> will
>>>>> >> >>>> >>> be
>>>>> >> >>>> >>> safe and
>>>>> >> >>>> >>> fence the disk from rogue nodes that cloudstack loses
>>>>> >> >>>> >>> connectivity
>>>>> >> >>>> >>> with. It
>>>>> >> >>>> >>> should revoke access to everything but the target host...
>>>>> Except
>>>>> >> >>>> >>> for
>>>>> >> >>>> >>> during
>>>>> >> >>>> >>> migration but we can discuss that later, there's a
>>>>> migration prep
>>>>> >> >>>> >>> process
>>>>> >> >>>> >>> where the new host can be added to the acls, and the old
>>>>> host can
>>>>> >> >>>> >>> be
>>>>> >> >>>> >>> removed
>>>>> >> >>>> >>> post migration.
>>>>> >> >>>> >>>
>>>>> >> >>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski"
>>>>> >> >>>> >>> <mi...@solidfire.com>
>>>>> >> >>>> >>> wrote:
>>>>> >> >>>> >>>>
>>>>> >> >>>> >>>> Yeah, that would be ideal.
>>>>> >> >>>> >>>>
>>>>> >> >>>> >>>> So, I would still need to discover the iSCSI target, log
>>>>> in to
>>>>> >> >>>> >>>> it,
>>>>> >> >>>> >>>> then
>>>>> >> >>>> >>>> figure out what /dev/sdX was created as a result (and
>>>>> leave it
>>>>> >> >>>> >>>> as
>>>>> >> >>>> >>>> is - do
>>>>> >> >>>> >>>> not format it with any file system...clustered or not). I
>>>>> would
>>>>> >> >>>> >>>> pass that
>>>>> >> >>>> >>>> device into the VM.
>>>>> >> >>>> >>>>
>>>>> >> >>>> >>>> Kind of accurate?
>>>>> >> >>>> >>>>
>>>>> >> >>>> >>>>
>>>>> >> >>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
>>>>> >> >>>> >>>> <sh...@gmail.com>
>>>>> >> >>>> >>>> wrote:
>>>>> >> >>>> >>>>>
>>>>> >> >>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk
>>>>> definitions.
>>>>> >> >>>> >>>>> There are
>>>>> >> >>>> >>>>> ones that work for block devices rather than files. You
>>>>> can
>>>>> >> >>>> >>>>> piggy
>>>>> >> >>>> >>>>> back off
>>>>> >> >>>> >>>>> of the existing disk definitions and attach it to the vm
>>>>> as a
>>>>> >> >>>> >>>>> block device.
>>>>> >> >>>> >>>>> The definition is an XML string per libvirt XML format.
>>>>> You may
>>>>> >> >>>> >>>>> want to use
>>>>> >> >>>> >>>>> an alternate path to the disk rather than just /dev/sdx
>>>>> like I
>>>>> >> >>>> >>>>> mentioned,
>>>>> >> >>>> >>>>> there are by-id paths to the block devices, as well as
>>>>> other
>>>>> >> >>>> >>>>> ones
>>>>> >> >>>> >>>>> that will
>>>>> >> >>>> >>>>> be consistent and easier for management, not sure how
>>>>> familiar
>>>>> >> >>>> >>>>> you
>>>>> >> >>>> >>>>> are with
>>>>> >> >>>> >>>>> device naming on Linux.
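>>>>> >> >>>> >>>>>
>>>>> >> >>>> >>>>> To illustrate (rough sketch only; the by-id path below is a
>>>>> >> >>>> >>>>> placeholder and the target/bus values are just one sane choice),
>>>>> >> >>>> >>>>> the disk element you'd end up handing to libvirt is along the
>>>>> >> >>>> >>>>> lines of:
>>>>> >> >>>> >>>>>
>>>>> >> >>>> >>>>>     String dev = "/dev/disk/by-id/scsi-<wwid-of-the-lun>"; // placeholder
>>>>> >> >>>> >>>>>     String diskXml =
>>>>> >> >>>> >>>>>           "<disk type='block' device='disk'>"
>>>>> >> >>>> >>>>>         + "<driver name='qemu' type='raw' cache='none'/>"
>>>>> >> >>>> >>>>>         + "<source dev='" + dev + "'/>"
>>>>> >> >>>> >>>>>         + "<target dev='vdb' bus='virtio'/>"
>>>>> >> >>>> >>>>>         + "</disk>";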
>>>>> >> >>>> >>>>>
>>>>> >> >>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>>>>> >> >>>> >>>>> <sh...@gmail.com>
>>>>> >> >>>> >>>>> wrote:
>>>>> >> >>>> >>>>>>
>>>>> >> >>>> >>>>>> No, as that would rely on virtualized network/iscsi
>>>>> initiator
>>>>> >> >>>> >>>>>> inside
>>>>> >> >>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your
>>>>> lun on
>>>>> >> >>>> >>>>>> hypervisor) as
>>>>> >> >>>> >>>>>> a disk to the VM, rather than attaching some image file
>>>>> that
>>>>> >> >>>> >>>>>> resides on a
>>>>> >> >>>> >>>>>> filesystem, mounted on the host, living on a target.
>>>>> >> >>>> >>>>>>
>>>>> >> >>>> >>>>>> Actually, if you plan on the storage supporting live
>>>>> migration
>>>>> >> >>>> >>>>>> I
>>>>> >> >>>> >>>>>> think
>>>>> >> >>>> >>>>>> this is the only way. You can't put a filesystem on it
>>>>> and
>>>>> >> >>>> >>>>>> mount
>>>>> >> >>>> >>>>>> it in two
>>>>> >> >>>> >>>>>> places to facilitate migration unless its a clustered
>>>>> >> >>>> >>>>>> filesystem,
>>>>> >> >>>> >>>>>> in which
>>>>> >> >>>> >>>>>> case you're back to shared mount point.
>>>>> >> >>>> >>>>>>
>>>>> >> >>>> >>>>>> As far as I'm aware, the xenserver SR style is
>>>>> basically LVM
>>>>> >> >>>> >>>>>> with
>>>>> >> >>>> >>>>>> a xen
>>>>> >> >>>> >>>>>> specific cluster management, a custom CLVM. They don't
>>>>> use a
>>>>> >> >>>> >>>>>> filesystem
>>>>> >> >>>> >>>>>> either.
>>>>> >> >>>> >>>>>>
>>>>> >> >>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>>>> >> >>>> >>>>>> <mi...@solidfire.com> wrote:
>>>>> >> >>>> >>>>>>>
>>>>> >> >>>> >>>>>>> When you say, "wire up the lun directly to the vm," do
>>>>> you
>>>>> >> >>>> >>>>>>> mean
>>>>> >> >>>> >>>>>>> circumventing the hypervisor? I didn't think we could
>>>>> do that
>>>>> >> >>>> >>>>>>> in
>>>>> >> >>>> >>>>>>> CS.
>>>>> >> >>>> >>>>>>> OpenStack, on the other hand, always circumvents the
>>>>> >> >>>> >>>>>>> hypervisor,
>>>>> >> >>>> >>>>>>> as far as I
>>>>> >> >>>> >>>>>>> know.
>>>>> >> >>>> >>>>>>>
>>>>> >> >>>> >>>>>>>
>>>>> >> >>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
>>>>> >> >>>> >>>>>>> <sh...@gmail.com>
>>>>> >> >>>> >>>>>>> wrote:
>>>>> >> >>>> >>>>>>>>
>>>>> >> >>>> >>>>>>>> Better to wire up the lun directly to the vm unless
>>>>> there is
>>>>> >> >>>> >>>>>>>> a
>>>>> >> >>>> >>>>>>>> good
>>>>> >> >>>> >>>>>>>> reason not to.
>>>>> >> >>>> >>>>>>>>
>>>>> >> >>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen"
>>>>> >> >>>> >>>>>>>> <sh...@gmail.com>
>>>>> >> >>>> >>>>>>>> wrote:
>>>>> >> >>>> >>>>>>>>>
>>>>> >> >>>> >>>>>>>>> You could do that, but as mentioned I think its a
>>>>> mistake
>>>>> >> >>>> >>>>>>>>> to
>>>>> >> >>>> >>>>>>>>> go to
>>>>> >> >>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes
>>>>> to luns
>>>>> >> >>>> >>>>>>>>> and then putting
>>>>> >> >>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a
>>>>> QCOW2
>>>>> >> >>>> >>>>>>>>> or
>>>>> >> >>>> >>>>>>>>> even RAW disk
>>>>> >> >>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops
>>>>> along
>>>>> >> >>>> >>>>>>>>> the
>>>>> >> >>>> >>>>>>>>> way, and have
>>>>> >> >>>> >>>>>>>>> more overhead with the filesystem and its
>>>>> journaling, etc.
>>>>> >> >>>> >>>>>>>>>
>>>>> >> >>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>>>> >> >>>> >>>>>>>>> <mi...@solidfire.com> wrote:
>>>>> >> >>>> >>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in
>>>>> KVM with
>>>>> >> >>>> >>>>>>>>>> CS.
>>>>> >> >>>> >>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS
>>>>> today is by
>>>>> >> >>>> >>>>>>>>>> selecting SharedMountPoint and specifying the
>>>>> location of
>>>>> >> >>>> >>>>>>>>>> the
>>>>> >> >>>> >>>>>>>>>> share.
>>>>> >> >>>> >>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>> They can set up their share using Open iSCSI by
>>>>> >> >>>> >>>>>>>>>> discovering
>>>>> >> >>>> >>>>>>>>>> their
>>>>> >> >>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it
>>>>> somewhere
>>>>> >> >>>> >>>>>>>>>> on
>>>>> >> >>>> >>>>>>>>>> their file
>>>>> >> >>>> >>>>>>>>>> system.
>>>>> >> >>>> >>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>> Would it make sense for me to just do that
>>>>> discovery,
>>>>> >> >>>> >>>>>>>>>> logging
>>>>> >> >>>> >>>>>>>>>> in,
>>>>> >> >>>> >>>>>>>>>> and mounting behind the scenes for them and letting
>>>>> the
>>>>> >> >>>> >>>>>>>>>> current code manage
>>>>> >> >>>> >>>>>>>>>> the rest as it currently does?
>>>>> >> >>>> >>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>>>>> >> >>>> >>>>>>>>>> <sh...@gmail.com> wrote:
>>>>> >> >>>> >>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I
>>>>> need to
>>>>> >> >>>> >>>>>>>>>>> catch up
>>>>> >> >>>> >>>>>>>>>>> on the work done in KVM, but this is basically
>>>>> just disk
>>>>> >> >>>> >>>>>>>>>>> snapshots + memory
>>>>> >> >>>> >>>>>>>>>>> dump. I still think disk snapshots would
>>>>> preferably be
>>>>> >> >>>> >>>>>>>>>>> handled by the SAN,
>>>>> >> >>>> >>>>>>>>>>> and then memory dumps can go to secondary storage
>>>>> or
>>>>> >> >>>> >>>>>>>>>>> something else. This is
>>>>> >> >>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will
>>>>> want to
>>>>> >> >>>> >>>>>>>>>>> see how others are
>>>>> >> >>>> >>>>>>>>>>> planning theirs.
>>>>> >> >>>> >>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen"
>>>>> >> >>>> >>>>>>>>>>> <sh...@gmail.com>
>>>>> >> >>>> >>>>>>>>>>> wrote:
>>>>> >> >>>> >>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a
>>>>> vdi
>>>>> >> >>>> >>>>>>>>>>>> style
>>>>> >> >>>> >>>>>>>>>>>> on an
>>>>> >> >>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
>>>>> >> >>>> >>>>>>>>>>>> format.
>>>>> >> >>>> >>>>>>>>>>>> Otherwise you're
>>>>> >> >>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it,
>>>>> creating
>>>>> >> >>>> >>>>>>>>>>>> a
>>>>> >> >>>> >>>>>>>>>>>> QCOW2 disk image,
>>>>> >> >>>> >>>>>>>>>>>> and that seems unnecessary and a performance
>>>>> killer.
>>>>> >> >>>> >>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk
>>>>> to the
>>>>> >> >>>> >>>>>>>>>>>> VM, and
>>>>> >> >>>> >>>>>>>>>>>> handling snapshots on the San side via the storage
>>>>> >> >>>> >>>>>>>>>>>> plugin
>>>>> >> >>>> >>>>>>>>>>>> is best. My
>>>>> >> >>>> >>>>>>>>>>>> impression from the storage plugin refactor was
>>>>> that
>>>>> >> >>>> >>>>>>>>>>>> there
>>>>> >> >>>> >>>>>>>>>>>> was a snapshot
>>>>> >> >>>> >>>>>>>>>>>> service that would allow the San to handle
>>>>> snapshots.
>>>>> >> >>>> >>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen"
>>>>> >> >>>> >>>>>>>>>>>> <sh...@gmail.com>
>>>>> >> >>>> >>>>>>>>>>>> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the
>>>>> SAN back
>>>>> >> >>>> >>>>>>>>>>>>> end, if
>>>>> >> >>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
>>>>> could
>>>>> >> >>>> >>>>>>>>>>>>> call
>>>>> >> >>>> >>>>>>>>>>>>> your plugin for
>>>>> >> >>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor
>>>>> agnostic. As
>>>>> >> >>>> >>>>>>>>>>>>> far as space, that
>>>>> >> >>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With
>>>>> ours, we
>>>>> >> >>>> >>>>>>>>>>>>> carve out luns from a
>>>>> >> >>>> >>>>>>>>>>>>> pool, and the snapshot space comes from the pool
>>>>> and is
>>>>> >> >>>> >>>>>>>>>>>>> independent of the
>>>>> >> >>>> >>>>>>>>>>>>> LUN size the host sees.
>>>>> >> >>>> >>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>>>>> >> >>>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>> Hey Marcus,
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
>>>>> libvirt
>>>>> >> >>>> >>>>>>>>>>>>>> won't
>>>>> >> >>>> >>>>>>>>>>>>>> work
>>>>> >> >>>> >>>>>>>>>>>>>> when you take into consideration hypervisor
>>>>> snapshots?
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor
>>>>> snapshot, the
>>>>> >> >>>> >>>>>>>>>>>>>> VDI for
>>>>> >> >>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage
>>>>> repository
>>>>> >> >>>> >>>>>>>>>>>>>> as
>>>>> >> >>>> >>>>>>>>>>>>>> the volume is on.
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for
>>>>> >> >>>> >>>>>>>>>>>>>> XenServer
>>>>> >> >>>> >>>>>>>>>>>>>> and
>>>>> >> >>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support
>>>>> hypervisor
>>>>> >> >>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd
>>>>> >> >>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what
>>>>> the user
>>>>> >> >>>> >>>>>>>>>>>>>> requested for the
>>>>> >> >>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
>>>>> >> >>>> >>>>>>>>>>>>>> thinly
>>>>> >> >>>> >>>>>>>>>>>>>> provisions volumes,
>>>>> >> >>>> >>>>>>>>>>>>>> so the space is not actually used unless it
>>>>> needs to
>>>>> >> >>>> >>>>>>>>>>>>>> be).
>>>>> >> >>>> >>>>>>>>>>>>>> The CloudStack
>>>>> >> >>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN
>>>>> volume
>>>>> >> >>>> >>>>>>>>>>>>>> until
>>>>> >> >>>> >>>>>>>>>>>>>> a hypervisor
>>>>> >> >>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also
>>>>> reside on
>>>>> >> >>>> >>>>>>>>>>>>>> the
>>>>> >> >>>> >>>>>>>>>>>>>> SAN volume.
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no
>>>>> >> >>>> >>>>>>>>>>>>>> creation
>>>>> >> >>>> >>>>>>>>>>>>>> of
>>>>> >> >>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt
>>>>> (which, even
>>>>> >> >>>> >>>>>>>>>>>>>> if
>>>>> >> >>>> >>>>>>>>>>>>>> there were support
>>>>> >> >>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN
>>>>> per
>>>>> >> >>>> >>>>>>>>>>>>>> iSCSI
>>>>> >> >>>> >>>>>>>>>>>>>> target), then I
>>>>> >> >>>> >>>>>>>>>>>>>> don't see how using this model will work.
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current
>>>>> way this
>>>>> >> >>>> >>>>>>>>>>>>>> works
>>>>> >> >>>> >>>>>>>>>>>>>> with DIR?
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>> What do you think?
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>> Thanks
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>>>>> >> >>>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
>>>>> access
>>>>> >> >>>> >>>>>>>>>>>>>>> today.
>>>>> >> >>>> >>>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I
>>>>> might as
>>>>> >> >>>> >>>>>>>>>>>>>>> well
>>>>> >> >>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>>> >> >>>> >>>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus
>>>>> Sorensen
>>>>> >> >>>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I
>>>>> believe
>>>>> >> >>>> >>>>>>>>>>>>>>>> it
>>>>> >> >>>> >>>>>>>>>>>>>>>> just
>>>>> >> >>>> >>>>>>>>>>>>>>>> acts like a
>>>>> >> >>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to
>>>>> that. The
>>>>> >> >>>> >>>>>>>>>>>>>>>> end-user
>>>>> >> >>>> >>>>>>>>>>>>>>>> is
>>>>> >> >>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that
>>>>> all KVM
>>>>> >> >>>> >>>>>>>>>>>>>>>> hosts can
>>>>> >> >>>> >>>>>>>>>>>>>>>> access,
>>>>> >> >>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is
>>>>> providing the
>>>>> >> >>>> >>>>>>>>>>>>>>>> storage.
>>>>> >> >>>> >>>>>>>>>>>>>>>> It could
>>>>> >> >>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>>>>> >> >>>> >>>>>>>>>>>>>>>> filesystem,
>>>>> >> >>>> >>>>>>>>>>>>>>>> cloudstack just
>>>>> >> >>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM
>>>>> >> >>>> >>>>>>>>>>>>>>>> images.
>>>>> >> >>>> >>>>>>>>>>>>>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
>>>>> Sorensen
>>>>> >> >>>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all
>>>>> at the
>>>>> >> >>>> >>>>>>>>>>>>>>>> > same
>>>>> >> >>>> >>>>>>>>>>>>>>>> > time.
>>>>> >> >>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >
>>>>> >> >>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
>>>>> Tutkowski
>>>>> >> >>>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage
>>>>> pools:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>> >> >>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
>>>>> >> >>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
>>>>> >> >>>> >>>>>>>>>>>>>>>> >> default              active     yes
>>>>> >> >>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
>>>>> Tutkowski
>>>>> >> >>>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
>>>>> pool
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> based on
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only
>>>>> have one
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> LUN, so
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> there would only
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in
>>>>> the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> (libvirt)
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pool.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and
>>>>> destroys
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> iSCSI
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> does
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> not support
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit
>>>>> to see
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> if
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> supports
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
>>>>> mentioned,
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> since
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> each one of its
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs).
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
>>>>> Tutkowski
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         @Override
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>     }
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> currently
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> being
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting
>>>>> at.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2),
>>>>> when
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> someone
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> selects the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
>>>>> iSCSI,
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> is
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> that
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Thanks!
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Sorensen
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the
>>>>> iSCSI
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> server, and
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> cannot be
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> believe
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> your
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
>>>>> logging
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> and
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
>>>>> work
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the Xen
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> stuff).
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> provides
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a 1:1
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
>>>>> device
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> as
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a
>>>>> bit more
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> about
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to
>>>>> write your
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> own
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> rather than changing
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> We
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can cross that
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see
>>>>> the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>> http://libvirt.org/sources/java/javadoc/
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Normally,
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls
>>>>> made to
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor
>>>>> to see
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> how
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> is done for
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some
>>>>> test
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> code
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to see if you
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
>>>>> iscsi
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pools before you
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> get started.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Tutkowski
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate
>>>>> libvirt
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > more,
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > but
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > supports
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > targets,
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > right?
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > Tutkowski
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some
>>>>> of the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> classes
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> last
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM,
>>>>> Marcus
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need
>>>>> the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> iscsi
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> packages
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> for
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> login.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> and
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
>>>>> Tutkowski"
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> release
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
>>>>> storage
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create
>>>>> and
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can
>>>>> establish a
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> 1:1
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for
>>>>> QoS.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
>>>>> expected
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
>>>>> volumes
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> would
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
>>>>> friendly).
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme
>>>>> work, I
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so
>>>>> they
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> could
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
>>>>> with
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how
>>>>> this
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> might
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
>>>>> how I
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
>>>>> have to
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use
>>>>> it for
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer,
>>>>> SolidFire
>>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Inc.
>>>>> >> >>>> >>>>>>
>>>>>
>>>> ...
>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
That's right
On Sep 16, 2013 12:31 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> I understand what you're saying now, Marcus.
>
> I wasn't sure if the Libvirt iSCSI Storage Pool was still an option
> (looking into that still), but I see what you mean: If it is, we don't need
> a new adaptor; otherwise, we do.
>
> If Libvirt's iSCSI Storage Pool does work, I could update the current
> adaptor, if need be, to make use of it.
>
>
> On Mon, Sep 16, 2013 at 12:24 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Well, you'd use neither of the two pool types, because you are not
>> letting libvirt handle the pool, you are doing it with your own pool and
>> adaptor class. Libvirt will be unaware of everything but the disk XML you
>> attach to a VM. You'd only use those if libvirt's functions were
>> advantageous, i.e. if it already did everything you want. Since neither of
>> those seem to provide both iscsi and the 1:1 mapping you want that's why we
>> are talking about your own pool/adaptor.
>>
>> You can log into the target via your implementation of getPhysicalDisk as
>> you mention in AttachVolumeCommand, or log in during your implementation of
>> createStoragePool and simply rescan for luns in getPhysicalDisk. Presumably
>> in most cases the host will be logged in already and new luns have been
>> created in the meantime.
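>>
>> A rough, untested sketch of that login/rescan step (assumes open-iscsi is
>> installed on the host and just shells out to iscsiadm; the class and method
>> names are made up):
>>
>>     import java.io.IOException;
>>     import java.util.Arrays;
>>
>>     public class IscsiHelper {
>>         // discover the portal, log in to the target, then rescan so a newly
>>         // created LUN shows up as a block device on the host
>>         static void loginAndRescan(String portalIp, String iqn)
>>                 throws IOException, InterruptedException {
>>             run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portalIp);
>>             run("iscsiadm", "-m", "node", "-T", iqn, "-p", portalIp, "--login");
>>             run("iscsiadm", "-m", "session", "--rescan");
>>         }
>>
>>         static void run(String... cmd) throws IOException, InterruptedException {
>>             Process p = new ProcessBuilder(cmd).inheritIO().start();
>>             if (p.waitFor() != 0) {
>>                 throw new IOException("command failed: " + Arrays.toString(cmd));
>>             }
>>         }
>>     }
>>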
>> On Sep 16, 2013 12:09 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>
>>> Hey Marcus,
>>>
>>> Thanks for that clarification.
>>>
>>> Sorry if this is a redundant question:
>>>
>>> When the AttachVolumeCommand comes in, it sounds like we thought the
>>> best approach would be for me to discover and log in to the iSCSI target
>>> using iscsiadm.
>>>
>>> This will create a new device: /dev/sdX.
>>>
>>> We would then pass this new device into the VM (passing XML into the
>>> appropriate Libvirt API).
>>>
>>> If this is an accurate understanding, can you tell me: Do you think we
>>> should be using a Disk Storage Pool or an iSCSI Storage Pool?
>>>
>>> I believe I recall you leaning toward a Disk Storage Pool because we
>>> will have already discovered the iSCSI target and, as such, will already
>>> have a device to pass into the VM.
>>>
>>> It seems like either way would work.
>>>
>>> Maybe I need to study Libvirt's iSCSI Storage Pools more to understand
>>> if they would do the work of discovering the iSCSI target for me (and maybe
>>> avoid me having to use iscsiadm).
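>>>
>>> (For what it's worth, a quick way to test that would be a few lines against
>>> the libvirt Java bindings; the pool XML below is my guess at the shape of an
>>> iscsi pool - one target per pool - and the host/IQN values are placeholders:)
>>>
>>>     import org.libvirt.Connect;
>>>     import org.libvirt.StoragePool;
>>>
>>>     public class IscsiPoolTest {
>>>         public static void main(String[] args) throws Exception {
>>>             Connect conn = new Connect("qemu:///system");
>>>             String xml =
>>>                   "<pool type='iscsi'>"
>>>                 + "<name>sf-test</name>"
>>>                 + "<source>"
>>>                 + "<host name='192.168.1.100'/>"                       // SAN portal (placeholder)
>>>                 + "<device path='iqn.2010-01.com.solidfire:example'/>" // target IQN (placeholder)
>>>                 + "</source>"
>>>                 + "<target><path>/dev/disk/by-path</path></target>"
>>>                 + "</pool>";
>>>             // if libvirt really handles the discovery/login, the target's LUNs
>>>             // should show up as volumes in this pool
>>>             StoragePool pool = conn.storagePoolCreateXML(xml, 0);
>>>             for (String vol : pool.listVolumes()) {
>>>                 System.out.println("volume: " + vol);
>>>             }
>>>         }
>>>     }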
>>>
>>> Thanks for the clarification! :)
>>>
>>>
>>> On Mon, Sep 16, 2013 at 11:08 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>
>>>> It will still register the pool.  You still have a primary storage
>>>> pool that you registered, whether it's local, cluster or zone wide.
>>>> NFS is optionally zone wide as well (I'm assuming customers can launch
>>>> your storage only cluster-wide if they choose for resource
>>>> partitioning), but it registers the pool in Libvirt prior to use.
>>>>
>>>> Here's a better explanation of what I meant.  AttachVolumeCommand gets
>>>> both pool and volume info. It first looks up the pool:
>>>>
>>>>     KVMStoragePool primary = _storagePoolMgr.getStoragePool(
>>>>                     cmd.getPooltype(),
>>>>                     cmd.getPoolUuid());
>>>>
>>>> Then it looks up the disk from that pool:
>>>>
>>>>     KVMPhysicalDisk disk = primary.getPhysicalDisk(cmd.getVolumePath());
>>>>
>>>> Most of the commands only pass volume info like this (getVolumePath
>>>> generally means the uuid of the volume), since it looks up the pool
>>>> separately. If you don't save the pool info in a map in your custom
>>>> class when createStoragePool is called, then getStoragePool won't be
>>>> able to find it. This is a simple thing in your implementation of
>>>> createStoragePool, just thought I'd mention it because it is key. Just
>>>> create a map of pool uuid and pool object and save them so they're
>>>> available across all implementations of that class.
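>>>>
>>>> Just to sketch how that lookup might end inside the custom pool class for a
>>>> raw LUN (illustrative only: the KVMPhysicalDisk constructor and format calls
>>>> are approximations of that class's API, the _sanIp field and the two helper
>>>> methods are assumptions, and the by-path name is just the usual udev form
>>>> for an iSCSI LUN):
>>>>
>>>>     public KVMPhysicalDisk getPhysicalDisk(String volumeUuid) {
>>>>         String iqn = getIqnForVolume(volumeUuid);   // assumed helper, e.g. derived from the volume path
>>>>         connectToTarget(_sanIp, iqn);               // assumed helper: iscsiadm discovery/login/rescan
>>>>         String devPath = "/dev/disk/by-path/ip-" + _sanIp + ":3260-iscsi-" + iqn + "-lun-0";
>>>>         KVMPhysicalDisk disk = new KVMPhysicalDisk(devPath, volumeUuid, this);
>>>>         disk.setFormat(PhysicalDiskFormat.RAW);     // raw LUN handed to the VM, no filesystem on top
>>>>         return disk;
>>>>     }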
>>>>
>>>> On Mon, Sep 16, 2013 at 10:43 AM, Mike Tutkowski
>>>> <mi...@solidfire.com> wrote:
>>>> > Thanks, Marcus
>>>> >
>>>> > About this:
>>>> >
>>>> > "When the agent connects to the
>>>> > management server, it registers all pools in the cluster with the
>>>> > agent."
>>>> >
>>>> > So, my plug-in allows you to create zone-wide primary storage. This
>>>> just
>>>> > means that any cluster can use the SAN (the SAN was registered as
>>>> primary
>>>> > storage as opposed to a preallocated volume from the SAN). Once you
>>>> create a
>>>> > primary storage based on this plug-in, the storage framework will
>>>> invoke the
>>>> > plug-in, as needed, to create and delete volumes on the SAN. For
>>>> example,
>>>> > you could have one SolidFire primary storage (zone wide) and
>>>> currently have
>>>> > 100 volumes created on the SAN to support it.
>>>> >
>>>> > In this case, what will the management server be registering with the
>>>> agent
>>>> > in ModifyStoragePool? If only the storage pool (primary storage) is
>>>> passed
>>>> > in, that will be too vague as it does not contain information on what
>>>> > volumes have been created for the agent.
>>>> >
>>>> > Thanks
>>>> >
>>>> >
>>>> > On Sun, Sep 15, 2013 at 11:53 PM, Marcus Sorensen <
>>>> shadowsor@gmail.com>
>>>> > wrote:
>>>> >>
>>>> >> Yes, see my previous email from the 13th. You can create your own
>>>> >> KVMStoragePool class, and StorageAdaptor class, like the libvirt ones
>>>> >> have. The previous email outlines how to add your own StorageAdaptor
>>>> >> alongside LibvirtStorageAdaptor to take over all of the calls
>>>> >> (createStoragePool, getStoragePool, etc). As mentioned,
>>>> >> getPhysicalDisk I believe will be the one you use to actually attach
>>>> a
>>>> >> lun.
>>>> >>
>>>> >> Ignore CreateStoragePoolCommand. When the agent connects to the
>>>> >> management server, it registers all pools in the cluster with the
>>>> >> agent. It will call ModifyStoragePoolCommand, passing your storage
>>>> >> pool object (with all of the settings for your SAN). This in turn
>>>> >> calls _storagePoolMgr.createStoragePool, which will route through
>>>> >> KVMStoragePoolManager to your storage adapter that you've registered.
>>>> >> The last argument to createStoragePool is the pool type, which is
>>>> used
>>>> >> to select a StorageAdaptor.
>>>> >>
>>>> >> From then on, most calls will only pass the volume info, and the
>>>> >> volume will have the uuid of the storage pool. For this reason, your
>>>> >> adaptor class needs to have a static Map variable that contains pool
>>>> >> uuid and pool object. Whenever they call createStoragePool on your
>>>> >> adaptor you add that pool to the map so that subsequent volume calls
>>>> >> can look up the pool details for the volume by pool uuid. With the
>>>> >> Libvirt adaptor, libvirt keeps track of that for you.
>>>> >>
>>>> >> When createStoragePool is called, you can log into the iscsi target
>>>> >> (or make sure you are already logged in, as it can be called over
>>>> >> again at any time), and when attach volume commands are fired off,
>>>> you
>>>> >> can attach individual LUNs that are asked for, or rescan (say that
>>>> the
>>>> >> plugin created a new ACL just prior to calling attach), or whatever
>>>> is
>>>> >> necessary.
>>>> >>
>>>> >> KVM is a bit more work, but you can do anything you want. Actually, I
>>>> >> think you can call host scripts with Xen, but having the agent there
>>>> >> that runs your own code gives you the flexibility to do whatever.
>>>> >>
>>>> >> On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski
>>>> >> <mi...@solidfire.com> wrote:
>>>> >> > I see right now LibvirtComputingResource.java has the following
>>>> method
>>>> >> > that
>>>> >> > I might be able to leverage (it's probably not called at present
>>>> and
>>>> >> > would
>>>> >> > need to be implemented in my case to discover my iSCSI target and
>>>> log in
>>>> >> > to
>>>> >> > it):
>>>> >> >
>>>> >> >     protected Answer execute(CreateStoragePoolCommand cmd) {
>>>> >> >
>>>> >> >         return new Answer(cmd, true, "success");
>>>> >> >
>>>> >> >     }
>>>> >> >
>>>> >> > I would probably be able to call the KVMStorageManager to have it
>>>> use my
>>>> >> > StorageAdaptor to do what's necessary here.
>>>> >> >
>>>> >> >
>>>> >> >
>>>> >> >
>>>> >> > On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski
>>>> >> > <mi...@solidfire.com> wrote:
>>>> >> >>
>>>> >> >> Hey Marcus,
>>>> >> >>
>>>> >> >> When I implemented support in the XenServer and VMware plug-ins
>>>> for
>>>> >> >> "managed" storage, I started at the execute(AttachVolumeCommand)
>>>> >> >> methods in
>>>> >> >> both plug-ins.
>>>> >> >>
>>>> >> >> The code there was changed to check the AttachVolumeCommand
>>>> instance
>>>> >> >> for a
>>>> >> >> "managed" property.
>>>> >> >>
>>>> >> >> If managed was false, the normal attach/detach logic would just
>>>> run and
>>>> >> >> the volume would be attached or detached.
>>>> >> >>
>>>> >> >> If managed was true, new 4.2 logic would run to create (let's talk
>>>> >> >> XenServer here) a new SR and a new VDI inside of that SR (or to
>>>> >> >> reattach an
>>>> >> >> existing VDI inside an existing SR, if this wasn't the first time
>>>> the
>>>> >> >> volume
>>>> >> >> was attached). If managed was true and we were detaching the
>>>> volume,
>>>> >> >> the SR
>>>> >> >> would be detached from the XenServer hosts.
>>>> >> >>
>>>> >> >> I am currently walking through the execute(AttachVolumeCommand) in
>>>> >> >> LibvirtComputingResource.java.
>>>> >> >>
>>>> >> >> I see how the XML is constructed to describe whether a disk
>>>> should be
>>>> >> >> attached or detached. I also see how we call in to get a
>>>> StorageAdapter
>>>> >> >> (and
>>>> >> >> how I will likely need to write a new one of these).
>>>> >> >>
>>>> >> >> So, talking in XenServer terminology again, I was wondering if you
>>>> >> >> think
>>>> >> >> the approach we took in 4.2 with creating and deleting SRs in the
>>>> >> >> execute(AttachVolumeCommand) method would work here or if there
>>>> is some
>>>> >> >> other way I should be looking at this for KVM?
>>>> >> >>
>>>> >> >> As it is right now for KVM, storage has to be set up ahead of
>>>> time.
>>>> >> >> Assuming this is the case, there probably isn't currently a place
>>>> I can
>>>> >> >> easily inject my logic to discover and log in to iSCSI targets.
>>>> This is
>>>> >> >> why
>>>> >> >> we did it as needed in the execute(AttachVolumeCommand) for
>>>> XenServer
>>>> >> >> and
>>>> >> >> VMware, but I wanted to see if you have an alternative way that
>>>> might
>>>> >> >> be
>>>> >> >> better for KVM.
>>>> >> >>
>>>> >> >> One possible way to do this would be to modify VolumeManagerImpl
>>>> (or
>>>> >> >> whatever its equivalent is in 4.3) before it issues an
>>>> attach-volume
>>>> >> >> command
>>>> >> >> to KVM to check to see if the volume is to be attached to managed
>>>> >> >> storage.
>>>> >> >> If it is, then (before calling the attach-volume command in KVM)
>>>> call
>>>> >> >> the
>>>> >> >> create-storage-pool command in KVM (or whatever it might be
>>>> called).
>>>> >> >>
>>>> >> >> Just wanted to get some of your thoughts on this.
>>>> >> >>
>>>> >> >> Thanks!
>>>> >> >>
>>>> >> >>
>>>> >> >> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski
>>>> >> >> <mi...@solidfire.com> wrote:
>>>> >> >>>
>>>> >> >>> Yeah, I remember that StorageProcessor stuff being put in the
>>>> codebase
>>>> >> >>> and having to merge my code into it in 4.2.
>>>> >> >>>
>>>> >> >>> Thanks for all the details, Marcus! :)
>>>> >> >>>
>>>> >> >>> I can start digging into what you were talking about now.
>>>> >> >>>
>>>> >> >>>
>>>> >> >>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen
>>>> >> >>> <sh...@gmail.com>
>>>> >> >>> wrote:
>>>> >> >>>>
>>>> >> >>>> Looks like things might be slightly different now in 4.2, with
>>>> >> >>>> KVMStorageProcessor.java in the mix. This looks more or less
>>>> like some
>>>> >> >>>> of the commands were ripped out verbatim from
>>>> >> >>>> LibvirtComputingResource
>>>> >> >>>> and placed here, so in general what I've said is probably still
>>>> true,
>>>> >> >>>> just that the location of things like AttachVolumeCommand might
>>>> be
>>>> >> >>>> different, in this file rather than
>>>> LibvirtComputingResource.java.
>>>> >> >>>>
>>>> >> >>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen
>>>> >> >>>> <sh...@gmail.com>
>>>> >> >>>> wrote:
>>>> >> >>>> > Ok, KVM will be close to that, of course, because only the
>>>> >> >>>> > hypervisor
>>>> >> >>>> > classes differ, the rest is all mgmt server. Creating a
>>>> volume is
>>>> >> >>>> > just
>>>> >> >>>> > a db entry until it's deployed for the first time.
>>>> >> >>>> > AttachVolumeCommand
>>>> >> >>>> > on the agent side (LibvirtStorageAdaptor.java is analogous to
>>>> >> >>>> > CitrixResourceBase.java) will do the iscsiadm commands (via a
>>>> KVM
>>>> >> >>>> > StorageAdaptor) to log in the host to the target and then you
>>>> have
>>>> >> >>>> > a
>>>> >> >>>> > block device.  Maybe libvirt will do that for you, but my
>>>> quick
>>>> >> >>>> > read
>>>> >> >>>> > made it sound like the iscsi libvirt pool type is actually a
>>>> pool,
>>>> >> >>>> > not
>>>> >> >>>> > a lun or volume, so you'll need to figure out if that works
>>>> or if
>>>> >> >>>> > you'll have to use iscsiadm commands.
>>>> >> >>>> >
>>>> >> >>>> > If you're NOT going to use LibvirtStorageAdaptor (because
>>>> Libvirt
>>>> >> >>>> > doesn't really manage your pool the way you want), you're
>>>> going to
>>>> >> >>>> > have to create a version of KVMStoragePool class and a
>>>> >> >>>> > StorageAdaptor
>>>> >> >>>> > class (see LibvirtStoragePool.java and
>>>> LibvirtStorageAdaptor.java),
>>>> >> >>>> > implementing all of the methods, then in
>>>> KVMStorageManager.java
>>>> >> >>>> > there's a "_storageMapper" map. This is used to select the
>>>> correct
>>>> >> >>>> > adaptor, you can see in this file that every call first pulls
>>>> the
>>>> >> >>>> > correct adaptor out of this map via getStorageAdaptor. So you
>>>> can
>>>> >> >>>> > see
>>>> >> >>>> > a comment in this file that says "add other storage adaptors
>>>> here",
>>>> >> >>>> > where it puts to this map, this is where you'd register your
>>>> >> >>>> > adaptor.
>>>> >> >>>> >
>>>> >> >>>> > So, referencing StorageAdaptor.java, createStoragePool
>>>> accepts all
>>>> >> >>>> > of
>>>> >> >>>> > the pool data (host, port, name, path) which would be used to
>>>> log
>>>> >> >>>> > the
>>>> >> >>>> > host into the initiator. I *believe* the method
>>>> getPhysicalDisk
>>>> >> >>>> > will
>>>> >> >>>> > need to do the work of attaching the lun.  AttachVolumeCommand
>>>> >> >>>> > calls
>>>> >> >>>> > this and then creates the XML diskdef and attaches it to the
>>>> VM.
>>>> >> >>>> > Now,
>>>> >> >>>> > one thing you need to know is that createStoragePool is called
>>>> >> >>>> > often,
>>>> >> >>>> > sometimes just to make sure the pool is there. You may want to
>>>> >> >>>> > create
>>>> >> >>>> > a map in your adaptor class and keep track of pools that have
>>>> been
>>>> >> >>>> > created, LibvirtStorageAdaptor doesn't have to do this
>>>> because it
>>>> >> >>>> > asks
>>>> >> >>>> > libvirt about which storage pools exist. There are also calls
>>>> to
>>>> >> >>>> > refresh the pool stats, and all of the other calls can be
>>>> seen in
>>>> >> >>>> > the
>>>> >> >>>> > StorageAdaptor as well. There's a createPhysical disk, clone,
>>>> etc,
>>>> >> >>>> > but
>>>> >> >>>> > it's probably a hold-over from 4.1, as I have the vague idea
>>>> that
>>>> >> >>>> > volumes are created on the mgmt server via the plugin now, so
>>>> >> >>>> > whatever
>>>> >> >>>> > doesn't apply can just be stubbed out (or optionally
>>>> >> >>>> > extended/reimplemented here, if you don't mind the hosts
>>>> talking to
>>>> >> >>>> > the san api).
>>>> >> >>>> >
>>>> >> >>>> > There is a difference between attaching new volumes and
>>>> launching a
>>>> >> >>>> > VM
>>>> >> >>>> > with existing volumes.  In the latter case, the VM definition
>>>> that
>>>> >> >>>> > was
>>>> >> >>>> > passed to the KVM agent includes the disks, (StartCommand).
>>>> >> >>>> >
>>>> >> >>>> > I'd be interested in how your pool is defined for Xen, I
>>>> imagine it
>>>> >> >>>> > would need to be kept the same. Is it just a definition to
>>>> the SAN
>>>> >> >>>> > (ip address or some such, port number) and perhaps a volume
>>>> pool
>>>> >> >>>> > name?
>>>> >> >>>> >
>>>> >> >>>> >> If there is a way for me to update the ACL list on the SAN
>>>> to have
>>>> >> >>>> >> only a
>>>> >> >>>> >> single KVM host have access to the volume, that would be
>>>> ideal.
>>>> >> >>>> >
>>>> >> >>>> > That depends on your SAN API.  I was under the impression
>>>> that the
>>>> >> >>>> > storage plugin framework allowed for acls, or for you to do
>>>> >> >>>> > whatever
>>>> >> >>>> > you want for create/attach/delete/snapshot, etc. You'd just
>>>> call
>>>> >> >>>> > your
>>>> >> >>>> > SAN API with the host info for the ACLs prior to when the
>>>> disk is
>>>> >> >>>> > attached (or the VM is started).  I'd have to look more at the
>>>> >> >>>> > framework to know the details, in 4.1 I would do this in
>>>> >> >>>> > getPhysicalDisk just prior to connecting up the LUN.
>>>> >> >>>> >
>>>> >> >>>> >
>>>> >> >>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>>> >> >>>> > <mi...@solidfire.com> wrote:
>>>> >> >>>> >> OK, yeah, the ACL part will be interesting. That is a bit
>>>> >> >>>> >> different
>>>> >> >>>> >> from how
>>>> >> >>>> >> it works with XenServer and VMware.
>>>> >> >>>> >>
>>>> >> >>>> >> Just to give you an idea how it works in 4.2 with XenServer:
>>>> >> >>>> >>
>>>> >> >>>> >> * The user creates a CS volume (this is just recorded in the
>>>> >> >>>> >> cloud.volumes
>>>> >> >>>> >> table).
>>>> >> >>>> >>
>>>> >> >>>> >> * The user attaches the volume as a disk to a VM for the
>>>> first
>>>> >> >>>> >> time
>>>> >> >>>> >> (if the
>>>> >> >>>> >> storage allocator picks the SolidFire plug-in, the storage
>>>> >> >>>> >> framework
>>>> >> >>>> >> invokes
>>>> >> >>>> >> a method on the plug-in that creates a volume on the
>>>> SAN...info
>>>> >> >>>> >> like
>>>> >> >>>> >> the IQN
>>>> >> >>>> >> of the SAN volume is recorded in the DB).
>>>> >> >>>> >>
>>>> >> >>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is
>>>> executed.
>>>> >> >>>> >> It
>>>> >> >>>> >> determines based on a flag passed in that the storage in
>>>> question
>>>> >> >>>> >> is
>>>> >> >>>> >> "CloudStack-managed" storage (as opposed to "traditional"
>>>> >> >>>> >> preallocated
>>>> >> >>>> >> storage). This tells it to discover the iSCSI target. Once
>>>> >> >>>> >> discovered
>>>> >> >>>> >> it
>>>> >> >>>> >> determines if the iSCSI target already contains a storage
>>>> >> >>>> >> repository
>>>> >> >>>> >> (it
>>>> >> >>>> >> would if this were a re-attach situation). If it does
>>>> contain an
>>>> >> >>>> >> SR
>>>> >> >>>> >> already,
>>>> >> >>>> >> then there should already be one VDI, as well. If there is
>>>> no SR,
>>>> >> >>>> >> an
>>>> >> >>>> >> SR is
>>>> >> >>>> >> created and a single VDI is created within it (that takes up
>>>> about
>>>> >> >>>> >> as
>>>> >> >>>> >> much
>>>> >> >>>> >> space as was requested for the CloudStack volume).
>>>> >> >>>> >>
>>>> >> >>>> >> * The normal attach-volume logic continues (it depends on the
>>>> >> >>>> >> existence of
>>>> >> >>>> >> an SR and a VDI).
>>>> >> >>>> >>
>>>> >> >>>> >> The VMware case is essentially the same (mainly just
>>>> substitute
>>>> >> >>>> >> datastore
>>>> >> >>>> >> for SR and VMDK for VDI).
>>>> >> >>>> >>
>>>> >> >>>> >> In both cases, all hosts in the cluster have discovered the
>>>> iSCSI
>>>> >> >>>> >> target,
>>>> >> >>>> >> but only the host that is currently running the VM that is
>>>> using
>>>> >> >>>> >> the
>>>> >> >>>> >> VDI (or
>>>> >> >>>> >> VMKD) is actually using the disk.
>>>> >> >>>> >>
>>>> >> >>>> >> Live Migration should be OK because the hypervisors
>>>> communicate
>>>> >> >>>> >> with
>>>> >> >>>> >> whatever metadata they have on the SR (or datastore).
>>>> >> >>>> >>
>>>> >> >>>> >> I see what you're saying with KVM, though.
>>>> >> >>>> >>
>>>> >> >>>> >> In that case, the hosts are clustered only in CloudStack's
>>>> eyes.
>>>> >> >>>> >> CS
>>>> >> >>>> >> controls
>>>> >> >>>> >> Live Migration. You don't really need a clustered filesystem
>>>> on
>>>> >> >>>> >> the
>>>> >> >>>> >> LUN. The
>>>> >> >>>> >> LUN could be handed over raw to the VM using it.
>>>> >> >>>> >>
>>>> >> >>>> >> If there is a way for me to update the ACL list on the SAN
>>>> to have
>>>> >> >>>> >> only a
>>>> >> >>>> >> single KVM host have access to the volume, that would be
>>>> ideal.
>>>> >> >>>> >>
>>>> >> >>>> >> Also, I agree I'll need to use iscsiadm to discover and log
>>>> in to
>>>> >> >>>> >> the
>>>> >> >>>> >> iSCSI
>>>> >> >>>> >> target. I'll also need to take the resultant new device and
>>>> pass
>>>> >> >>>> >> it
>>>> >> >>>> >> into the
>>>> >> >>>> >> VM.
>>>> >> >>>> >>
>>>> >> >>>> >> Does this sound reasonable? Please call me out on anything I
>>>> seem
>>>> >> >>>> >> incorrect
>>>> >> >>>> >> about. :)
>>>> >> >>>> >>
>>>> >> >>>> >> Thanks for all the thought on this, Marcus!
>>>> >> >>>> >>
>>>> >> >>>> >>
>>>> >> >>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen
>>>> >> >>>> >> <sh...@gmail.com>
>>>> >> >>>> >> wrote:
>>>> >> >>>> >>>
>>>> >> >>>> >>> Perfect. You'll have a domain def ( the VM), a disk def,
>>>> and the
>>>> >> >>>> >>> attach
>>>> >> >>>> >>> the disk def to the vm. You may need to do your own
>>>> >> >>>> >>> StorageAdaptor
>>>> >> >>>> >>> and run
>>>> >> >>>> >>> iscsiadm commands to accomplish that, depending on how the
>>>> >> >>>> >>> libvirt
>>>> >> >>>> >>> iscsi
>>>> >> >>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't
>>>> how it
>>>> >> >>>> >>> works on
>>>> >> >>>> >>> Xen at the moment, nor is it ideal.
>>>> >> >>>> >>>
>>>> >> >>>> >>> Your plugin will handle acls as far as which host can see
>>>> which
>>>> >> >>>> >>> luns
>>>> >> >>>> >>> as
>>>> >> >>>> >>> well, I remember discussing that months ago, so that a disk
>>>> won't
>>>> >> >>>> >>> be
>>>> >> >>>> >>> connected until the hypervisor has exclusive access, so it
>>>> will
>>>> >> >>>> >>> be
>>>> >> >>>> >>> safe and
>>>> >> >>>> >>> fence the disk from rogue nodes that cloudstack loses
>>>> >> >>>> >>> connectivity
>>>> >> >>>> >>> with. It
>>>> >> >>>> >>> should revoke access to everything but the target host...
>>>> Except
>>>> >> >>>> >>> for
>>>> >> >>>> >>> during
>>>> >> >>>> >>> migration but we can discuss that later, there's a
>>>> migration prep
>>>> >> >>>> >>> process
>>>> >> >>>> >>> where the new host can be added to the acls, and the old
>>>> host can
>>>> >> >>>> >>> be
>>>> >> >>>> >>> removed
>>>> >> >>>> >>> post migration.
>>>> >> >>>> >>>
>>>> >> >>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski"
>>>> >> >>>> >>> <mi...@solidfire.com>
>>>> >> >>>> >>> wrote:
>>>> >> >>>> >>>>
>>>> >> >>>> >>>> Yeah, that would be ideal.
>>>> >> >>>> >>>>
>>>> >> >>>> >>>> So, I would still need to discover the iSCSI target, log
>>>> in to
>>>> >> >>>> >>>> it,
>>>> >> >>>> >>>> then
>>>> >> >>>> >>>> figure out what /dev/sdX was created as a result (and
>>>> leave it
>>>> >> >>>> >>>> as
>>>> >> >>>> >>>> is - do
>>>> >> >>>> >>>> not format it with any file system...clustered or not). I
>>>> would
>>>> >> >>>> >>>> pass that
>>>> >> >>>> >>>> device into the VM.
>>>> >> >>>> >>>>
>>>> >> >>>> >>>> Kind of accurate?
>>>> >> >>>> >>>>
>>>> >> >>>> >>>>
>>>> >> >>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
>>>> >> >>>> >>>> <sh...@gmail.com>
>>>> >> >>>> >>>> wrote:
>>>> >> >>>> >>>>>
>>>> >> >>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk
>>>> definitions.
>>>> >> >>>> >>>>> There are
>>>> >> >>>> >>>>> ones that work for block devices rather than files. You
>>>> can
>>>> >> >>>> >>>>> piggy
>>>> >> >>>> >>>>> back off
>>>> >> >>>> >>>>> of the existing disk definitions and attach it to the vm
>>>> as a
>>>> >> >>>> >>>>> block device.
>>>> >> >>>> >>>>> The definition is an XML string per libvirt XML format.
>>>> You may
>>>> >> >>>> >>>>> want to use
>>>> >> >>>> >>>>> an alternate path to the disk rather than just /dev/sdx
>>>> like I
>>>> >> >>>> >>>>> mentioned,
>>>> >> >>>> >>>>> there are by-id paths to the block devices, as well as
>>>> other
>>>> >> >>>> >>>>> ones
>>>> >> >>>> >>>>> that will
>>>> >> >>>> >>>>> be consistent and easier for management, not sure how
>>>> familiar
>>>> >> >>>> >>>>> you
>>>> >> >>>> >>>>> are with
>>>> >> >>>> >>>>> device naming on Linux.
>>>> >> >>>> >>>>>
>>>> >> >>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>>>> >> >>>> >>>>> <sh...@gmail.com>
>>>> >> >>>> >>>>> wrote:
>>>> >> >>>> >>>>>>
>>>> >> >>>> >>>>>> No, as that would rely on virtualized network/iscsi
>>>> initiator
>>>> >> >>>> >>>>>> inside
>>>> >> >>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your
>>>> lun on
>>>> >> >>>> >>>>>> hypervisor) as
>>>> >> >>>> >>>>>> a disk to the VM, rather than attaching some image file
>>>> that
>>>> >> >>>> >>>>>> resides on a
>>>> >> >>>> >>>>>> filesystem, mounted on the host, living on a target.
>>>> >> >>>> >>>>>>
>>>> >> >>>> >>>>>> Actually, if you plan on the storage supporting live
>>>> migration
>>>> >> >>>> >>>>>> I
>>>> >> >>>> >>>>>> think
>>>> >> >>>> >>>>>> this is the only way. You can't put a filesystem on it
>>>> and
>>>> >> >>>> >>>>>> mount
>>>> >> >>>> >>>>>> it in two
>>>> >> >>>> >>>>>> places to facilitate migration unless its a clustered
>>>> >> >>>> >>>>>> filesystem,
>>>> >> >>>> >>>>>> in which
>>>> >> >>>> >>>>>> case you're back to shared mount point.
>>>> >> >>>> >>>>>>
>>>> >> >>>> >>>>>> As far as I'm aware, the xenserver SR style is basically
>>>> LVM
>>>> >> >>>> >>>>>> with
>>>> >> >>>> >>>>>> a xen
>>>> >> >>>> >>>>>> specific cluster management, a custom CLVM. They don't
>>>> use a
>>>> >> >>>> >>>>>> filesystem
>>>> >> >>>> >>>>>> either.
>>>> >> >>>> >>>>>>
>>>> >> >>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>>> >> >>>> >>>>>> <mi...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>
>>>> >> >>>> >>>>>>> When you say, "wire up the lun directly to the vm," do
>>>> you
>>>> >> >>>> >>>>>>> mean
>>>> >> >>>> >>>>>>> circumventing the hypervisor? I didn't think we could
>>>> do that
>>>> >> >>>> >>>>>>> in
>>>> >> >>>> >>>>>>> CS.
>>>> >> >>>> >>>>>>> OpenStack, on the other hand, always circumvents the
>>>> >> >>>> >>>>>>> hypervisor,
>>>> >> >>>> >>>>>>> as far as I
>>>> >> >>>> >>>>>>> know.
>>>> >> >>>> >>>>>>>
>>>> >> >>>> >>>>>>>
>>>> >> >>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
>>>> >> >>>> >>>>>>> <sh...@gmail.com>
>>>> >> >>>> >>>>>>> wrote:
>>>> >> >>>> >>>>>>>>
>>>> >> >>>> >>>>>>>> Better to wire up the lun directly to the vm unless
>>>> there is
>>>> >> >>>> >>>>>>>> a
>>>> >> >>>> >>>>>>>> good
>>>> >> >>>> >>>>>>>> reason not to.
>>>> >> >>>> >>>>>>>>
>>>> >> >>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen"
>>>> >> >>>> >>>>>>>> <sh...@gmail.com>
>>>> >> >>>> >>>>>>>> wrote:
>>>> >> >>>> >>>>>>>>>
>>>> >> >>>> >>>>>>>>> You could do that, but as mentioned I think its a
>>>> mistake
>>>> >> >>>> >>>>>>>>> to
>>>> >> >>>> >>>>>>>>> go to
>>>> >> >>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes
>>>> to luns
>>>> >> >>>> >>>>>>>>> and then putting
>>>> >> >>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a
>>>> QCOW2
>>>> >> >>>> >>>>>>>>> or
>>>> >> >>>> >>>>>>>>> even RAW disk
>>>> >> >>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops
>>>> along
>>>> >> >>>> >>>>>>>>> the
>>>> >> >>>> >>>>>>>>> way, and have
>>>> >> >>>> >>>>>>>>> more overhead with the filesystem and its journaling,
>>>> etc.
>>>> >> >>>> >>>>>>>>>
>>>> >> >>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>>> >> >>>> >>>>>>>>> <mi...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in
>>>> KVM with
>>>> >> >>>> >>>>>>>>>> CS.
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today
>>>> is by
>>>> >> >>>> >>>>>>>>>> selecting SharedMountPoint and specifying the
>>>> location of
>>>> >> >>>> >>>>>>>>>> the
>>>> >> >>>> >>>>>>>>>> share.
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>> They can set up their share using Open iSCSI by
>>>> >> >>>> >>>>>>>>>> discovering
>>>> >> >>>> >>>>>>>>>> their
>>>> >> >>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it
>>>> somewhere
>>>> >> >>>> >>>>>>>>>> on
>>>> >> >>>> >>>>>>>>>> their file
>>>> >> >>>> >>>>>>>>>> system.
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>> Would it make sense for me to just do that discovery,
>>>> >> >>>> >>>>>>>>>> logging
>>>> >> >>>> >>>>>>>>>> in,
>>>> >> >>>> >>>>>>>>>> and mounting behind the scenes for them and letting
>>>> the
>>>> >> >>>> >>>>>>>>>> current code manage
>>>> >> >>>> >>>>>>>>>> the rest as it currently does?
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>
>>>> >> >>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>>>> >> >>>> >>>>>>>>>> <sh...@gmail.com> wrote:
>>>> >> >>>> >>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I
>>>> need to
>>>> >> >>>> >>>>>>>>>>> catch up
>>>> >> >>>> >>>>>>>>>>> on the work done in KVM, but this is basically just
>>>> disk
>>>> >> >>>> >>>>>>>>>>> snapshots + memory
>>>> >> >>>> >>>>>>>>>>> dump. I still think disk snapshots would preferably
>>>> be
>>>> >> >>>> >>>>>>>>>>> handled by the SAN,
>>>> >> >>>> >>>>>>>>>>> and then memory dumps can go to secondary storage or
>>>> >> >>>> >>>>>>>>>>> something else. This is
>>>> >> >>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will
>>>> want to
>>>> >> >>>> >>>>>>>>>>> see how others are
>>>> >> >>>> >>>>>>>>>>> planning theirs.
>>>> >> >>>> >>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen"
>>>> >> >>>> >>>>>>>>>>> <sh...@gmail.com>
>>>> >> >>>> >>>>>>>>>>> wrote:
>>>> >> >>>> >>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a
>>>> vdi
>>>> >> >>>> >>>>>>>>>>>> style
>>>> >> >>>> >>>>>>>>>>>> on an
>>>> >> >>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
>>>> >> >>>> >>>>>>>>>>>> format.
>>>> >> >>>> >>>>>>>>>>>> Otherwise you're
>>>> >> >>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it,
>>>> creating
>>>> >> >>>> >>>>>>>>>>>> a
>>>> >> >>>> >>>>>>>>>>>> QCOW2 disk image,
>>>> >> >>>> >>>>>>>>>>>> and that seems unnecessary and a performance
>>>> killer.
>>>> >> >>>> >>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk
>>>> to the
>>>> >> >>>> >>>>>>>>>>>> VM, and
>>>> >> >>>> >>>>>>>>>>>> handling snapshots on the San side via the storage
>>>> >> >>>> >>>>>>>>>>>> plugin
>>>> >> >>>> >>>>>>>>>>>> is best. My
>>>> >> >>>> >>>>>>>>>>>> impression from the storage plugin refactor was
>>>> that
>>>> >> >>>> >>>>>>>>>>>> there
>>>> >> >>>> >>>>>>>>>>>> was a snapshot
>>>> >> >>>> >>>>>>>>>>>> service that would allow the San to handle
>>>> snapshots.
>>>> >> >>>> >>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen"
>>>> >> >>>> >>>>>>>>>>>> <sh...@gmail.com>
>>>> >> >>>> >>>>>>>>>>>> wrote:
>>>> >> >>>> >>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the
>>>> SAN back
>>>> >> >>>> >>>>>>>>>>>>> end, if
>>>> >> >>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
>>>> could
>>>> >> >>>> >>>>>>>>>>>>> call
>>>> >> >>>> >>>>>>>>>>>>> your plugin for
>>>> >> >>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor
>>>> agnostic. As
>>>> >> >>>> >>>>>>>>>>>>> far as space, that
>>>> >> >>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With
>>>> ours, we
>>>> >> >>>> >>>>>>>>>>>>> carve out luns from a
>>>> >> >>>> >>>>>>>>>>>>> pool, and the snapshot spave comes from the pool
>>>> and is
>>>> >> >>>> >>>>>>>>>>>>> independent of the
>>>> >> >>>> >>>>>>>>>>>>> LUN size the host sees.
>>>> >> >>>> >>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>>>> >> >>>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> Hey Marcus,
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
>>>> libvirt
>>>> >> >>>> >>>>>>>>>>>>>> won't
>>>> >> >>>> >>>>>>>>>>>>>> work
>>>> >> >>>> >>>>>>>>>>>>>> when you take into consideration hypervisor
>>>> snapshots?
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor
>>>> snapshot, the
>>>> >> >>>> >>>>>>>>>>>>>> VDI for
>>>> >> >>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage
>>>> repository
>>>> >> >>>> >>>>>>>>>>>>>> as
>>>> >> >>>> >>>>>>>>>>>>>> the volume is on.
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for
>>>> >> >>>> >>>>>>>>>>>>>> XenServer
>>>> >> >>>> >>>>>>>>>>>>>> and
>>>> >> >>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>>>> >> >>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd
>>>> >> >>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what
>>>> the user
>>>> >> >>>> >>>>>>>>>>>>>> requested for the
>>>> >> >>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
>>>> >> >>>> >>>>>>>>>>>>>> thinly
>>>> >> >>>> >>>>>>>>>>>>>> provisions volumes,
>>>> >> >>>> >>>>>>>>>>>>>> so the space is not actually used unless it
>>>> needs to
>>>> >> >>>> >>>>>>>>>>>>>> be).
>>>> >> >>>> >>>>>>>>>>>>>> The CloudStack
>>>> >> >>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN
>>>> volume
>>>> >> >>>> >>>>>>>>>>>>>> until
>>>> >> >>>> >>>>>>>>>>>>>> a hypervisor
>>>> >> >>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also
>>>> reside on
>>>> >> >>>> >>>>>>>>>>>>>> the
>>>> >> >>>> >>>>>>>>>>>>>> SAN volume.
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no
>>>> >> >>>> >>>>>>>>>>>>>> creation
>>>> >> >>>> >>>>>>>>>>>>>> of
>>>> >> >>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
>>>> even
>>>> >> >>>> >>>>>>>>>>>>>> if
>>>> >> >>>> >>>>>>>>>>>>>> there were support
>>>> >> >>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN
>>>> per
>>>> >> >>>> >>>>>>>>>>>>>> iSCSI
>>>> >> >>>> >>>>>>>>>>>>>> target), then I
>>>> >> >>>> >>>>>>>>>>>>>> don't see how using this model will work.
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current
>>>> way this
>>>> >> >>>> >>>>>>>>>>>>>> works
>>>> >> >>>> >>>>>>>>>>>>>> with DIR?
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> What do you think?
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> Thanks
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
>>>> access
>>>> >> >>>> >>>>>>>>>>>>>>> today.
>>>> >> >>>> >>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I
>>>> might as
>>>> >> >>>> >>>>>>>>>>>>>>> well
>>>> >> >>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>> >> >>>> >>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>>>> >> >>>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I
>>>> believe
>>>> >> >>>> >>>>>>>>>>>>>>>> it
>>>> >> >>>> >>>>>>>>>>>>>>>> just
>>>> >> >>>> >>>>>>>>>>>>>>>> acts like a
>>>> >> >>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to
>>>> that. The
>>>> >> >>>> >>>>>>>>>>>>>>>> end-user
>>>> >> >>>> >>>>>>>>>>>>>>>> is
>>>> >> >>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that
>>>> all KVM
>>>> >> >>>> >>>>>>>>>>>>>>>> hosts can
>>>> >> >>>> >>>>>>>>>>>>>>>> access,
>>>> >> >>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is
>>>> providing the
>>>> >> >>>> >>>>>>>>>>>>>>>> storage.
>>>> >> >>>> >>>>>>>>>>>>>>>> It could
>>>> >> >>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>>>> >> >>>> >>>>>>>>>>>>>>>> filesystem,
>>>> >> >>>> >>>>>>>>>>>>>>>> cloudstack just
>>>> >> >>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM
>>>> >> >>>> >>>>>>>>>>>>>>>> images.
>>>> >> >>>> >>>>>>>>>>>>>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus
>>>> Sorensen
>>>> >> >>>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all
>>>> at the
>>>> >> >>>> >>>>>>>>>>>>>>>> > same
>>>> >> >>>> >>>>>>>>>>>>>>>> > time.
>>>> >> >>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
>>>> >> >>>> >>>>>>>>>>>>>>>> >
>>>> >> >>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
>>>> Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage
>>>> pools:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>> >> >>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
>>>> >> >>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
>>>> >> >>>> >>>>>>>>>>>>>>>> >> default              active     yes
>>>> >> >>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
>>>> Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
>>>> pool
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> based on
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only
>>>> have one
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> LUN, so
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> there would only
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in
>>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> (libvirt)
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pool.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> iSCSI
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> does
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> not support
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to
>>>> see
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> if
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> supports
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
>>>> mentioned,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> since
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> each one of its
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs).
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
>>>> Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         @Override
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>     }
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> currently
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> being
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting
>>>> at.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> someone
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> selects the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
>>>> iSCSI,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> is
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> that
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Thanks!
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Sorensen
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the
>>>> iSCSI
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> server, and
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> cannot be
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> believe
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> your
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
>>>> logging
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> and
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
>>>> work
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the Xen
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> stuff).
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> provides
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a 1:1
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
>>>> device
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> as
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
>>>> more
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> about
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to
>>>> write your
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> own
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> rather than changing
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> We
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can cross that
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see
>>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Normally,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls
>>>> made to
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to
>>>> see
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> how
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> is done for
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some
>>>> test
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> code
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to see if you
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
>>>> iscsi
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pools before you
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> get started.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate
>>>> libvirt
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > more,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > but
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > supports
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > targets,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > right?
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some
>>>> of the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> classes
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> last
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM,
>>>> Marcus
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need
>>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> iscsi
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> packages
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> for
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> login.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> and
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
>>>> Tutkowski"
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> release
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
>>>> storage
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create
>>>> and
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can
>>>> establish a
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> 1:1
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for
>>>> QoS.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
>>>> expected
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
>>>> volumes
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> would
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
>>>> friendly).
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme
>>>> work, I
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so
>>>> they
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> could
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
>>>> with
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> might
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
>>>> how I
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
>>>> have to
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use
>>>> it for
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer,
>>>> SolidFire
>>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Inc.
>>>> >> >>>> >>>>>>
>>>>
>>> ...

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
I understand what you're saying now, Marcus.

I wasn't sure if the Libvirt iSCSI Storage Pool was still an option
(still looking into that), but I see what you mean: if it is, we don't need
a new adaptor; otherwise, we do.

If Libvirt's iSCSI Storage Pool does work, I could update the current
adaptor, if need be, to make use of it.


On Mon, Sep 16, 2013 at 12:24 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Well, you'd use neither of the two pool types, because you are not letting
> libvirt handle the pool, you are doing it with your own pool and adaptor
> class. Libvirt will be unaware of everything but the disk XML you attach to
> a VM. You'd only use those if libvirt's functions were advantageous, i.e. if
> it already did everything you want. Since neither of those seem to provide
> both iscsi and the 1:1 mapping you want that's why we are talking about
> your own pool/adaptor.
>
> You can log into the target via your implementation of getPhysicalDisk as
> you mention in AttachVolumeCommand, or log in during your implementation of
> createStoragePool and simply rescan for luns in getPhysicalDisk. Presumably
> in most cases the host will be logged in already and new luns have been
> created in the meantime.
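>
> A rough sketch of that flow (illustrative names only, not the actual
> StorageAdaptor interface; assumes a portal like 192.168.1.100:3260 and one
> lun per target for the by-path name):
>
>     import java.io.IOException;
>     import java.util.Arrays;
>     import java.util.List;
>
>     // Illustrative only: make sure the host has a session for the volume's
>     // IQN, rescan for new luns, and return a stable path for the disk def.
>     public class IscsiConnectSketch {
>
>         public String connectLun(String portal, String iqn)
>                 throws IOException, InterruptedException {
>             run(Arrays.asList("iscsiadm", "-m", "discovery",
>                     "-t", "sendtargets", "-p", portal));
>             // --login exits non-zero if the session already exists; a real
>             // adaptor would check "iscsiadm -m session" first instead.
>             run(Arrays.asList("iscsiadm", "-m", "node",
>                     "-T", iqn, "-p", portal, "--login"));
>             // Pick up luns added since the last login (e.g. a fresh ACL).
>             run(Arrays.asList("iscsiadm", "-m", "session", "--rescan"));
>
>             // by-path names stay stable across reboots, unlike /dev/sdX.
>             return "/dev/disk/by-path/ip-" + portal
>                     + "-iscsi-" + iqn + "-lun-0";
>         }
>
>         // Returns the exit status; callers above treat failures as
>         // advisory in this sketch.
>         private int run(List<String> cmd)
>                 throws IOException, InterruptedException {
>             return new ProcessBuilder(cmd).inheritIO().start().waitFor();
>         }
>     }
>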
> On Sep 16, 2013 12:09 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
>> Hey Marcus,
>>
>> Thanks for that clarification.
>>
>> Sorry if this is a redundant question:
>>
>> When the AttachVolumeCommand comes in, it sounds like we thought the best
>> approach would be for me to discover and log in to the iSCSI target using
>> iscsiadm.
>>
>> This will create a new device: /dev/sdX.
>>
>> We would then pass this new device into the VM (passing XML into the
>> appropriate Libvirt API).
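>>
>> As a minimal sketch of that last step (placeholder VM name and device path;
>> in the agent this would presumably go through LibvirtVMDef's disk
>> definitions rather than a raw XML string):
>>
>>     import org.libvirt.Connect;
>>     import org.libvirt.Domain;
>>     import org.libvirt.LibvirtException;
>>
>>     public class AttachLunSketch {
>>         public static void main(String[] args) throws LibvirtException {
>>             Connect conn = new Connect("qemu:///system");
>>             Domain vm = conn.domainLookupByName("i-2-10-VM"); // placeholder
>>
>>             // Raw block device from the iSCSI login -- no filesystem, no qcow2.
>>             String diskXml =
>>                 "<disk type='block' device='disk'>" +
>>                 "  <driver name='qemu' type='raw' cache='none'/>" +
>>                 "  <source dev='/dev/disk/by-path/ip-192.168.1.100:3260" +
>>                 "-iscsi-iqn.2010-01.com.solidfire:volume-1-lun-0'/>" +
>>                 "  <target dev='vdb' bus='virtio'/>" +
>>                 "</disk>";
>>
>>             vm.attachDevice(diskXml);
>>         }
>>     }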
>>
>> If this is an accurate understanding, can you tell me: Do you think we
>> should be using a Disk Storage Pool or an iSCSI Storage Pool?
>>
>> I believe I recall you leaning toward a Disk Storage Pool because we will
>> have already discovered the iSCSI target and, as such, will already have a
>> device to pass into the VM.
>>
>> It seems like either way would work.
>>
>> Maybe I need to study Libvirt's iSCSI Storage Pools more to understand if
>> they would do the work of discovering the iSCSI target for me (and maybe
>> save me from having to use iscsiadm).
>>
>> Thanks for the clarification! :)
>>
>>
>> On Mon, Sep 16, 2013 at 11:08 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> It will still register the pool.  You still have a primary storage
>>> pool that you registered, whether it's local, cluster or zone wide.
>>> NFS is optionally zone wide as well (I'm assuming customers can scope
>>> your storage to a single cluster if they choose, for resource
>>> partitioning), but it registers the pool in Libvirt prior to use.
>>>
>>> Here's a better explanation of what I meant.  AttachVolumeCommand gets
>>> both pool and volume info. It first looks up the pool:
>>>
>>>     KVMStoragePool primary = _storagePoolMgr.getStoragePool(
>>>                     cmd.getPooltype(),
>>>                     cmd.getPoolUuid());
>>>
>>> Then it looks up the disk from that pool:
>>>
>>>     KVMPhysicalDisk disk = primary.getPhysicalDisk(cmd.getVolumePath());
>>>
>>> Most of the commands only pass volume info like this (getVolumePath
>>> generally means the uuid of the volume), since it looks up the pool
>>> separately. If you don't save the pool info in a map in your custom
>>> class when createStoragePool is called, then getStoragePool won't be
>>> able to find it. This is a simple thing in your implementation of
>>> createStoragePool, just thought I'd mention it because it is key. Just
>>> create a map of pool uuid and pool object and save them so they're
>>> available across all implementations of that class.
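>>>
>>> A bare-bones sketch of that bookkeeping (stand-in types only; the real
>>> class would implement the StorageAdaptor interface and hand back
>>> KVMStoragePool/KVMPhysicalDisk objects):
>>>
>>>     import java.util.Map;
>>>     import java.util.concurrent.ConcurrentHashMap;
>>>
>>>     public class SolidFireAdaptorSketch {
>>>
>>>         // Keyed by pool uuid. Libvirt remembers this for
>>>         // LibvirtStorageAdaptor; a custom adaptor must remember its own.
>>>         private static final Map<String, PoolInfo> pools =
>>>                 new ConcurrentHashMap<String, PoolInfo>();
>>>
>>>         // Called repeatedly, so it must be safe for an existing pool.
>>>         public PoolInfo createStoragePool(String uuid, String host, int port) {
>>>             PoolInfo pool = pools.get(uuid);
>>>             if (pool == null) {
>>>                 pool = new PoolInfo(uuid, host, port);
>>>                 pools.put(uuid, pool);
>>>                 // Also a reasonable spot to make sure the host is logged
>>>                 // in to the SAN, since volume calls may follow right away.
>>>             }
>>>             return pool;
>>>         }
>>>
>>>         // Volume commands only carry the pool uuid, so this lookup is what
>>>         // lets getPhysicalDisk() find the SAN connection details.
>>>         public PoolInfo getStoragePool(String uuid) {
>>>             return pools.get(uuid);
>>>         }
>>>
>>>         public static class PoolInfo {
>>>             public final String uuid;
>>>             public final String host;
>>>             public final int port;
>>>
>>>             PoolInfo(String uuid, String host, int port) {
>>>                 this.uuid = uuid;
>>>                 this.host = host;
>>>                 this.port = port;
>>>             }
>>>         }
>>>     }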
>>>
>>> On Mon, Sep 16, 2013 at 10:43 AM, Mike Tutkowski
>>> <mi...@solidfire.com> wrote:
>>> > Thanks, Marcus
>>> >
>>> > About this:
>>> >
>>> > "When the agent connects to the
>>> > management server, it registers all pools in the cluster with the
>>> > agent."
>>> >
>>> > So, my plug-in allows you to create zone-wide primary storage. This
>>> just
>>> > means that any cluster can use the SAN (the SAN was registered as
>>> primary
>>> > storage as opposed to a preallocated volume from the SAN). Once you
>>> create a
>>> > primary storage based on this plug-in, the storage framework will
>>> invoke the
>>> > plug-in, as needed, to create and delete volumes on the SAN. For
>>> example,
>>> > you could have one SolidFire primary storage (zone wide) and currently
>>> have
>>> > 100 volumes created on the SAN to support it.
>>> >
>>> > In this case, what will the management server be registering with the
>>> agent
>>> > in ModifyStoragePool? If only the storage pool (primary storage) is
>>> passed
>>> > in, that will be too vague as it does not contain information on what
>>> > volumes have been created for the agent.
>>> >
>>> > Thanks
>>> >
>>> >
>>> > On Sun, Sep 15, 2013 at 11:53 PM, Marcus Sorensen <shadowsor@gmail.com
>>> >
>>> > wrote:
>>> >>
>>> >> Yes, see my previous email from the 13th. You can create your own
>>> >> KVMStoragePool class, and StorageAdaptor class, like the libvirt ones
>>> >> have. The previous email outlines how to add your own StorageAdaptor
>>> >> alongside LibvirtStorageAdaptor to take over all of the calls
>>> >> (createStoragePool, getStoragePool, etc). As mentioned,
>>> >> getPhysicalDisk I believe will be the one you use to actually attach a
>>> >> lun.
>>> >>
>>> >> Ignore CreateStoragePoolCommand. When the agent connects to the
>>> >> management server, it registers all pools in the cluster with the
>>> >> agent. It will call ModifyStoragePoolCommand, passing your storage
>>> >> pool object (with all of the settings for your SAN). This in turn
>>> >> calls _storagePoolMgr.createStoragePool, which will route through
>>> >> KVMStoragePoolManager to your storage adapter that you've registered.
>>> >> The last argument to createStoragePool is the pool type, which is used
>>> >> to select a StorageAdaptor.
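>>> >>
>>> >> Sketch of the routing idea only (names simplified, not the literal 4.2
>>> >> code): the manager keys its adaptor map by pool type, so registering a
>>> >> new type alongside LibvirtStorageAdaptor is what plugs a custom adaptor
>>> >> in.
>>> >>
>>> >>     import java.util.HashMap;
>>> >>     import java.util.Map;
>>> >>
>>> >>     public class StorageMapperSketch {
>>> >>
>>> >>         interface StorageAdaptor { String name(); }
>>> >>
>>> >>         enum PoolType { NetworkFilesystem, CLVM, Iscsi }
>>> >>
>>> >>         static final Map<PoolType, StorageAdaptor> storageMapper =
>>> >>                 new HashMap<PoolType, StorageAdaptor>();
>>> >>
>>> >>         // Every pool/volume call starts with this lookup.
>>> >>         static StorageAdaptor getStorageAdaptor(PoolType type) {
>>> >>             return storageMapper.get(type);
>>> >>         }
>>> >>
>>> >>         public static void main(String[] args) {
>>> >>             // Stand-in for LibvirtStorageAdaptor and the existing types...
>>> >>             storageMapper.put(PoolType.NetworkFilesystem, new StorageAdaptor() {
>>> >>                 public String name() { return "LibvirtStorageAdaptor"; }
>>> >>             });
>>> >>             // ...and this is where a SolidFire adaptor would be added
>>> >>             // (the "add other storage adaptors here" comment).
>>> >>             storageMapper.put(PoolType.Iscsi, new StorageAdaptor() {
>>> >>                 public String name() { return "SolidFireStorageAdaptor"; }
>>> >>             });
>>> >>
>>> >>             System.out.println(getStorageAdaptor(PoolType.Iscsi).name());
>>> >>         }
>>> >>     }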
>>> >>
>>> >> From then on, most calls will only pass the volume info, and the
>>> >> volume will have the uuid of the storage pool. For this reason, your
>>> >> adaptor class needs to have a static Map variable that contains pool
>>> >> uuid and pool object. Whenever they call createStoragePool on your
>>> >> adaptor you add that pool to the map so that subsequent volume calls
>>> >> can look up the pool details for the volume by pool uuid. With the
>>> >> Libvirt adaptor, libvirt keeps track of that for you.
>>> >>
>>> >> When createStoragePool is called, you can log into the iscsi target
>>> >> (or make sure you are already logged in, as it can be called over
>>> >> again at any time), and when attach volume commands are fired off, you
>>> >> can attach individual LUNs that are asked for, or rescan (say that the
>>> >> plugin created a new ACL just prior to calling attach), or whatever is
>>> >> necessary.
>>> >>
>>> >> KVM is a bit more work, but you can do anything you want. Actually, I
>>> >> think you can call host scripts with Xen, but having the agent there
>>> >> that runs your own code gives you the flexibility to do whatever.
>>> >>
>>> >> On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski
>>> >> <mi...@solidfire.com> wrote:
>>> >> > I see right now LibvirtComputingResource.java has the following
>>> method
>>> >> > that
>>> >> > I might be able to leverage (it's probably not called at present and
>>> >> > would
>>> >> > need to be implemented in my case to discover my iSCSI target and
>>> log in
>>> >> > to
>>> >> > it):
>>> >> >
>>> >> >     protected Answer execute(CreateStoragePoolCommand cmd) {
>>> >> >
>>> >> >         return new Answer(cmd, true, "success");
>>> >> >
>>> >> >     }
>>> >> >
>>> >> > I would probably be able to call the KVMStorageManager to have it
>>> use my
>>> >> > StorageAdaptor to do what's necessary here.
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >> > On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski
>>> >> > <mi...@solidfire.com> wrote:
>>> >> >>
>>> >> >> Hey Marcus,
>>> >> >>
>>> >> >> When I implemented support in the XenServer and VMware plug-ins for
>>> >> >> "managed" storage, I started at the execute(AttachVolumeCommand)
>>> >> >> methods in
>>> >> >> both plug-ins.
>>> >> >>
>>> >> >> The code there was changed to check the AttachVolumeCommand
>>> instance
>>> >> >> for a
>>> >> >> "managed" property.
>>> >> >>
>>> >> >> If managed was false, the normal attach/detach logic would just
>>> run and
>>> >> >> the volume would be attached or detached.
>>> >> >>
>>> >> >> If managed was true, new 4.2 logic would run to create (let's talk
>>> >> >> XenServer here) a new SR and a new VDI inside of that SR (or to
>>> >> >> reattach an
>>> >> >> existing VDI inside an existing SR, if this wasn't the first time
>>> the
>>> >> >> volume
>>> >> >> was attached). If managed was true and we were detaching the
>>> volume,
>>> >> >> the SR
>>> >> >> would be detached from the XenServer hosts.
>>> >> >>
>>> >> >> I am currently walking through the execute(AttachVolumeCommand) in
>>> >> >> LibvirtComputingResource.java.
>>> >> >>
>>> >> >> I see how the XML is constructed to describe whether a disk should
>>> be
>>> >> >> attached or detached. I also see how we call in to get a
>>> StorageAdapter
>>> >> >> (and
>>> >> >> how I will likely need to write a new one of these).
>>> >> >>
>>> >> >> So, talking in XenServer terminology again, I was wondering if you
>>> >> >> think
>>> >> >> the approach we took in 4.2 with creating and deleting SRs in the
>>> >> >> execute(AttachVolumeCommand) method would work here or if there is
>>> some
>>> >> >> other way I should be looking at this for KVM?
>>> >> >>
>>> >> >> As it is right now for KVM, storage has to be set up ahead of time.
>>> >> >> Assuming this is the case, there probably isn't currently a place
>>> I can
>>> >> >> easily inject my logic to discover and log in to iSCSI targets.
>>> This is
>>> >> >> why
>>> >> >> we did it as needed in the execute(AttachVolumeCommand) for
>>> XenServer
>>> >> >> and
>>> >> >> VMware, but I wanted to see if you have an alternative way that
>>> might
>>> >> >> be
>>> >> >> better for KVM.
>>> >> >>
>>> >> >> One possible way to do this would be to modify VolumeManagerImpl
>>> (or
>>> >> >> whatever its equivalent is in 4.3) before it issues an
>>> attach-volume
>>> >> >> command
>>> >> >> to KVM to check to see if the volume is to be attached to managed
>>> >> >> storage.
>>> >> >> If it is, then (before calling the attach-volume command in KVM)
>>> call
>>> >> >> the
>>> >> >> create-storage-pool command in KVM (or whatever it might be
>>> called).
>>> >> >>
>>> >> >> Just wanted to get some of your thoughts on this.
>>> >> >>
>>> >> >> Thanks!
>>> >> >>
>>> >> >>
>>> >> >> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski
>>> >> >> <mi...@solidfire.com> wrote:
>>> >> >>>
>>> >> >>> Yeah, I remember that StorageProcessor stuff being put in the
>>> codebase
>>> >> >>> and having to merge my code into it in 4.2.
>>> >> >>>
>>> >> >>> Thanks for all the details, Marcus! :)
>>> >> >>>
>>> >> >>> I can start digging into what you were talking about now.
>>> >> >>>
>>> >> >>>
>>> >> >>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen
>>> >> >>> <sh...@gmail.com>
>>> >> >>> wrote:
>>> >> >>>>
>>> >> >>>> Looks like things might be slightly different now in 4.2, with
>>> >> >>>> KVMStorageProcessor.java in the mix. This looks more or less like
>>> some
>>> >> >>>> of the commands were ripped out verbatim from
>>> >> >>>> LibvirtComputingResource
>>> >> >>>> and placed here, so in general what I've said is probably still
>>> true,
>>> >> >>>> just that the location of things like AttachVolumeCommand might
>>> be
>>> >> >>>> different, in this file rather than
>>> LibvirtComputingResource.java.
>>> >> >>>>
>>> >> >>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen
>>> >> >>>> <sh...@gmail.com>
>>> >> >>>> wrote:
>>> >> >>>> > Ok, KVM will be close to that, of course, because only the
>>> >> >>>> > hypervisor
>>> >> >>>> > classes differ, the rest is all mgmt server. Creating a volume
>>> is
>>> >> >>>> > just
>>> >> >>>> > a db entry until it's deployed for the first time.
>>> >> >>>> > AttachVolumeCommand
>>> >> >>>> > on the agent side (LibvirtStorageAdaptor.java is analogous to
>>> >> >>>> > CitrixResourceBase.java) will do the iscsiadm commands (via a
>>> KVM
>>> >> >>>> > StorageAdaptor) to log in the host to the target and then you
>>> have
>>> >> >>>> > a
>>> >> >>>> > block device.  Maybe libvirt will do that for you, but my quick
>>> >> >>>> > read
>>> >> >>>> > made it sound like the iscsi libvirt pool type is actually a
>>> pool,
>>> >> >>>> > not
>>> >> >>>> > a lun or volume, so you'll need to figure out if that works or
>>> if
>>> >> >>>> > you'll have to use iscsiadm commands.
>>> >> >>>> >
>>> >> >>>> > If you're NOT going to use LibvirtStorageAdaptor (because
>>> Libvirt
>>> >> >>>> > doesn't really manage your pool the way you want), you're
>>> going to
>>> >> >>>> > have to create a version of KVMStoragePool class and a
>>> >> >>>> > StorageAdaptor
>>> >> >>>> > class (see LibvirtStoragePool.java and
>>> LibvirtStorageAdaptor.java),
>>> >> >>>> > implementing all of the methods, then in KVMStorageManager.java
>>> >> >>>> > there's a "_storageMapper" map. This is used to select the
>>> correct
>>> >> >>>> > adaptor, you can see in this file that every call first pulls
>>> the
>>> >> >>>> > correct adaptor out of this map via getStorageAdaptor. So you
>>> can
>>> >> >>>> > see
>>> >> >>>> > a comment in this file that says "add other storage adaptors
>>> here",
>>> >> >>>> > where it puts to this map, this is where you'd register your
>>> >> >>>> > adaptor.
>>> >> >>>> >
>>> >> >>>> > So, referencing StorageAdaptor.java, createStoragePool accepts
>>> all
>>> >> >>>> > of
>>> >> >>>> > the pool data (host, port, name, path) which would be used to
>>> log
>>> >> >>>> > the
>>> >> >>>> > host into the initiator. I *believe* the method getPhysicalDisk
>>> >> >>>> > will
>>> >> >>>> > need to do the work of attaching the lun.  AttachVolumeCommand
>>> >> >>>> > calls
>>> >> >>>> > this and then creates the XML diskdef and attaches it to the
>>> VM.
>>> >> >>>> > Now,
>>> >> >>>> > one thing you need to know is that createStoragePool is called
>>> >> >>>> > often,
>>> >> >>>> > sometimes just to make sure the pool is there. You may want to
>>> >> >>>> > create
>>> >> >>>> > a map in your adaptor class and keep track of pools that have
>>> been
>>> >> >>>> > created, LibvirtStorageAdaptor doesn't have to do this because
>>> it
>>> >> >>>> > asks
>>> >> >>>> > libvirt about which storage pools exist. There are also calls
>>> to
>>> >> >>>> > refresh the pool stats, and all of the other calls can be seen
>>> in
>>> >> >>>> > the
>>> >> >>>> > StorageAdaptor as well. There's a createPhysicalDisk, clone,
>>> etc,
>>> >> >>>> > but
>>> >> >>>> > it's probably a hold-over from 4.1, as I have the vague idea
>>> that
>>> >> >>>> > volumes are created on the mgmt server via the plugin now, so
>>> >> >>>> > whatever
>>> >> >>>> > doesn't apply can just be stubbed out (or optionally
>>> >> >>>> > extended/reimplemented here, if you don't mind the hosts
>>> talking to
>>> >> >>>> > the san api).
>>> >> >>>> >
>>> >> >>>> > There is a difference between attaching new volumes and
>>> launching a
>>> >> >>>> > VM
>>> >> >>>> > with existing volumes.  In the latter case, the VM definition
>>> that
>>> >> >>>> > was
>>> >> >>>> > passed to the KVM agent includes the disks, (StartCommand).
>>> >> >>>> >
>>> >> >>>> > I'd be interested in how your pool is defined for Xen, I
>>> imagine it
>>> >> >>>> > would need to be kept the same. Is it just a definition to the
>>> SAN
>>> >> >>>> > (ip address or some such, port number) and perhaps a volume
>>> pool
>>> >> >>>> > name?
>>> >> >>>> >
>>> >> >>>> >> If there is a way for me to update the ACL list on the SAN to
>>> have
>>> >> >>>> >> only a
>>> >> >>>> >> single KVM host have access to the volume, that would be
>>> ideal.
>>> >> >>>> >
>>> >> >>>> > That depends on your SAN API.  I was under the impression that
>>> the
>>> >> >>>> > storage plugin framework allowed for acls, or for you to do
>>> >> >>>> > whatever
>>> >> >>>> > you want for create/attach/delete/snapshot, etc. You'd just
>>> call
>>> >> >>>> > your
>>> >> >>>> > SAN API with the host info for the ACLs prior to when the disk
>>> is
>>> >> >>>> > attached (or the VM is started).  I'd have to look more at the
>>> >> >>>> > framework to know the details, in 4.1 I would do this in
>>> >> >>>> > getPhysicalDisk just prior to connecting up the LUN.
>>> >> >>>> >
>>> >> >>>> >
>>> >> >>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>> >> >>>> > <mi...@solidfire.com> wrote:
>>> >> >>>> >> OK, yeah, the ACL part will be interesting. That is a bit
>>> >> >>>> >> different
>>> >> >>>> >> from how
>>> >> >>>> >> it works with XenServer and VMware.
>>> >> >>>> >>
>>> >> >>>> >> Just to give you an idea how it works in 4.2 with XenServer:
>>> >> >>>> >>
>>> >> >>>> >> * The user creates a CS volume (this is just recorded in the
>>> >> >>>> >> cloud.volumes
>>> >> >>>> >> table).
>>> >> >>>> >>
>>> >> >>>> >> * The user attaches the volume as a disk to a VM for the first
>>> >> >>>> >> time
>>> >> >>>> >> (if the
>>> >> >>>> >> storage allocator picks the SolidFire plug-in, the storage
>>> >> >>>> >> framework
>>> >> >>>> >> invokes
>>> >> >>>> >> a method on the plug-in that creates a volume on the
>>> SAN...info
>>> >> >>>> >> like
>>> >> >>>> >> the IQN
>>> >> >>>> >> of the SAN volume is recorded in the DB).
>>> >> >>>> >>
>>> >> >>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is
>>> executed.
>>> >> >>>> >> It
>>> >> >>>> >> determines based on a flag passed in that the storage in
>>> question
>>> >> >>>> >> is
>>> >> >>>> >> "CloudStack-managed" storage (as opposed to "traditional"
>>> >> >>>> >> preallocated
>>> >> >>>> >> storage). This tells it to discover the iSCSI target. Once
>>> >> >>>> >> discovered
>>> >> >>>> >> it
>>> >> >>>> >> determines if the iSCSI target already contains a storage
>>> >> >>>> >> repository
>>> >> >>>> >> (it
>>> >> >>>> >> would if this were a re-attach situation). If it does contain
>>> an
>>> >> >>>> >> SR
>>> >> >>>> >> already,
>>> >> >>>> >> then there should already be one VDI, as well. If there is no
>>> SR,
>>> >> >>>> >> an
>>> >> >>>> >> SR is
>>> >> >>>> >> created and a single VDI is created within it (that takes up
>>> about
>>> >> >>>> >> as
>>> >> >>>> >> much
>>> >> >>>> >> space as was requested for the CloudStack volume).
>>> >> >>>> >>
>>> >> >>>> >> * The normal attach-volume logic continues (it depends on the
>>> >> >>>> >> existence of
>>> >> >>>> >> an SR and a VDI).
>>> >> >>>> >>
>>> >> >>>> >> The VMware case is essentially the same (mainly just
>>> substitute
>>> >> >>>> >> datastore
>>> >> >>>> >> for SR and VMDK for VDI).
>>> >> >>>> >>
>>> >> >>>> >> In both cases, all hosts in the cluster have discovered the
>>> iSCSI
>>> >> >>>> >> target,
>>> >> >>>> >> but only the host that is currently running the VM that is
>>> using
>>> >> >>>> >> the
>>> >> >>>> >> VDI (or
>>> >> >>>> >> VMDK) is actually using the disk.
>>> >> >>>> >>
>>> >> >>>> >> Live Migration should be OK because the hypervisors
>>> communicate
>>> >> >>>> >> with
>>> >> >>>> >> whatever metadata they have on the SR (or datastore).
>>> >> >>>> >>
>>> >> >>>> >> I see what you're saying with KVM, though.
>>> >> >>>> >>
>>> >> >>>> >> In that case, the hosts are clustered only in CloudStack's
>>> eyes.
>>> >> >>>> >> CS
>>> >> >>>> >> controls
>>> >> >>>> >> Live Migration. You don't really need a clustered filesystem
>>> on
>>> >> >>>> >> the
>>> >> >>>> >> LUN. The
>>> >> >>>> >> LUN could be handed over raw to the VM using it.
>>> >> >>>> >>
>>> >> >>>> >> If there is a way for me to update the ACL list on the SAN to
>>> have
>>> >> >>>> >> only a
>>> >> >>>> >> single KVM host have access to the volume, that would be
>>> ideal.
>>> >> >>>> >>
>>> >> >>>> >> Also, I agree I'll need to use iscsiadm to discover and log
>>> in to
>>> >> >>>> >> the
>>> >> >>>> >> iSCSI
>>> >> >>>> >> target. I'll also need to take the resultant new device and
>>> pass
>>> >> >>>> >> it
>>> >> >>>> >> into the
>>> >> >>>> >> VM.
>>> >> >>>> >>
>>> >> >>>> >> Does this sound reasonable? Please call me out on anything I
>>> seem
>>> >> >>>> >> incorrect
>>> >> >>>> >> about. :)
>>> >> >>>> >>
>>> >> >>>> >> Thanks for all the thought on this, Marcus!
>>> >> >>>> >>
>>> >> >>>> >>
>>> >> >>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen
>>> >> >>>> >> <sh...@gmail.com>
>>> >> >>>> >> wrote:
>>> >> >>>> >>>
>>> >> >>>> >>> Perfect. You'll have a domain def ( the VM), a disk def, and
>>> the
>>> >> >>>> >>> attach
>>> >> >>>> >>> the disk def to the vm. You may need to do your own
>>> >> >>>> >>> StorageAdaptor
>>> >> >>>> >>> and run
>>> >> >>>> >>> iscsiadm commands to accomplish that, depending on how the
>>> >> >>>> >>> libvirt
>>> >> >>>> >>> iscsi
>>> >> >>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't
>>> how it
>>> >> >>>> >>> works on
>>> >> >>>> >>> xen at the moment, nor is it ideal.
>>> >> >>>> >>>
>>> >> >>>> >>> Your plugin will handle acls as far as which host can see
>>> which
>>> >> >>>> >>> luns
>>> >> >>>> >>> as
>>> >> >>>> >>> well, I remember discussing that months ago, so that a disk
>>> won't
>>> >> >>>> >>> be
>>> >> >>>> >>> connected until the hypervisor has exclusive access, so it
>>> will
>>> >> >>>> >>> be
>>> >> >>>> >>> safe and
>>> >> >>>> >>> fence the disk from rogue nodes that cloudstack loses
>>> >> >>>> >>> connectivity
>>> >> >>>> >>> with. It
>>> >> >>>> >>> should revoke access to everything but the target host...
>>> Except
>>> >> >>>> >>> for
>>> >> >>>> >>> during
>>> >> >>>> >>> migration but we can discuss that later, there's a migration
>>> prep
>>> >> >>>> >>> process
>>> >> >>>> >>> where the new host can be added to the acls, and the old
>>> host can
>>> >> >>>> >>> be
>>> >> >>>> >>> removed
>>> >> >>>> >>> post migration.
>>> >> >>>> >>>
>>> >> >>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski"
>>> >> >>>> >>> <mi...@solidfire.com>
>>> >> >>>> >>> wrote:
>>> >> >>>> >>>>
>>> >> >>>> >>>> Yeah, that would be ideal.
>>> >> >>>> >>>>
>>> >> >>>> >>>> So, I would still need to discover the iSCSI target, log in
>>> to
>>> >> >>>> >>>> it,
>>> >> >>>> >>>> then
>>> >> >>>> >>>> figure out what /dev/sdX was created as a result (and leave
>>> it
>>> >> >>>> >>>> as
>>> >> >>>> >>>> is - do
>>> >> >>>> >>>> not format it with any file system...clustered or not). I
>>> would
>>> >> >>>> >>>> pass that
>>> >> >>>> >>>> device into the VM.
>>> >> >>>> >>>>
>>> >> >>>> >>>> Kind of accurate?
>>> >> >>>> >>>>
>>> >> >>>> >>>>
>>> >> >>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
>>> >> >>>> >>>> <sh...@gmail.com>
>>> >> >>>> >>>> wrote:
>>> >> >>>> >>>>>
>>> >> >>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk
>>> definitions.
>>> >> >>>> >>>>> There are
>>> >> >>>> >>>>> ones that work for block devices rather than files. You can
>>> >> >>>> >>>>> piggy
>>> >> >>>> >>>>> back off
>>> >> >>>> >>>>> of the existing disk definitions and attach it to the vm
>>> as a
>>> >> >>>> >>>>> block device.
>>> >> >>>> >>>>> The definition is an XML string per libvirt XML format.
>>> You may
>>> >> >>>> >>>>> want to use
>>> >> >>>> >>>>> an alternate path to the disk rather than just /dev/sdx
>>> like I
>>> >> >>>> >>>>> mentioned,
>>> >> >>>> >>>>> there are by-id paths to the block devices, as well as
>>> other
>>> >> >>>> >>>>> ones
>>> >> >>>> >>>>> that will
>>> >> >>>> >>>>> be consistent and easier for management, not sure how
>>> familiar
>>> >> >>>> >>>>> you
>>> >> >>>> >>>>> are with
>>> >> >>>> >>>>> device naming on Linux.
>>> >> >>>> >>>>>
>>> >> >>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>>> >> >>>> >>>>> <sh...@gmail.com>
>>> >> >>>> >>>>> wrote:
>>> >> >>>> >>>>>>
>>> >> >>>> >>>>>> No, as that would rely on virtualized network/iscsi
>>> initiator
>>> >> >>>> >>>>>> inside
>>> >> >>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your
>>> lun on
>>> >> >>>> >>>>>> hypervisor) as
>>> >> >>>> >>>>>> a disk to the VM, rather than attaching some image file
>>> that
>>> >> >>>> >>>>>> resides on a
>>> >> >>>> >>>>>> filesystem, mounted on the host, living on a target.
>>> >> >>>> >>>>>>
>>> >> >>>> >>>>>> Actually, if you plan on the storage supporting live
>>> migration
>>> >> >>>> >>>>>> I
>>> >> >>>> >>>>>> think
>>> >> >>>> >>>>>> this is the only way. You can't put a filesystem on it and
>>> >> >>>> >>>>>> mount
>>> >> >>>> >>>>>> it in two
>>> >> >>>> >>>>>> places to facilitate migration unless its a clustered
>>> >> >>>> >>>>>> filesystem,
>>> >> >>>> >>>>>> in which
>>> >> >>>> >>>>>> case you're back to shared mount point.
>>> >> >>>> >>>>>>
>>> >> >>>> >>>>>> As far as I'm aware, the xenserver SR style is basically
>>> LVM
>>> >> >>>> >>>>>> with
>>> >> >>>> >>>>>> a xen
>>> >> >>>> >>>>>> specific cluster management, a custom CLVM. They don't
>>> use a
>>> >> >>>> >>>>>> filesystem
>>> >> >>>> >>>>>> either.
>>> >> >>>> >>>>>>
>>> >> >>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>> >> >>>> >>>>>> <mi...@solidfire.com> wrote:
>>> >> >>>> >>>>>>>
>>> >> >>>> >>>>>>> When you say, "wire up the lun directly to the vm," do
>>> you
>>> >> >>>> >>>>>>> mean
>>> >> >>>> >>>>>>> circumventing the hypervisor? I didn't think we could do
>>> that
>>> >> >>>> >>>>>>> in
>>> >> >>>> >>>>>>> CS.
>>> >> >>>> >>>>>>> OpenStack, on the other hand, always circumvents the
>>> >> >>>> >>>>>>> hypervisor,
>>> >> >>>> >>>>>>> as far as I
>>> >> >>>> >>>>>>> know.
>>> >> >>>> >>>>>>>
>>> >> >>>> >>>>>>>
>>> >> >>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
>>> >> >>>> >>>>>>> <sh...@gmail.com>
>>> >> >>>> >>>>>>> wrote:
>>> >> >>>> >>>>>>>>
>>> >> >>>> >>>>>>>> Better to wire up the lun directly to the vm unless
>>> there is
>>> >> >>>> >>>>>>>> a
>>> >> >>>> >>>>>>>> good
>>> >> >>>> >>>>>>>> reason not to.
>>> >> >>>> >>>>>>>>
>>> >> >>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen"
>>> >> >>>> >>>>>>>> <sh...@gmail.com>
>>> >> >>>> >>>>>>>> wrote:
>>> >> >>>> >>>>>>>>>
>>> >> >>>> >>>>>>>>> You could do that, but as mentioned I think its a
>>> mistake
>>> >> >>>> >>>>>>>>> to
>>> >> >>>> >>>>>>>>> go to
>>> >> >>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to
>>> luns
>>> >> >>>> >>>>>>>>> and then putting
>>> >> >>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a
>>> QCOW2
>>> >> >>>> >>>>>>>>> or
>>> >> >>>> >>>>>>>>> even RAW disk
>>> >> >>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops
>>> along
>>> >> >>>> >>>>>>>>> the
>>> >> >>>> >>>>>>>>> way, and have
>>> >> >>>> >>>>>>>>> more overhead with the filesystem and its journaling,
>>> etc.
>>> >> >>>> >>>>>>>>>
>>> >> >>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>> >> >>>> >>>>>>>>> <mi...@solidfire.com> wrote:
>>> >> >>>> >>>>>>>>>>
>>> >> >>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM
>>> with
>>> >> >>>> >>>>>>>>>> CS.
>>> >> >>>> >>>>>>>>>>
>>> >> >>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today
>>> is by
>>> >> >>>> >>>>>>>>>> selecting SharedMountPoint and specifying the
>>> location of
>>> >> >>>> >>>>>>>>>> the
>>> >> >>>> >>>>>>>>>> share.
>>> >> >>>> >>>>>>>>>>
>>> >> >>>> >>>>>>>>>> They can set up their share using Open iSCSI by
>>> >> >>>> >>>>>>>>>> discovering
>>> >> >>>> >>>>>>>>>> their
>>> >> >>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it
>>> somewhere
>>> >> >>>> >>>>>>>>>> on
>>> >> >>>> >>>>>>>>>> their file
>>> >> >>>> >>>>>>>>>> system.
>>> >> >>>> >>>>>>>>>>
>>> >> >>>> >>>>>>>>>> Would it make sense for me to just do that discovery,
>>> >> >>>> >>>>>>>>>> logging
>>> >> >>>> >>>>>>>>>> in,
>>> >> >>>> >>>>>>>>>> and mounting behind the scenes for them and letting
>>> the
>>> >> >>>> >>>>>>>>>> current code manage
>>> >> >>>> >>>>>>>>>> the rest as it currently does?
>>> >> >>>> >>>>>>>>>>
>>> >> >>>> >>>>>>>>>>
>>> >> >>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>>> >> >>>> >>>>>>>>>> <sh...@gmail.com> wrote:
>>> >> >>>> >>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need
>>> to
>>> >> >>>> >>>>>>>>>>> catch up
>>> >> >>>> >>>>>>>>>>> on the work done in KVM, but this is basically just
>>> disk
>>> >> >>>> >>>>>>>>>>> snapshots + memory
>>> >> >>>> >>>>>>>>>>> dump. I still think disk snapshots would preferably
>>> be
>>> >> >>>> >>>>>>>>>>> handled by the SAN,
>>> >> >>>> >>>>>>>>>>> and then memory dumps can go to secondary storage or
>>> >> >>>> >>>>>>>>>>> something else. This is
>>> >> >>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will
>>> want to
>>> >> >>>> >>>>>>>>>>> see how others are
>>> >> >>>> >>>>>>>>>>> planning theirs.
>>> >> >>>> >>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen"
>>> >> >>>> >>>>>>>>>>> <sh...@gmail.com>
>>> >> >>>> >>>>>>>>>>> wrote:
>>> >> >>>> >>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
>>> >> >>>> >>>>>>>>>>>> style
>>> >> >>>> >>>>>>>>>>>> on an
>>> >> >>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
>>> >> >>>> >>>>>>>>>>>> format.
>>> >> >>>> >>>>>>>>>>>> Otherwise you're
>>> >> >>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it,
>>> creating
>>> >> >>>> >>>>>>>>>>>> a
>>> >> >>>> >>>>>>>>>>>> QCOW2 disk image,
>>> >> >>>> >>>>>>>>>>>> and that seems unnecessary and a performance killer.
>>> >> >>>> >>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk
>>> to the
>>> >> >>>> >>>>>>>>>>>> VM, and
>>> >> >>>> >>>>>>>>>>>> handling snapshots on the SAN side via the storage
>>> >> >>>> >>>>>>>>>>>> plugin
>>> >> >>>> >>>>>>>>>>>> is best. My
>>> >> >>>> >>>>>>>>>>>> impression from the storage plugin refactor was that
>>> >> >>>> >>>>>>>>>>>> there
>>> >> >>>> >>>>>>>>>>>> was a snapshot
>>> >> >>>> >>>>>>>>>>>> service that would allow the SAN to handle
>>> snapshots.
>>> >> >>>> >>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen"
>>> >> >>>> >>>>>>>>>>>> <sh...@gmail.com>
>>> >> >>>> >>>>>>>>>>>> wrote:
>>> >> >>>> >>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
>>> back
>>> >> >>>> >>>>>>>>>>>>> end, if
>>> >> >>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
>>> could
>>> >> >>>> >>>>>>>>>>>>> call
>>> >> >>>> >>>>>>>>>>>>> your plugin for
>>> >> >>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor
>>> agnostic. As
>>> >> >>>> >>>>>>>>>>>>> far as space, that
>>> >> >>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With
>>> ours, we
>>> >> >>>> >>>>>>>>>>>>> carve out luns from a
>>> >> >>>> >>>>>>>>>>>>> pool, and the snapshot space comes from the pool
>>> and is
>>> >> >>>> >>>>>>>>>>>>> independent of the
>>> >> >>>> >>>>>>>>>>>>> LUN size the host sees.
>>> >> >>>> >>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>>> >> >>>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>> Hey Marcus,
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for
>>> libvirt
>>> >> >>>> >>>>>>>>>>>>>> won't
>>> >> >>>> >>>>>>>>>>>>>> work
>>> >> >>>> >>>>>>>>>>>>>> when you take into consideration hypervisor
>>> snapshots?
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor
>>> snapshot, the
>>> >> >>>> >>>>>>>>>>>>>> VDI for
>>> >> >>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage
>>> repository
>>> >> >>>> >>>>>>>>>>>>>> as
>>> >> >>>> >>>>>>>>>>>>>> the volume is on.
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for
>>> >> >>>> >>>>>>>>>>>>>> XenServer
>>> >> >>>> >>>>>>>>>>>>>> and
>>> >> >>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>>> >> >>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd
>>> >> >>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what the
>>> user
>>> >> >>>> >>>>>>>>>>>>>> requested for the
>>> >> >>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
>>> >> >>>> >>>>>>>>>>>>>> thinly
>>> >> >>>> >>>>>>>>>>>>>> provisions volumes,
>>> >> >>>> >>>>>>>>>>>>>> so the space is not actually used unless it needs
>>> to
>>> >> >>>> >>>>>>>>>>>>>> be).
>>> >> >>>> >>>>>>>>>>>>>> The CloudStack
>>> >> >>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN
>>> volume
>>> >> >>>> >>>>>>>>>>>>>> until
>>> >> >>>> >>>>>>>>>>>>>> a hypervisor
>>> >> >>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also
>>> reside on
>>> >> >>>> >>>>>>>>>>>>>> the
>>> >> >>>> >>>>>>>>>>>>>> SAN volume.
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no
>>> >> >>>> >>>>>>>>>>>>>> creation
>>> >> >>>> >>>>>>>>>>>>>> of
>>> >> >>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
>>> even
>>> >> >>>> >>>>>>>>>>>>>> if
>>> >> >>>> >>>>>>>>>>>>>> there were support
>>> >> >>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN
>>> per
>>> >> >>>> >>>>>>>>>>>>>> iSCSI
>>> >> >>>> >>>>>>>>>>>>>> target), then I
>>> >> >>>> >>>>>>>>>>>>>> don't see how using this model will work.
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current way
>>> this
>>> >> >>>> >>>>>>>>>>>>>> works
>>> >> >>>> >>>>>>>>>>>>>> with DIR?
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>> What do you think?
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>> Thanks
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>>> >> >>>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
>>> access
>>> >> >>>> >>>>>>>>>>>>>>> today.
>>> >> >>>> >>>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I
>>> might as
>>> >> >>>> >>>>>>>>>>>>>>> well
>>> >> >>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>> >> >>>> >>>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>>> >> >>>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I
>>> believe
>>> >> >>>> >>>>>>>>>>>>>>>> it
>>> >> >>>> >>>>>>>>>>>>>>>> just
>>> >> >>>> >>>>>>>>>>>>>>>> acts like a
>>> >> >>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to
>>> that. The
>>> >> >>>> >>>>>>>>>>>>>>>> end-user
>>> >> >>>> >>>>>>>>>>>>>>>> is
>>> >> >>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that all
>>> KVM
>>> >> >>>> >>>>>>>>>>>>>>>> hosts can
>>> >> >>>> >>>>>>>>>>>>>>>> access,
>>> >> >>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is
>>> providing the
>>> >> >>>> >>>>>>>>>>>>>>>> storage.
>>> >> >>>> >>>>>>>>>>>>>>>> It could
>>> >> >>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>>> >> >>>> >>>>>>>>>>>>>>>> filesystem,
>>> >> >>>> >>>>>>>>>>>>>>>> cloudstack just
>>> >> >>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM
>>> >> >>>> >>>>>>>>>>>>>>>> images.
>>> >> >>>> >>>>>>>>>>>>>>>>
>>> >> >>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>>> >> >>>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all
>>> at the
>>> >> >>>> >>>>>>>>>>>>>>>> > same
>>> >> >>>> >>>>>>>>>>>>>>>> > time.
>>> >> >>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
>>> >> >>>> >>>>>>>>>>>>>>>> >
>>> >> >>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike
>>> Tutkowski
>>> >> >>>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage
>>> pools:
>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>> >> >>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
>>> >> >>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
>>> >> >>>> >>>>>>>>>>>>>>>> >> default              active     yes
>>> >> >>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
>>> Tutkowski
>>> >> >>>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
>>> pool
>>> >> >>>> >>>>>>>>>>>>>>>> >>> based on
>>> >> >>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only
>>> have one
>>> >> >>>> >>>>>>>>>>>>>>>> >>> LUN, so
>>> >> >>>> >>>>>>>>>>>>>>>> >>> there would only
>>> >> >>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>>> >> >>>> >>>>>>>>>>>>>>>> >>> (libvirt)
>>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pool.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
>>> >> >>>> >>>>>>>>>>>>>>>> >>> iSCSI
>>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
>>> >> >>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
>>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>>> >> >>>> >>>>>>>>>>>>>>>> >>> does
>>> >> >>>> >>>>>>>>>>>>>>>> >>> not support
>>> >> >>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to
>>> see
>>> >> >>>> >>>>>>>>>>>>>>>> >>> if
>>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>>> >> >>>> >>>>>>>>>>>>>>>> >>> supports
>>> >> >>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
>>> mentioned,
>>> >> >>>> >>>>>>>>>>>>>>>> >>> since
>>> >> >>>> >>>>>>>>>>>>>>>> >>> each one of its
>>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs).
>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
>>> Tutkowski
>>> >> >>>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         @Override
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>     }
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> currently
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> being
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting
>>> at.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> someone
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> selects the
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
>>> iSCSI,
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> is
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> that
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Thanks!
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> Sorensen
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the
>>> iSCSI
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> server, and
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> cannot be
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> believe
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> your
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
>>> logging
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> and
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
>>> work
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the Xen
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> stuff).
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> provides
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a 1:1
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
>>> device
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> as
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
>>> more
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> about
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write
>>> your
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> own
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> rather than changing
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> We
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can cross that
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see
>>> the
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Normally,
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls
>>> made to
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to
>>> see
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> how
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> is done for
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some
>>> test
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> code
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to see if you
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
>>> iscsi
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pools before you
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> get started.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Tutkowski
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate
>>> libvirt
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > more,
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > but
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > supports
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > targets,
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > right?
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > Tutkowski
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of
>>> the
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> classes
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> last
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need
>>> the
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> iscsi
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> packages
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> for
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> login.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> and
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
>>> Tutkowski"
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> release
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
>>> storage
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create
>>> and
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish
>>> a
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> 1:1
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for
>>> QoS.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
>>> expected
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
>>> volumes
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> would
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
>>> friendly).
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme
>>> work, I
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> could
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
>>> with
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> might
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
>>> how I
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
>>> have to
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it
>>> for
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer,
>>> SolidFire
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Inc.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> cloud™
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> --
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire
>>> Inc.
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the
>>> cloud™
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>>> >> >>>> >>>>>>>>>>>>>>>>
>>>
>> ...
>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Well, you'd use neither of the two pool types, because you are not letting
libvirt handle the pool; you are doing it with your own pool and adaptor
class. Libvirt will be unaware of everything but the disk XML you attach to
a VM. You'd only use those if libvirt's functions were advantageous, i.e. if
it already did everything you want. Since neither of those seems to provide
both iSCSI and the 1:1 mapping you want, that's why we are talking about
your own pool/adaptor.
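
(For illustration, the disk XML you end up attaching is just a block-device
definition along these lines; the by-path value and the target dev are only
example values here, not something libvirt fills in for you:)

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-path/ip-10.1.1.5:3260-iscsi-iqn.2010-01.com.solidfire:vol1-lun-0'/>
      <target dev='vdb' bus='virtio'/>
    </disk>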

You can log into the target via your implementation of getPhysicalDisk as
you mention in AttachVolumeCommand, or log in during your implementation of
createStoragePool and simply rescan for luns in getPhysicalDisk. Presumably
in most cases the host will be logged in already and new luns have been
created in the meantime.
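
For a very rough sketch of what the adaptor-side logic could look like (this
is plain Java, not the real StorageAdaptor signatures; the class and helper
names and the by-path layout are assumptions you'd want to verify on your
distro):

    // Sketch only: the iscsiadm login + device lookup a custom adaptor could do.
    // createStoragePool would call loginTarget(), and getPhysicalDisk would hand
    // back lunPath() so the LUN is attached to the VM as a raw block device.
    import java.io.IOException;
    import java.util.Arrays;

    public class IscsiConnect {

        // Log the host in to the target (safe to run again if already logged in).
        public static void loginTarget(String host, int port, String iqn)
                throws IOException, InterruptedException {
            String portal = host + ":" + port;
            run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
            run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
        }

        // Prefer a stable by-path name over /dev/sdX, which can change across
        // reboots or rescans.
        public static String lunPath(String host, int port, String iqn, int lun) {
            return "/dev/disk/by-path/ip-" + host + ":" + port + "-iscsi-" + iqn
                    + "-lun-" + lun;
        }

        // Minimal exec helper; output is not consumed, which is fine for a sketch.
        private static void run(String... cmd)
                throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + Arrays.toString(cmd));
            }
        }
    }

getPhysicalDisk would then just return that path, and the existing disk XML /
attach logic can treat it like any other block device.
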
On Sep 16, 2013 12:09 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Hey Marcus,
>
> Thanks for that clarification.
>
> Sorry if this is a redundant question:
>
> When the AttachVolumeCommand comes in, it sounds like we thought the best
> approach would be for me to discover and log in to the iSCSI target using
> iscsiadm.
>
> This will create a new device: /dev/sdX.
>
> We would then pass this new device into the VM (passing XML into the
> appropriate Libvirt API).
>
> If this is an accurate understanding, can you tell me: Do you think we
> should be using a Disk Storage Pool or an iSCSI Storage Pool?
>
> I believe I recall you leaning toward a Disk Storage Pool because we will
> have already discovered the iSCSI target and, as such, will already have a
> device to pass into the VM.
>
> It seems like either way would work.
>
> Maybe I need to study Libvirt's iSCSI Storage Pools more to understand if
> they would do the work of discovering the iSCSI target for me (and maybe
> avoid me having to use iscsiadm).
>
> Thanks for the clarification! :)
>
>
> On Mon, Sep 16, 2013 at 11:08 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> It will still register the pool.  You still have a primary storage
>> pool that you registered, whether it's local, cluster or zone wide.
>> NFS is optionally zone wide as well (I'm assuming customers can launch
>> your storage only cluster-wide if they choose for resource
>> partitioning), but it registers the pool in Libvirt prior to use.
>>
>> Here's a better explanation of what I meant.  AttachVolumeCommand gets
>> both pool and volume info. It first looks up the pool:
>>
>>     KVMStoragePool primary = _storagePoolMgr.getStoragePool(
>>                     cmd.getPooltype(),
>>                     cmd.getPoolUuid());
>>
>> Then it looks up the disk from that pool:
>>
>>     KVMPhysicalDisk disk = primary.getPhysicalDisk(cmd.getVolumePath());
>>
>> Most of the commands only pass volume info like this (getVolumePath
>> generally means the uuid of the volume), since it looks up the pool
>> separately. If you don't save the pool info in a map in your custom
>> class when createStoragePool is called, then getStoragePool won't be
>> able to find it. This is a simple thing in your implementation of
>> createStoragePool, just thought I'd mention it because it is key. Just
>> create a map of pool uuid and pool object and save them so they're
>> available across all implementations of that class.
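>>
>> Something like this, for example (just a sketch; the field and method names
>> are made up):
>>
>>     private static final Map<String, KVMStoragePool> _poolsByUuid =
>>             new ConcurrentHashMap<String, KVMStoragePool>();
>>
>>     // in createStoragePool(...): log in / build the pool, then
>>     //     _poolsByUuid.put(uuid, pool);
>>     // in getStoragePool(uuid):   return _poolsByUuid.get(uuid);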
>>
>> On Mon, Sep 16, 2013 at 10:43 AM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > Thanks, Marcus
>> >
>> > About this:
>> >
>> > "When the agent connects to the
>> > management server, it registers all pools in the cluster with the
>> > agent."
>> >
>> > So, my plug-in allows you to create zone-wide primary storage. This just
>> > means that any cluster can use the SAN (the SAN was registered as
>> primary
>> > storage as opposed to a preallocated volume from the SAN). Once you
>> create a
>> > primary storage based on this plug-in, the storage framework will
>> invoke the
>> > plug-in, as needed, to create and delete volumes on the SAN. For
>> example,
>> > you could have one SolidFire primary storage (zone wide) and currently
>> have
>> > 100 volumes created on the SAN to support it.
>> >
>> > In this case, what will the management server be registering with the
>> agent
>> > in ModifyStoragePool? If only the storage pool (primary storage) is
>> passed
>> > in, that will be too vague as it does not contain information on what
>> > volumes have been created for the agent.
>> >
>> > Thanks
>> >
>> >
>> > On Sun, Sep 15, 2013 at 11:53 PM, Marcus Sorensen <sh...@gmail.com>
>> > wrote:
>> >>
>> >> Yes, see my previous email from the 13th. You can create your own
>> >> KVMStoragePool class, and StorageAdaptor class, like the libvirt ones
>> >> have. The previous email outlines how to add your own StorageAdaptor
>> >> alongside LibvirtStorageAdaptor to take over all of the calls
>> >> (createStoragePool, getStoragePool, etc). As mentioned,
>> >> getPhysicalDisk I believe will be the one you use to actually attach a
>> >> lun.
>> >>
>> >> Ignore CreateStoragePoolCommand. When the agent connects to the
>> >> management server, it registers all pools in the cluster with the
>> >> agent. It will call ModifyStoragePoolCommand, passing your storage
>> >> pool object (with all of the settings for your SAN). This in turn
>> >> calls _storagePoolMgr.createStoragePool, which will route through
>> >> KVMStoragePoolManager to your storage adapter that you've registered.
>> >> The last argument to createStoragePool is the pool type, which is used
>> >> to select a StorageAdaptor.
>> >>
>> >> From then on, most calls will only pass the volume info, and the
>> >> volume will have the uuid of the storage pool. For this reason, your
>> >> adaptor class needs to have a static Map variable that contains pool
>> >> uuid and pool object. Whenever they call createStoragePool on your
>> >> adaptor you add that pool to the map so that subsequent volume calls
>> >> can look up the pool details for the volume by pool uuid. With the
>> >> Libvirt adaptor, libvirt keeps track of that for you.
>> >>
>> >> When createStoragePool is called, you can log into the iscsi target
>> >> (or make sure you are already logged in, as it can be called over
>> >> again at any time), and when attach volume commands are fired off, you
>> >> can attach individual LUNs that are asked for, or rescan (say that the
>> >> plugin created a new ACL just prior to calling attach), or whatever is
>> >> necessary.
>> >>
>> >> KVM is a bit more work, but you can do anything you want. Actually, I
>> >> think you can call host scripts with Xen, but having the agent there
>> >> that runs your own code gives you the flexibility to do whatever.
>> >>
>> >> On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski
>> >> <mi...@solidfire.com> wrote:
>> >> > I see right now LibvirtComputingResource.java has the following
>> method
>> >> > that
>> >> > I might be able to leverage (it's probably not called at present and
>> >> > would
>> >> > need to be implemented in my case to discover my iSCSI target and
>> log in
>> >> > to
>> >> > it):
>> >> >
>> >> >     protected Answer execute(CreateStoragePoolCommand cmd) {
>> >> >
>> >> >         return new Answer(cmd, true, "success");
>> >> >
>> >> >     }
>> >> >
>> >> > I would probably be able to call the KVMStorageManager to have it
>> use my
>> >> > StorageAdaptor to do what's necessary here.
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski
>> >> > <mi...@solidfire.com> wrote:
>> >> >>
>> >> >> Hey Marcus,
>> >> >>
>> >> >> When I implemented support in the XenServer and VMware plug-ins for
>> >> >> "managed" storage, I started at the execute(AttachVolumeCommand)
>> >> >> methods in
>> >> >> both plug-ins.
>> >> >>
>> >> >> The code there was changed to check the AttachVolumeCommand instance
>> >> >> for a
>> >> >> "managed" property.
>> >> >>
>> >> >> If managed was false, the normal attach/detach logic would just run
>> and
>> >> >> the volume would be attached or detached.
>> >> >>
>> >> >> If managed was true, new 4.2 logic would run to create (let's talk
>> >> >> XenServer here) a new SR and a new VDI inside of that SR (or to
>> >> >> reattach an
>> >> >> existing VDI inside an existing SR, if this wasn't the first time
>> the
>> >> >> volume
>> >> >> was attached). If managed was true and we were detaching the volume,
>> >> >> the SR
>> >> >> would be detached from the XenServer hosts.
>> >> >>
>> >> >> I am currently walking through the execute(AttachVolumeCommand) in
>> >> >> LibvirtComputingResource.java.
>> >> >>
>> >> >> I see how the XML is constructed to describe whether a disk should
>> be
>> >> >> attached or detached. I also see how we call in to get a
>> StorageAdapter
>> >> >> (and
>> >> >> how I will likely need to write a new one of these).
>> >> >>
>> >> >> So, talking in XenServer terminology again, I was wondering if you
>> >> >> think
>> >> >> the approach we took in 4.2 with creating and deleting SRs in the
>> >> >> execute(AttachVolumeCommand) method would work here or if there is
>> some
>> >> >> other way I should be looking at this for KVM?
>> >> >>
>> >> >> As it is right now for KVM, storage has to be set up ahead of time.
>> >> >> Assuming this is the case, there probably isn't currently a place I
>> can
>> >> >> easily inject my logic to discover and log in to iSCSI targets.
>> This is
>> >> >> why
>> >> >> we did it as needed in the execute(AttachVolumeCommand) for
>> XenServer
>> >> >> and
>> >> >> VMware, but I wanted to see if you have an alternative way that
>> might
>> >> >> be
>> >> >> better for KVM.
>> >> >>
>> >> >> One possible way to do this would be to modify VolumeManagerImpl (or
>> >> >> whatever its equivalent is in 4.3) before it issues an attach-volume
>> >> >> command
>> >> >> to KVM to check to see if the volume is to be attached to managed
>> >> >> storage.
>> >> >> If it is, then (before calling the attach-volume command in KVM)
>> call
>> >> >> the
>> >> >> create-storage-pool command in KVM (or whatever it might be called).
>> >> >>
>> >> >> Just wanted to get some of your thoughts on this.
>> >> >>
>> >> >> Thanks!
>> >> >>
>> >> >>
>> >> >> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski
>> >> >> <mi...@solidfire.com> wrote:
>> >> >>>
>> >> >>> Yeah, I remember that StorageProcessor stuff being put in the
>> codebase
>> >> >>> and having to merge my code into it in 4.2.
>> >> >>>
>> >> >>> Thanks for all the details, Marcus! :)
>> >> >>>
>> >> >>> I can start digging into what you were talking about now.
>> >> >>>
>> >> >>>
>> >> >>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen
>> >> >>> <sh...@gmail.com>
>> >> >>> wrote:
>> >> >>>>
>> >> >>>> Looks like things might be slightly different now in 4.2, with
>> >> >>>> KVMStorageProcessor.java in the mix. This looks more or less like
>> some
>> >> >>>> of the commands were ripped out verbatim from
>> >> >>>> LibvirtComputingResource
>> >> >>>> and placed here, so in general what I've said is probably still
>> true,
>> >> >>>> just that the location of things like AttachVolumeCommand might be
>> >> >>>> different, in this file rather than LibvirtComputingResource.java.
>> >> >>>>
>> >> >>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen
>> >> >>>> <sh...@gmail.com>
>> >> >>>> wrote:
>> >> >>>> > Ok, KVM will be close to that, of course, because only the
>> >> >>>> > hypervisor
>> >> >>>> > classes differ, the rest is all mgmt server. Creating a volume
>> is
>> >> >>>> > just
>> >> >>>> > a db entry until it's deployed for the first time.
>> >> >>>> > AttachVolumeCommand
>> >> >>>> > on the agent side (LibvirtStorageAdaptor.java is analogous to
>> >> >>>> > CitrixResourceBase.java) will do the iscsiadm commands (via a
>> KVM
>> >> >>>> > StorageAdaptor) to log in the host to the target and then you
>> have
>> >> >>>> > a
>> >> >>>> > block device.  Maybe libvirt will do that for you, but my quick
>> >> >>>> > read
>> >> >>>> > made it sound like the iscsi libvirt pool type is actually a
>> pool,
>> >> >>>> > not
>> >> >>>> > a lun or volume, so you'll need to figure out if that works or
>> if
>> >> >>>> > you'll have to use iscsiadm commands.
>> >> >>>> >
>> >> >>>> > If you're NOT going to use LibvirtStorageAdaptor (because
>> Libvirt
>> >> >>>> > doesn't really manage your pool the way you want), you're going
>> to
>> >> >>>> > have to create a version of KVMStoragePool class and a
>> >> >>>> > StorageAdaptor
>> >> >>>> > class (see LibvirtStoragePool.java and
>> LibvirtStorageAdaptor.java),
>> >> >>>> > implementing all of the methods, then in KVMStorageManager.java
>> >> >>>> > there's a "_storageMapper" map. This is used to select the
>> correct
>> >> >>>> > adaptor, you can see in this file that every call first pulls
>> the
>> >> >>>> > correct adaptor out of this map via getStorageAdaptor. So you
>> can
>> >> >>>> > see
>> >> >>>> > a comment in this file that says "add other storage adaptors
>> here",
>> >> >>>> > where it puts to this map, this is where you'd register your
>> >> >>>> > adaptor.
>> >> >>>> >
>> >> >>>> > So, referencing StorageAdaptor.java, createStoragePool accepts
>> all
>> >> >>>> > of
>> >> >>>> > the pool data (host, port, name, path) which would be used to
>> log
>> >> >>>> > the
>> >> >>>> > host into the initiator. I *believe* the method getPhysicalDisk
>> >> >>>> > will
>> >> >>>> > need to do the work of attaching the lun.  AttachVolumeCommand
>> >> >>>> > calls
>> >> >>>> > this and then creates the XML diskdef and attaches it to the VM.
>> >> >>>> > Now,
>> >> >>>> > one thing you need to know is that createStoragePool is called
>> >> >>>> > often,
>> >> >>>> > sometimes just to make sure the pool is there. You may want to
>> >> >>>> > create
>> >> >>>> > a map in your adaptor class and keep track of pools that have
>> been
>> >> >>>> > created, LibvirtStorageAdaptor doesn't have to do this because
>> it
>> >> >>>> > asks
>> >> >>>> > libvirt about which storage pools exist. There are also calls to
>> >> >>>> > refresh the pool stats, and all of the other calls can be seen
>> in
>> >> >>>> > the
>> >> >>>> > StorageAdaptor as well. There's a createPhysicalDisk, clone,
>> etc,
>> >> >>>> > but
>> >> >>>> > it's probably a hold-over from 4.1, as I have the vague idea
>> that
>> >> >>>> > volumes are created on the mgmt server via the plugin now, so
>> >> >>>> > whatever
>> >> >>>> > doesn't apply can just be stubbed out (or optionally
>> >> >>>> > extended/reimplemented here, if you don't mind the hosts
>> talking to
>> >> >>>> > the san api).
>> >> >>>> >
>> >> >>>> > There is a difference between attaching new volumes and
>> launching a
>> >> >>>> > VM
>> >> >>>> > with existing volumes.  In the latter case, the VM definition
>> that
>> >> >>>> > was
>> >> >>>> > passed to the KVM agent includes the disks, (StartCommand).
>> >> >>>> >
>> >> >>>> > I'd be interested in how your pool is defined for Xen, I
>> imagine it
>> >> >>>> > would need to be kept the same. Is it just a definition to the
>> SAN
>> >> >>>> > (ip address or some such, port number) and perhaps a volume pool
>> >> >>>> > name?
>> >> >>>> >
>> >> >>>> >> If there is a way for me to update the ACL list on the SAN to
>> have
>> >> >>>> >> only a
>> >> >>>> >> single KVM host have access to the volume, that would be ideal.
>> >> >>>> >
>> >> >>>> > That depends on your SAN API.  I was under the impression that
>> the
>> >> >>>> > storage plugin framework allowed for acls, or for you to do
>> >> >>>> > whatever
>> >> >>>> > you want for create/attach/delete/snapshot, etc. You'd just call
>> >> >>>> > your
>> >> >>>> > SAN API with the host info for the ACLs prior to when the disk
>> is
>> >> >>>> > attached (or the VM is started).  I'd have to look more at the
>> >> >>>> > framework to know the details, in 4.1 I would do this in
>> >> >>>> > getPhysicalDisk just prior to connecting up the LUN.
>> >> >>>> >
>> >> >>>> >
>> >> >>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> >> >>>> > <mi...@solidfire.com> wrote:
>> >> >>>> >> OK, yeah, the ACL part will be interesting. That is a bit
>> >> >>>> >> different
>> >> >>>> >> from how
>> >> >>>> >> it works with XenServer and VMware.
>> >> >>>> >>
>> >> >>>> >> Just to give you an idea how it works in 4.2 with XenServer:
>> >> >>>> >>
>> >> >>>> >> * The user creates a CS volume (this is just recorded in the
>> >> >>>> >> cloud.volumes
>> >> >>>> >> table).
>> >> >>>> >>
>> >> >>>> >> * The user attaches the volume as a disk to a VM for the first
>> >> >>>> >> time
>> >> >>>> >> (if the
>> >> >>>> >> storage allocator picks the SolidFire plug-in, the storage
>> >> >>>> >> framework
>> >> >>>> >> invokes
>> >> >>>> >> a method on the plug-in that creates a volume on the SAN...info
>> >> >>>> >> like
>> >> >>>> >> the IQN
>> >> >>>> >> of the SAN volume is recorded in the DB).
>> >> >>>> >>
>> >> >>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is
>> executed.
>> >> >>>> >> It
>> >> >>>> >> determines based on a flag passed in that the storage in
>> question
>> >> >>>> >> is
>> >> >>>> >> "CloudStack-managed" storage (as opposed to "traditional"
>> >> >>>> >> preallocated
>> >> >>>> >> storage). This tells it to discover the iSCSI target. Once
>> >> >>>> >> discovered
>> >> >>>> >> it
>> >> >>>> >> determines if the iSCSI target already contains a storage
>> >> >>>> >> repository
>> >> >>>> >> (it
>> >> >>>> >> would if this were a re-attach situation). If it does contain
>> an
>> >> >>>> >> SR
>> >> >>>> >> already,
>> >> >>>> >> then there should already be one VDI, as well. If there is no
>> SR,
>> >> >>>> >> an
>> >> >>>> >> SR is
>> >> >>>> >> created and a single VDI is created within it (that takes up
>> about
>> >> >>>> >> as
>> >> >>>> >> much
>> >> >>>> >> space as was requested for the CloudStack volume).
>> >> >>>> >>
>> >> >>>> >> * The normal attach-volume logic continues (it depends on the
>> >> >>>> >> existence of
>> >> >>>> >> an SR and a VDI).
>> >> >>>> >>
>> >> >>>> >> The VMware case is essentially the same (mainly just substitute
>> >> >>>> >> datastore
>> >> >>>> >> for SR and VMDK for VDI).
>> >> >>>> >>
>> >> >>>> >> In both cases, all hosts in the cluster have discovered the
>> iSCSI
>> >> >>>> >> target,
>> >> >>>> >> but only the host that is currently running the VM that is
>> using
>> >> >>>> >> the
>> >> >>>> >> VDI (or
>> >> >>>> >> VMDK) is actually using the disk.
>> >> >>>> >>
>> >> >>>> >> Live Migration should be OK because the hypervisors communicate
>> >> >>>> >> with
>> >> >>>> >> whatever metadata they have on the SR (or datastore).
>> >> >>>> >>
>> >> >>>> >> I see what you're saying with KVM, though.
>> >> >>>> >>
>> >> >>>> >> In that case, the hosts are clustered only in CloudStack's
>> eyes.
>> >> >>>> >> CS
>> >> >>>> >> controls
>> >> >>>> >> Live Migration. You don't really need a clustered filesystem on
>> >> >>>> >> the
>> >> >>>> >> LUN. The
>> >> >>>> >> LUN could be handed over raw to the VM using it.
>> >> >>>> >>
>> >> >>>> >> If there is a way for me to update the ACL list on the SAN to
>> have
>> >> >>>> >> only a
>> >> >>>> >> single KVM host have access to the volume, that would be ideal.
>> >> >>>> >>
>> >> >>>> >> Also, I agree I'll need to use iscsiadm to discover and log in
>> to
>> >> >>>> >> the
>> >> >>>> >> iSCSI
>> >> >>>> >> target. I'll also need to take the resultant new device and
>> pass
>> >> >>>> >> it
>> >> >>>> >> into the
>> >> >>>> >> VM.
>> >> >>>> >>
>> >> >>>> >> Does this sound reasonable? Please call me out on anything I
>> seem
>> >> >>>> >> incorrect
>> >> >>>> >> about. :)
>> >> >>>> >>
>> >> >>>> >> Thanks for all the thought on this, Marcus!
>> >> >>>> >>
>> >> >>>> >>
>> >> >>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen
>> >> >>>> >> <sh...@gmail.com>
>> >> >>>> >> wrote:
>> >> >>>> >>>
>> >> >>>> >>> Perfect. You'll have a domain def ( the VM), a disk def, and
>> the
>> >> >>>> >>> attach
>> >> >>>> >>> the disk def to the vm. You may need to do your own
>> >> >>>> >>> StorageAdaptor
>> >> >>>> >>> and run
>> >> >>>> >>> iscsiadm commands to accomplish that, depending on how the
>> >> >>>> >>> libvirt
>> >> >>>> >>> iscsi
>> >> >>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't
>> how it
>> >> >>>> >>> works on
>> >> >>>> >>> xen at the moment, nor is it ideal.
>> >> >>>> >>>
>> >> >>>> >>> Your plugin will handle acls as far as which host can see
>> which
>> >> >>>> >>> luns
>> >> >>>> >>> as
>> >> >>>> >>> well, I remember discussing that months ago, so that a disk
>> won't
>> >> >>>> >>> be
>> >> >>>> >>> connected until the hypervisor has exclusive access, so it
>> will
>> >> >>>> >>> be
>> >> >>>> >>> safe and
>> >> >>>> >>> fence the disk from rogue nodes that cloudstack loses
>> >> >>>> >>> connectivity
>> >> >>>> >>> with. It
>> >> >>>> >>> should revoke access to everything but the target host...
>> Except
>> >> >>>> >>> for
>> >> >>>> >>> during
>> >> >>>> >>> migration but we can discuss that later, there's a migration
>> prep
>> >> >>>> >>> process
>> >> >>>> >>> where the new host can be added to the acls, and the old host
>> can
>> >> >>>> >>> be
>> >> >>>> >>> removed
>> >> >>>> >>> post migration.
>> >> >>>> >>>
>> >> >>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski"
>> >> >>>> >>> <mi...@solidfire.com>
>> >> >>>> >>> wrote:
>> >> >>>> >>>>
>> >> >>>> >>>> Yeah, that would be ideal.
>> >> >>>> >>>>
>> >> >>>> >>>> So, I would still need to discover the iSCSI target, log in
>> to
>> >> >>>> >>>> it,
>> >> >>>> >>>> then
>> >> >>>> >>>> figure out what /dev/sdX was created as a result (and leave
>> it
>> >> >>>> >>>> as
>> >> >>>> >>>> is - do
>> >> >>>> >>>> not format it with any file system...clustered or not). I
>> would
>> >> >>>> >>>> pass that
>> >> >>>> >>>> device into the VM.
>> >> >>>> >>>>
>> >> >>>> >>>> Kind of accurate?
>> >> >>>> >>>>
>> >> >>>> >>>>
>> >> >>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
>> >> >>>> >>>> <sh...@gmail.com>
>> >> >>>> >>>> wrote:
>> >> >>>> >>>>>
>> >> >>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk
>> definitions.
>> >> >>>> >>>>> There are
>> >> >>>> >>>>> ones that work for block devices rather than files. You can
>> >> >>>> >>>>> piggy
>> >> >>>> >>>>> back off
>> >> >>>> >>>>> of the existing disk definitions and attach it to the vm as
>> a
>> >> >>>> >>>>> block device.
>> >> >>>> >>>>> The definition is an XML string per libvirt XML format. You
>> may
>> >> >>>> >>>>> want to use
>> >> >>>> >>>>> an alternate path to the disk rather than just /dev/sdx
>> like I
>> >> >>>> >>>>> mentioned,
>> >> >>>> >>>>> there are by-id paths to the block devices, as well as other
>> >> >>>> >>>>> ones
>> >> >>>> >>>>> that will
>> >> >>>> >>>>> be consistent and easier for management, not sure how
>> familiar
>> >> >>>> >>>>> you
>> >> >>>> >>>>> are with
>> >> >>>> >>>>> device naming on Linux.
>> >> >>>> >>>>>
>> >> >>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>> >> >>>> >>>>> <sh...@gmail.com>
>> >> >>>> >>>>> wrote:
>> >> >>>> >>>>>>
>> >> >>>> >>>>>> No, as that would rely on virtualized network/iscsi
>> initiator
>> >> >>>> >>>>>> inside
>> >> >>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun
>> on
>> >> >>>> >>>>>> hypervisor) as
>> >> >>>> >>>>>> a disk to the VM, rather than attaching some image file
>> that
>> >> >>>> >>>>>> resides on a
>> >> >>>> >>>>>> filesystem, mounted on the host, living on a target.
>> >> >>>> >>>>>>
>> >> >>>> >>>>>> Actually, if you plan on the storage supporting live
>> migration
>> >> >>>> >>>>>> I
>> >> >>>> >>>>>> think
>> >> >>>> >>>>>> this is the only way. You can't put a filesystem on it and
>> >> >>>> >>>>>> mount
>> >> >>>> >>>>>> it in two
>> >> >>>> >>>>>> places to facilitate migration unless its a clustered
>> >> >>>> >>>>>> filesystem,
>> >> >>>> >>>>>> in which
>> >> >>>> >>>>>> case you're back to shared mount point.
>> >> >>>> >>>>>>
>> >> >>>> >>>>>> As far as I'm aware, the xenserver SR style is basically
>> LVM
>> >> >>>> >>>>>> with
>> >> >>>> >>>>>> a xen
>> >> >>>> >>>>>> specific cluster management, a custom CLVM. They don't use
>> a
>> >> >>>> >>>>>> filesystem
>> >> >>>> >>>>>> either.
>> >> >>>> >>>>>>
>> >> >>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>> >> >>>> >>>>>> <mi...@solidfire.com> wrote:
>> >> >>>> >>>>>>>
>> >> >>>> >>>>>>> When you say, "wire up the lun directly to the vm," do you
>> >> >>>> >>>>>>> mean
>> >> >>>> >>>>>>> circumventing the hypervisor? I didn't think we could do
>> that
>> >> >>>> >>>>>>> in
>> >> >>>> >>>>>>> CS.
>> >> >>>> >>>>>>> OpenStack, on the other hand, always circumvents the
>> >> >>>> >>>>>>> hypervisor,
>> >> >>>> >>>>>>> as far as I
>> >> >>>> >>>>>>> know.
>> >> >>>> >>>>>>>
>> >> >>>> >>>>>>>
>> >> >>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
>> >> >>>> >>>>>>> <sh...@gmail.com>
>> >> >>>> >>>>>>> wrote:
>> >> >>>> >>>>>>>>
>> >> >>>> >>>>>>>> Better to wire up the lun directly to the vm unless
>> there is
>> >> >>>> >>>>>>>> a
>> >> >>>> >>>>>>>> good
>> >> >>>> >>>>>>>> reason not to.
>> >> >>>> >>>>>>>>
>> >> >>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen"
>> >> >>>> >>>>>>>> <sh...@gmail.com>
>> >> >>>> >>>>>>>> wrote:
>> >> >>>> >>>>>>>>>
>> >> >>>> >>>>>>>>> You could do that, but as mentioned I think its a
>> mistake
>> >> >>>> >>>>>>>>> to
>> >> >>>> >>>>>>>>> go to
>> >> >>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to
>> luns
>> >> >>>> >>>>>>>>> and then putting
>> >> >>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a
>> QCOW2
>> >> >>>> >>>>>>>>> or
>> >> >>>> >>>>>>>>> even RAW disk
>> >> >>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops
>> along
>> >> >>>> >>>>>>>>> the
>> >> >>>> >>>>>>>>> way, and have
>> >> >>>> >>>>>>>>> more overhead with the filesystem and its journaling,
>> etc.
>> >> >>>> >>>>>>>>>
>> >> >>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>> >> >>>> >>>>>>>>> <mi...@solidfire.com> wrote:
>> >> >>>> >>>>>>>>>>
>> >> >>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM
>> with
>> >> >>>> >>>>>>>>>> CS.
>> >> >>>> >>>>>>>>>>
>> >> >>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today
>> is by
>> >> >>>> >>>>>>>>>> selecting SharedMountPoint and specifying the location
>> of
>> >> >>>> >>>>>>>>>> the
>> >> >>>> >>>>>>>>>> share.
>> >> >>>> >>>>>>>>>>
>> >> >>>> >>>>>>>>>> They can set up their share using Open iSCSI by
>> >> >>>> >>>>>>>>>> discovering
>> >> >>>> >>>>>>>>>> their
>> >> >>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it
>> somewhere
>> >> >>>> >>>>>>>>>> on
>> >> >>>> >>>>>>>>>> their file
>> >> >>>> >>>>>>>>>> system.
>> >> >>>> >>>>>>>>>>
>> >> >>>> >>>>>>>>>> Would it make sense for me to just do that discovery,
>> >> >>>> >>>>>>>>>> logging
>> >> >>>> >>>>>>>>>> in,
>> >> >>>> >>>>>>>>>> and mounting behind the scenes for them and letting the
>> >> >>>> >>>>>>>>>> current code manage
>> >> >>>> >>>>>>>>>> the rest as it currently does?
>> >> >>>> >>>>>>>>>>
>> >> >>>> >>>>>>>>>>
>> >> >>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>> >> >>>> >>>>>>>>>> <sh...@gmail.com> wrote:
>> >> >>>> >>>>>>>>>>>
>> >> >>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need
>> to
>> >> >>>> >>>>>>>>>>> catch up
>> >> >>>> >>>>>>>>>>> on the work done in KVM, but this is basically just
>> disk
>> >> >>>> >>>>>>>>>>> snapshots + memory
>> >> >>>> >>>>>>>>>>> dump. I still think disk snapshots would preferably be
>> >> >>>> >>>>>>>>>>> handled by the SAN,
>> >> >>>> >>>>>>>>>>> and then memory dumps can go to secondary storage or
>> >> >>>> >>>>>>>>>>> something else. This is
>> >> >>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will
>> want to
>> >> >>>> >>>>>>>>>>> see how others are
>> >> >>>> >>>>>>>>>>> planning theirs.
>> >> >>>> >>>>>>>>>>>
>> >> >>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen"
>> >> >>>> >>>>>>>>>>> <sh...@gmail.com>
>> >> >>>> >>>>>>>>>>> wrote:
>> >> >>>> >>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
>> >> >>>> >>>>>>>>>>>> style
>> >> >>>> >>>>>>>>>>>> on an
>> >> >>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
>> >> >>>> >>>>>>>>>>>> format.
>> >> >>>> >>>>>>>>>>>> Otherwise you're
>> >> >>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it,
>> creating
>> >> >>>> >>>>>>>>>>>> a
>> >> >>>> >>>>>>>>>>>> QCOW2 disk image,
>> >> >>>> >>>>>>>>>>>> and that seems unnecessary and a performance killer.
>> >> >>>> >>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to
>> the
>> >> >>>> >>>>>>>>>>>> VM, and
>> >> >>>> >>>>>>>>>>>> handling snapshots on the San side via the storage
>> >> >>>> >>>>>>>>>>>> plugin
>> >> >>>> >>>>>>>>>>>> is best. My
>> >> >>>> >>>>>>>>>>>> impression from the storage plugin refactor was that
>> >> >>>> >>>>>>>>>>>> there
>> >> >>>> >>>>>>>>>>>> was a snapshot
>> >> >>>> >>>>>>>>>>>> service that would allow the San to handle snapshots.
>> >> >>>> >>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen"
>> >> >>>> >>>>>>>>>>>> <sh...@gmail.com>
>> >> >>>> >>>>>>>>>>>> wrote:
>> >> >>>> >>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
>> back
>> >> >>>> >>>>>>>>>>>>> end, if
>> >> >>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server
>> could
>> >> >>>> >>>>>>>>>>>>> call
>> >> >>>> >>>>>>>>>>>>> your plugin for
>> >> >>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor
>> agnostic. As
>> >> >>>> >>>>>>>>>>>>> far as space, that
>> >> >>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With ours,
>> we
>> >> >>>> >>>>>>>>>>>>> carve out luns from a
>> >> >>>> >>>>>>>>>>>>> pool, and the snapshot spave comes from the pool
>> and is
>> >> >>>> >>>>>>>>>>>>> independent of the
>> >> >>>> >>>>>>>>>>>>> LUN size the host sees.
>> >> >>>> >>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>> >> >>>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>> Hey Marcus,
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
>> >> >>>> >>>>>>>>>>>>>> won't
>> >> >>>> >>>>>>>>>>>>>> work
>> >> >>>> >>>>>>>>>>>>>> when you take into consideration hypervisor
>> snapshots?
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot,
>> the
>> >> >>>> >>>>>>>>>>>>>> VDI for
>> >> >>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage
>> repository
>> >> >>>> >>>>>>>>>>>>>> as
>> >> >>>> >>>>>>>>>>>>>> the volume is on.
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for
>> >> >>>> >>>>>>>>>>>>>> XenServer
>> >> >>>> >>>>>>>>>>>>>> and
>> >> >>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>> >> >>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd
>> >> >>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what the
>> user
>> >> >>>> >>>>>>>>>>>>>> requested for the
>> >> >>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
>> >> >>>> >>>>>>>>>>>>>> thinly
>> >> >>>> >>>>>>>>>>>>>> provisions volumes,
>> >> >>>> >>>>>>>>>>>>>> so the space is not actually used unless it needs
>> to
>> >> >>>> >>>>>>>>>>>>>> be).
>> >> >>>> >>>>>>>>>>>>>> The CloudStack
>> >> >>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN volume
>> >> >>>> >>>>>>>>>>>>>> until
>> >> >>>> >>>>>>>>>>>>>> a hypervisor
>> >> >>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside
>> on
>> >> >>>> >>>>>>>>>>>>>> the
>> >> >>>> >>>>>>>>>>>>>> SAN volume.
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no
>> >> >>>> >>>>>>>>>>>>>> creation
>> >> >>>> >>>>>>>>>>>>>> of
>> >> >>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
>> even
>> >> >>>> >>>>>>>>>>>>>> if
>> >> >>>> >>>>>>>>>>>>>> there were support
>> >> >>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
>> >> >>>> >>>>>>>>>>>>>> iSCSI
>> >> >>>> >>>>>>>>>>>>>> target), then I
>> >> >>>> >>>>>>>>>>>>>> don't see how using this model will work.
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current way
>> this
>> >> >>>> >>>>>>>>>>>>>> works
>> >> >>>> >>>>>>>>>>>>>> with DIR?
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>> What do you think?
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>> Thanks
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>> >> >>>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> >> >>>> >>>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
>> access
>> >> >>>> >>>>>>>>>>>>>>> today.
>> >> >>>> >>>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I might
>> as
>> >> >>>> >>>>>>>>>>>>>>> well
>> >> >>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>> >> >>>> >>>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>> >> >>>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> >> >>>> >>>>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I
>> believe
>> >> >>>> >>>>>>>>>>>>>>>> it
>> >> >>>> >>>>>>>>>>>>>>>> just
>> >> >>>> >>>>>>>>>>>>>>>> acts like a
>> >> >>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that.
>> The
>> >> >>>> >>>>>>>>>>>>>>>> end-user
>> >> >>>> >>>>>>>>>>>>>>>> is
>> >> >>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that all
>> KVM
>> >> >>>> >>>>>>>>>>>>>>>> hosts can
>> >> >>>> >>>>>>>>>>>>>>>> access,
>> >> >>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing
>> the
>> >> >>>> >>>>>>>>>>>>>>>> storage.
>> >> >>>> >>>>>>>>>>>>>>>> It could
>> >> >>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>> >> >>>> >>>>>>>>>>>>>>>> filesystem,
>> >> >>>> >>>>>>>>>>>>>>>> cloudstack just
>> >> >>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM
>> >> >>>> >>>>>>>>>>>>>>>> images.
>> >> >>>> >>>>>>>>>>>>>>>>
>> >> >>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>> >> >>>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> >> >>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at
>> the
>> >> >>>> >>>>>>>>>>>>>>>> > same
>> >> >>>> >>>>>>>>>>>>>>>> > time.
>> >> >>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
>> >> >>>> >>>>>>>>>>>>>>>> >
>> >> >>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>> >> >>>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>> >> >>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage
>> pools:
>> >> >>>> >>>>>>>>>>>>>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> >> >>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
>> >> >>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
>> >> >>>> >>>>>>>>>>>>>>>> >> default              active     yes
>> >> >>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
>> >> >>>> >>>>>>>>>>>>>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike
>> Tutkowski
>> >> >>>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>> >> >>>> >>>>>>>>>>>>>>>> >>>
>> >> >>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>> >> >>>> >>>>>>>>>>>>>>>> >>>
>> >> >>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
>> >> >>>> >>>>>>>>>>>>>>>> >>>
>> >> >>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage
>> pool
>> >> >>>> >>>>>>>>>>>>>>>> >>> based on
>> >> >>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
>> >> >>>> >>>>>>>>>>>>>>>> >>>
>> >> >>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have
>> one
>> >> >>>> >>>>>>>>>>>>>>>> >>> LUN, so
>> >> >>>> >>>>>>>>>>>>>>>> >>> there would only
>> >> >>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>> >> >>>> >>>>>>>>>>>>>>>> >>> (libvirt)
>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pool.
>> >> >>>> >>>>>>>>>>>>>>>> >>>
>> >> >>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
>> >> >>>> >>>>>>>>>>>>>>>> >>> iSCSI
>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
>> >> >>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>> >> >>>> >>>>>>>>>>>>>>>> >>> does
>> >> >>>> >>>>>>>>>>>>>>>> >>> not support
>> >> >>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>> >> >>>> >>>>>>>>>>>>>>>> >>>
>> >> >>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to
>> see
>> >> >>>> >>>>>>>>>>>>>>>> >>> if
>> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>> >> >>>> >>>>>>>>>>>>>>>> >>> supports
>> >> >>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
>> mentioned,
>> >> >>>> >>>>>>>>>>>>>>>> >>> since
>> >> >>>> >>>>>>>>>>>>>>>> >>> each one of its
>> >> >>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs).
>> >> >>>> >>>>>>>>>>>>>>>> >>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>
>> >> >>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
>> Tutkowski
>> >> >>>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>> >> >>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>> >> >>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>         @Override
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>     }
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
>> >> >>>> >>>>>>>>>>>>>>>> >>>> currently
>> >> >>>> >>>>>>>>>>>>>>>> >>>> being
>> >> >>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
>> >> >>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
>> >> >>>> >>>>>>>>>>>>>>>> >>>> someone
>> >> >>>> >>>>>>>>>>>>>>>> >>>> selects the
>> >> >>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
>> iSCSI,
>> >> >>>> >>>>>>>>>>>>>>>> >>>> is
>> >> >>>> >>>>>>>>>>>>>>>> >>>> that
>> >> >>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
>> >> >>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>> Thanks!
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
>> >> >>>> >>>>>>>>>>>>>>>> >>>> Sorensen
>> >> >>>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>> >> >>>> >>>>>>>>>>>>>>>> >>>> wrote:
>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>> http://libvirt.org/storage.html#StorageBackendISCSI
>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> server, and
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> cannot be
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> believe
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> your
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
>> logging
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> and
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that
>> work
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> the Xen
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> stuff).
>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> provides
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a 1:1
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
>> device
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> as
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> a
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
>> more
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> about
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write
>> your
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> own
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> rather than changing
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> We
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can cross that
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Normally,
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made
>> to
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to
>> see
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> how
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> is done for
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> code
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> to see if you
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
>> iscsi
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> pools before you
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> get started.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> Tutkowski
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > more,
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > but
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > supports
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > targets,
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > right?
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > Tutkowski
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of
>> the
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> classes
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> last
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> iscsi
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> packages
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> for
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> login.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> and
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
>> Tutkowski"
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> release
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the
>> storage
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> 1:1
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always
>> expected
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those
>> volumes
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> would
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS
>> friendly).
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work,
>> I
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> could
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen
>> with
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> might
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know
>> how I
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I
>> have to
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it
>> for
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Inc.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> cloud™
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> --
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire
>> Inc.
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the
>> cloud™
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
>> >> >>>> >>>>>>>>>>>>>>>>
>>
> ...

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hey Marcus,

Thanks for that clarification.

Sorry if this is a redundant question:

When the AttachVolumeCommand comes in, it sounds like we thought the best
approach would be for me to discover and log in to the iSCSI target using
iscsiadm.

This will create a new device: /dev/sdX.
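(For concreteness, here is roughly what I picture running on the KVM host; the portal address and IQN below are just made-up examples:)

    # discover the targets the SAN presents at this portal
    iscsiadm -m discovery -t sendtargets -p 192.168.56.10:3260

    # log in to the one target backing this CloudStack volume
    iscsiadm -m node -T iqn.2013-09.com.solidfire:vol-1 -p 192.168.56.10:3260 --login

    # the LUN then shows up as a block device; the by-path name stays stable
    ls /dev/disk/by-path/ | grep 'iqn.2013-09.com.solidfire:vol-1'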

We would then pass this new device into the VM (passing XML into the
appropriate Libvirt API).
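(On the Libvirt side, I'm assuming the disk definition we generate would end up looking something like this, using the by-path device from the login above; the driver and cache settings are just my guess:)

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-path/ip-192.168.56.10:3260-iscsi-iqn.2013-09.com.solidfire:vol-1-lun-0'/>
      <target dev='vdb' bus='virtio'/>
    </disk>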

If this is an accurate understanding, can you tell me: Do you think we
should be using a Disk Storage Pool or an iSCSI Storage Pool?

I believe I recall you leaning toward a Disk Storage Pool because we will
have already discovered the iSCSI target and, as such, will already have a
device to pass into the VM.

It seems like either way would work.

Maybe I need to study Libvirt's iSCSI Storage Pools more to understand if
they would do the work of discovering the iSCSI target for me (and maybe
avoid me having to use iscsiadm).
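For reference, a pool definition along these lines is what the libvirt storage docs describe (host and IQN are placeholders again). As I read it, libvirt handles the discovery and login when the pool is started, but it treats the LUN as a pre-existing volume and won't create or delete it, which is fine since my plug-in already does that on the SAN:

    <pool type='iscsi'>
      <name>sf-vol-1</name>
      <source>
        <host name='192.168.56.10'/>
        <device path='iqn.2013-09.com.solidfire:vol-1'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>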

Thanks for the clarification! :)


On Mon, Sep 16, 2013 at 11:08 AM, Marcus Sorensen <sh...@gmail.com>wrote:

> It will still register the pool.  You still have a primary storage
> pool that you registered, whether it's local, cluster or zone wide.
> NFS is optionally zone wide as well (I'm assuming customers can launch
> your storage only cluster-wide if they choose for resource
> partitioning), but it registers the pool in Libvirt prior to use.
>
> Here's a better explanation of what I meant.  AttachVolumeCommand gets
> both pool and volume info. It first looks up the pool:
>
>     KVMStoragePool primary = _storagePoolMgr.getStoragePool(
>                     cmd.getPooltype(),
>                     cmd.getPoolUuid());
>
> Then it looks up the disk from that pool:
>
>     KVMPhysicalDisk disk = primary.getPhysicalDisk(cmd.getVolumePath());
>
> Most of the commands only pass volume info like this (getVolumePath
> generally means the uuid of the volume), since it looks up the pool
> separately. If you don't save the pool info in a map in your custom
> class when createStoragePool is called, then getStoragePool won't be
> able to find it. This is a simple thing in your implementation of
> createStoragePool, just thought I'd mention it because it is key. Just
> create a map of pool uuid and pool object and save them so they're
> available across all implementations of that class.
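
(Just to make sure I follow the bookkeeping you're describing, here is the kind of thing I'm picturing in my adaptor. The class name, the method signatures, and SolidFireStoragePool are all just illustrative; I haven't checked them against the actual StorageAdaptor interface yet.)

    public class SolidFireStorageAdaptor implements StorageAdaptor {
        // pool uuid -> pool object; libvirt doesn't track these pools for us,
        // so the adaptor has to remember them across calls
        private static final Map<String, KVMStoragePool> s_poolsByUuid =
                new ConcurrentHashMap<String, KVMStoragePool>();

        public KVMStoragePool createStoragePool(String uuid, String host, int port,
                String path, String userInfo, StoragePoolType type) {
            KVMStoragePool pool = s_poolsByUuid.get(uuid);

            if (pool == null) {
                // log the host in to the iSCSI target here (iscsiadm), then remember the pool
                pool = new SolidFireStoragePool(uuid, host, port, path, this);
                s_poolsByUuid.put(uuid, pool);
            }

            return pool; // safe to call repeatedly, as you said
        }

        public KVMStoragePool getStoragePool(String uuid) {
            // AttachVolumeCommand and friends look the pool up by uuid later on
            return s_poolsByUuid.get(uuid);
        }

        // ... getPhysicalDisk, deleteStoragePool, etc. omitted ...
    }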
>
> On Mon, Sep 16, 2013 at 10:43 AM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > Thanks, Marcus
> >
> > About this:
> >
> > "When the agent connects to the
> > management server, it registers all pools in the cluster with the
> > agent."
> >
> > So, my plug-in allows you to create zone-wide primary storage. This just
> > means that any cluster can use the SAN (the SAN was registered as primary
> > storage as opposed to a preallocated volume from the SAN). Once you
> create a
> > primary storage based on this plug-in, the storage framework will invoke
> the
> > plug-in, as needed, to create and delete volumes on the SAN. For example,
> > you could have one SolidFire primary storage (zone wide) and currently
> have
> > 100 volumes created on the SAN to support it.
> >
> > In this case, what will the management server be registering with the
> agent
> > in ModifyStoragePool? If only the storage pool (primary storage) is
> passed
> > in, that will be too vague as it does not contain information on what
> > volumes have been created for the agent.
> >
> > Thanks
> >
> >
> > On Sun, Sep 15, 2013 at 11:53 PM, Marcus Sorensen <sh...@gmail.com>
> > wrote:
> >>
> >> Yes, see my previous email from the 13th. You can create your own
> >> KVMStoragePool class, and StorageAdaptor class, like the libvirt ones
> >> have. The previous email outlines how to add your own StorageAdaptor
> >> alongside LibvirtStorageAdaptor to take over all of the calls
> >> (createStoragePool, getStoragePool, etc). As mentioned,
> >> getPhysicalDisk I believe will be the one you use to actually attach a
> >> lun.
> >>
> >> Ignore CreateStoragePoolCommand. When the agent connects to the
> >> management server, it registers all pools in the cluster with the
> >> agent. It will call ModifyStoragePoolCommand, passing your storage
> >> pool object (with all of the settings for your SAN). This in turn
> >> calls _storagePoolMgr.createStoragePool, which will route through
> >> KVMStoragePoolManager to your storage adapter that you've registered.
> >> The last argument to createStoragePool is the pool type, which is used
> >> to select a StorageAdaptor.
> >>
> >> From then on, most calls will only pass the volume info, and the
> >> volume will have the uuid of the storage pool. For this reason, your
> >> adaptor class needs to have a static Map variable that contains pool
> >> uuid and pool object. Whenever they call createStoragePool on your
> >> adaptor you add that pool to the map so that subsequent volume calls
> >> can look up the pool details for the volume by pool uuid. With the
> >> Libvirt adaptor, libvirt keeps track of that for you.
> >>
> >> When createStoragePool is called, you can log into the iscsi target
> >> (or make sure you are already logged in, as it can be called over
> >> again at any time), and when attach volume commands are fired off, you
> >> can attach individual LUNs that are asked for, or rescan (say that the
> >> plugin created a new ACL just prior to calling attach), or whatever is
> >> necessary.
> >>
> >> KVM is a bit more work, but you can do anything you want. Actually, I
> >> think you can call host scripts with Xen, but having the agent there
> >> that runs your own code gives you the flexibility to do whatever.
> >>
> >> On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski
> >> <mi...@solidfire.com> wrote:
> >> > I see right now LibvirtComputingResource.java has the following method
> >> > that
> >> > I might be able to leverage (it's probably not called at present and
> >> > would
> >> > need to be implemented in my case to discover my iSCSI target and log
> in
> >> > to
> >> > it):
> >> >
> >> >     protected Answer execute(CreateStoragePoolCommand cmd) {
> >> >
> >> >         return new Answer(cmd, true, "success");
> >> >
> >> >     }
> >> >
> >> > I would probably be able to call the KVMStorageManager to have it use
> my
> >> > StorageAdaptor to do what's necessary here.
> >> >
> >> >
> >> >
> >> >
> >> > On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski
> >> > <mi...@solidfire.com> wrote:
> >> >>
> >> >> Hey Marcus,
> >> >>
> >> >> When I implemented support in the XenServer and VMware plug-ins for
> >> >> "managed" storage, I started at the execute(AttachVolumeCommand)
> >> >> methods in
> >> >> both plug-ins.
> >> >>
> >> >> The code there was changed to check the AttachVolumeCommand instance
> >> >> for a
> >> >> "managed" property.
> >> >>
> >> >> If managed was false, the normal attach/detach logic would just run
> and
> >> >> the volume would be attached or detached.
> >> >>
> >> >> If managed was true, new 4.2 logic would run to create (let's talk
> >> >> XenServer here) a new SR and a new VDI inside of that SR (or to
> >> >> reattach an
> >> >> existing VDI inside an existing SR, if this wasn't the first time the
> >> >> volume
> >> >> was attached). If managed was true and we were detaching the volume,
> >> >> the SR
> >> >> would be detached from the XenServer hosts.
> >> >>
> >> >> I am currently walking through the execute(AttachVolumeCommand) in
> >> >> LibvirtComputingResource.java.
> >> >>
> >> >> I see how the XML is constructed to describe whether a disk should be
> >> >> attached or detached. I also see how we call in to get a
> StorageAdapter
> >> >> (and
> >> >> how I will likely need to write a new one of these).
> >> >>
> >> >> So, talking in XenServer terminology again, I was wondering if you
> >> >> think
> >> >> the approach we took in 4.2 with creating and deleting SRs in the
> >> >> execute(AttachVolumeCommand) method would work here or if there is
> some
> >> >> other way I should be looking at this for KVM?
> >> >>
> >> >> As it is right now for KVM, storage has to be set up ahead of time.
> >> >> Assuming this is the case, there probably isn't currently a place I
> can
> >> >> easily inject my logic to discover and log in to iSCSI targets. This
> is
> >> >> why
> >> >> we did it as needed in the execute(AttachVolumeCommand) for XenServer
> >> >> and
> >> >> VMware, but I wanted to see if you have an alternative way that might
> >> >> be
> >> >> better for KVM.
> >> >>
> >> >> One possible way to do this would be to modify VolumeManagerImpl (or
> >> >> whatever its equivalent is in 4.3) before it issues an attach-volume
> >> >> command
> >> >> to KVM to check to see if the volume is to be attached to managed
> >> >> storage.
> >> >> If it is, then (before calling the attach-volume command in KVM) call
> >> >> the
> >> >> create-storage-pool command in KVM (or whatever it might be called).
> >> >>
> >> >> Just wanted to get some of your thoughts on this.
> >> >>
> >> >> Thanks!
> >> >>
> >> >>
> >> >> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski
> >> >> <mi...@solidfire.com> wrote:
> >> >>>
> >> >>> Yeah, I remember that StorageProcessor stuff being put in the
> codebase
> >> >>> and having to merge my code into it in 4.2.
> >> >>>
> >> >>> Thanks for all the details, Marcus! :)
> >> >>>
> >> >>> I can start digging into what you were talking about now.
> >> >>>
> >> >>>
> >> >>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen
> >> >>> <sh...@gmail.com>
> >> >>> wrote:
> >> >>>>
> >> >>>> Looks like things might be slightly different now in 4.2, with
> >> >>>> KVMStorageProcessor.java in the mix. This looks more or less like
> some
> >> >>>> of the commands were ripped out verbatim from
> >> >>>> LibvirtComputingResource
> >> >>>> and placed here, so in general what I've said is probably still
> true,
> >> >>>> just that the location of things like AttachVolumeCommand might be
> >> >>>> different, in this file rather than LibvirtComputingResource.java.
> >> >>>>
> >> >>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen
> >> >>>> <sh...@gmail.com>
> >> >>>> wrote:
> >> >>>> > Ok, KVM will be close to that, of course, because only the
> >> >>>> > hypervisor
> >> >>>> > classes differ, the rest is all mgmt server. Creating a volume is
> >> >>>> > just
> >> >>>> > a db entry until it's deployed for the first time.
> >> >>>> > AttachVolumeCommand
> >> >>>> > on the agent side (LibvirtStorageAdaptor.java is analogous to
> >> >>>> > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> >> >>>> > StorageAdaptor) to log in the host to the target and then you
> have
> >> >>>> > a
> >> >>>> > block device.  Maybe libvirt will do that for you, but my quick
> >> >>>> > read
> >> >>>> > made it sound like the iscsi libvirt pool type is actually a
> pool,
> >> >>>> > not
> >> >>>> > a lun or volume, so you'll need to figure out if that works or if
> >> >>>> > you'll have to use iscsiadm commands.
> >> >>>> >
> >> >>>> > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> >> >>>> > doesn't really manage your pool the way you want), you're going
> to
> >> >>>> > have to create a version of KVMStoragePool class and a
> >> >>>> > StorageAdaptor
> >> >>>> > class (see LibvirtStoragePool.java and
> LibvirtStorageAdaptor.java),
> >> >>>> > implementing all of the methods, then in KVMStorageManager.java
> >> >>>> > there's a "_storageMapper" map. This is used to select the
> correct
> >> >>>> > adaptor, you can see in this file that every call first pulls the
> >> >>>> > correct adaptor out of this map via getStorageAdaptor. So you can
> >> >>>> > see
> >> >>>> > a comment in this file that says "add other storage adaptors
> here",
> >> >>>> > where it puts to this map, this is where you'd register your
> >> >>>> > adaptor.
> >> >>>> >
> >> >>>> > So, referencing StorageAdaptor.java, createStoragePool accepts
> all
> >> >>>> > of
> >> >>>> > the pool data (host, port, name, path) which would be used to log
> >> >>>> > the
> >> >>>> > host into the initiator. I *believe* the method getPhysicalDisk
> >> >>>> > will
> >> >>>> > need to do the work of attaching the lun.  AttachVolumeCommand
> >> >>>> > calls
> >> >>>> > this and then creates the XML diskdef and attaches it to the VM.
> >> >>>> > Now,
> >> >>>> > one thing you need to know is that createStoragePool is called
> >> >>>> > often,
> >> >>>> > sometimes just to make sure the pool is there. You may want to
> >> >>>> > create
> >> >>>> > a map in your adaptor class and keep track of pools that have
> been
> >> >>>> > created, LibvirtStorageAdaptor doesn't have to do this because it
> >> >>>> > asks
> >> >>>> > libvirt about which storage pools exist. There are also calls to
> >> >>>> > refresh the pool stats, and all of the other calls can be seen in
> >> >>>> > the
> >> >>>> > StorageAdaptor as well. There's a createPhysical disk, clone,
> etc,
> >> >>>> > but
> >> >>>> > it's probably a hold-over from 4.1, as I have the vague idea that
> >> >>>> > volumes are created on the mgmt server via the plugin now, so
> >> >>>> > whatever
> >> >>>> > doesn't apply can just be stubbed out (or optionally
> >> >>>> > extended/reimplemented here, if you don't mind the hosts talking
> to
> >> >>>> > the san api).
> >> >>>> >
> >> >>>> > There is a difference between attaching new volumes and
> launching a
> >> >>>> > VM
> >> >>>> > with existing volumes.  In the latter case, the VM definition
> that
> >> >>>> > was
> >> >>>> > passed to the KVM agent includes the disks, (StartCommand).
> >> >>>> >
> >> >>>> > I'd be interested in how your pool is defined for Xen, I imagine
> it
> >> >>>> > would need to be kept the same. Is it just a definition to the
> SAN
> >> >>>> > (ip address or some such, port number) and perhaps a volume pool
> >> >>>> > name?
> >> >>>> >
> >> >>>> >> If there is a way for me to update the ACL list on the SAN to
> have
> >> >>>> >> only a
> >> >>>> >> single KVM host have access to the volume, that would be ideal.
> >> >>>> >
> >> >>>> > That depends on your SAN API.  I was under the impression that
> the
> >> >>>> > storage plugin framework allowed for acls, or for you to do
> >> >>>> > whatever
> >> >>>> > you want for create/attach/delete/snapshot, etc. You'd just call
> >> >>>> > your
> >> >>>> > SAN API with the host info for the ACLs prior to when the disk is
> >> >>>> > attached (or the VM is started).  I'd have to look more at the
> >> >>>> > framework to know the details, in 4.1 I would do this in
> >> >>>> > getPhysicalDisk just prior to connecting up the LUN.
> >> >>>> >
> >> >>>> >
> >> >>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> >> >>>> > <mi...@solidfire.com> wrote:
> >> >>>> >> OK, yeah, the ACL part will be interesting. That is a bit
> >> >>>> >> different
> >> >>>> >> from how
> >> >>>> >> it works with XenServer and VMware.
> >> >>>> >>
> >> >>>> >> Just to give you an idea how it works in 4.2 with XenServer:
> >> >>>> >>
> >> >>>> >> * The user creates a CS volume (this is just recorded in the
> >> >>>> >> cloud.volumes
> >> >>>> >> table).
> >> >>>> >>
> >> >>>> >> * The user attaches the volume as a disk to a VM for the first
> >> >>>> >> time
> >> >>>> >> (if the
> >> >>>> >> storage allocator picks the SolidFire plug-in, the storage
> >> >>>> >> framework
> >> >>>> >> invokes
> >> >>>> >> a method on the plug-in that creates a volume on the SAN...info
> >> >>>> >> like
> >> >>>> >> the IQN
> >> >>>> >> of the SAN volume is recorded in the DB).
> >> >>>> >>
> >> >>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is executed.
> >> >>>> >> It
> >> >>>> >> determines based on a flag passed in that the storage in
> question
> >> >>>> >> is
> >> >>>> >> "CloudStack-managed" storage (as opposed to "traditional"
> >> >>>> >> preallocated
> >> >>>> >> storage). This tells it to discover the iSCSI target. Once
> >> >>>> >> discovered
> >> >>>> >> it
> >> >>>> >> determines if the iSCSI target already contains a storage
> >> >>>> >> repository
> >> >>>> >> (it
> >> >>>> >> would if this were a re-attach situation). If it does contain an
> >> >>>> >> SR
> >> >>>> >> already,
> >> >>>> >> then there should already be one VDI, as well. If there is no
> SR,
> >> >>>> >> an
> >> >>>> >> SR is
> >> >>>> >> created and a single VDI is created within it (that takes up
> about
> >> >>>> >> as
> >> >>>> >> much
> >> >>>> >> space as was requested for the CloudStack volume).
> >> >>>> >>
> >> >>>> >> * The normal attach-volume logic continues (it depends on the
> >> >>>> >> existence of
> >> >>>> >> an SR and a VDI).
> >> >>>> >>
> >> >>>> >> The VMware case is essentially the same (mainly just substitute
> >> >>>> >> datastore
> >> >>>> >> for SR and VMDK for VDI).
> >> >>>> >>
> >> >>>> >> In both cases, all hosts in the cluster have discovered the
> iSCSI
> >> >>>> >> target,
> >> >>>> >> but only the host that is currently running the VM that is using
> >> >>>> >> the
> >> >>>> >> VDI (or
> >> >>>> >> VMDK) is actually using the disk.
> >> >>>> >>
> >> >>>> >> Live Migration should be OK because the hypervisors communicate
> >> >>>> >> with
> >> >>>> >> whatever metadata they have on the SR (or datastore).
> >> >>>> >>
> >> >>>> >> I see what you're saying with KVM, though.
> >> >>>> >>
> >> >>>> >> In that case, the hosts are clustered only in CloudStack's eyes.
> >> >>>> >> CS
> >> >>>> >> controls
> >> >>>> >> Live Migration. You don't really need a clustered filesystem on
> >> >>>> >> the
> >> >>>> >> LUN. The
> >> >>>> >> LUN could be handed over raw to the VM using it.
> >> >>>> >>
> >> >>>> >> If there is a way for me to update the ACL list on the SAN to
> have
> >> >>>> >> only a
> >> >>>> >> single KVM host have access to the volume, that would be ideal.
> >> >>>> >>
> >> >>>> >> Also, I agree I'll need to use iscsiadm to discover and log in
> to
> >> >>>> >> the
> >> >>>> >> iSCSI
> >> >>>> >> target. I'll also need to take the resultant new device and pass
> >> >>>> >> it
> >> >>>> >> into the
> >> >>>> >> VM.
> >> >>>> >>
> >> >>>> >> Does this sound reasonable? Please call me out on anything I
> seem
> >> >>>> >> incorrect
> >> >>>> >> about. :)
> >> >>>> >>
> >> >>>> >> Thanks for all the thought on this, Marcus!
> >> >>>> >>
> >> >>>> >>
> >> >>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen
> >> >>>> >> <sh...@gmail.com>
> >> >>>> >> wrote:
> >> >>>> >>>
> >> >>>> >>> Perfect. You'll have a domain def ( the VM), a disk def, and
> the
> >> >>>> >>> attach
> >> >>>> >>> the disk def to the vm. You may need to do your own
> >> >>>> >>> StorageAdaptor
> >> >>>> >>> and run
> >> >>>> >>> iscsiadm commands to accomplish that, depending on how the
> >> >>>> >>> libvirt
> >> >>>> >>> iscsi
> >> >>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't how
> it
> >> >>>> >>> works on
> >> >>>> >>> xen at the moment, nor is it ideal.
> >> >>>> >>>
> >> >>>> >>> Your plugin will handle acls as far as which host can see which
> >> >>>> >>> luns
> >> >>>> >>> as
> >> >>>> >>> well, I remember discussing that months ago, so that a disk
> won't
> >> >>>> >>> be
> >> >>>> >>> connected until the hypervisor has exclusive access, so it will
> >> >>>> >>> be
> >> >>>> >>> safe and
> >> >>>> >>> fence the disk from rogue nodes that cloudstack loses
> >> >>>> >>> connectivity
> >> >>>> >>> with. It
> >> >>>> >>> should revoke access to everything but the target host...
> Except
> >> >>>> >>> for
> >> >>>> >>> during
> >> >>>> >>> migration but we can discuss that later, there's a migration
> prep
> >> >>>> >>> process
> >> >>>> >>> where the new host can be added to the acls, and the old host
> can
> >> >>>> >>> be
> >> >>>> >>> removed
> >> >>>> >>> post migration.
> >> >>>> >>>
> >> >>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski"
> >> >>>> >>> <mi...@solidfire.com>
> >> >>>> >>> wrote:
> >> >>>> >>>>
> >> >>>> >>>> Yeah, that would be ideal.
> >> >>>> >>>>
> >> >>>> >>>> So, I would still need to discover the iSCSI target, log in to
> >> >>>> >>>> it,
> >> >>>> >>>> then
> >> >>>> >>>> figure out what /dev/sdX was created as a result (and leave it
> >> >>>> >>>> as
> >> >>>> >>>> is - do
> >> >>>> >>>> not format it with any file system...clustered or not). I
> would
> >> >>>> >>>> pass that
> >> >>>> >>>> device into the VM.
> >> >>>> >>>>
> >> >>>> >>>> Kind of accurate?
> >> >>>> >>>>
> >> >>>> >>>>
> >> >>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
> >> >>>> >>>> <sh...@gmail.com>
> >> >>>> >>>> wrote:
> >> >>>> >>>>>
> >> >>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
> >> >>>> >>>>> There are
> >> >>>> >>>>> ones that work for block devices rather than files. You can
> >> >>>> >>>>> piggy
> >> >>>> >>>>> back off
> >> >>>> >>>>> of the existing disk definitions and attach it to the vm as a
> >> >>>> >>>>> block device.
> >> >>>> >>>>> The definition is an XML string per libvirt XML format. You
> may
> >> >>>> >>>>> want to use
> >> >>>> >>>>> an alternate path to the disk rather than just /dev/sdx like
> I
> >> >>>> >>>>> mentioned,
> >> >>>> >>>>> there are by-id paths to the block devices, as well as other
> >> >>>> >>>>> ones
> >> >>>> >>>>> that will
> >> >>>> >>>>> be consistent and easier for management, not sure how
> familiar
> >> >>>> >>>>> you
> >> >>>> >>>>> are with
> >> >>>> >>>>> device naming on Linux.
> >> >>>> >>>>>
> >> >>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
> >> >>>> >>>>> <sh...@gmail.com>
> >> >>>> >>>>> wrote:
> >> >>>> >>>>>>
> >> >>>> >>>>>> No, as that would rely on virtualized network/iscsi
> initiator
> >> >>>> >>>>>> inside
> >> >>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun
> on
> >> >>>> >>>>>> hypervisor) as
> >> >>>> >>>>>> a disk to the VM, rather than attaching some image file that
> >> >>>> >>>>>> resides on a
> >> >>>> >>>>>> filesystem, mounted on the host, living on a target.
> >> >>>> >>>>>>
> >> >>>> >>>>>> Actually, if you plan on the storage supporting live
> migration
> >> >>>> >>>>>> I
> >> >>>> >>>>>> think
> >> >>>> >>>>>> this is the only way. You can't put a filesystem on it and
> >> >>>> >>>>>> mount
> >> >>>> >>>>>> it in two
> >> >>>> >>>>>> places to facilitate migration unless its a clustered
> >> >>>> >>>>>> filesystem,
> >> >>>> >>>>>> in which
> >> >>>> >>>>>> case you're back to shared mount point.
> >> >>>> >>>>>>
> >> >>>> >>>>>> As far as I'm aware, the xenserver SR style is basically LVM
> >> >>>> >>>>>> with
> >> >>>> >>>>>> a xen
> >> >>>> >>>>>> specific cluster management, a custom CLVM. They don't use a
> >> >>>> >>>>>> filesystem
> >> >>>> >>>>>> either.
> >> >>>> >>>>>>
> >> >>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> >> >>>> >>>>>> <mi...@solidfire.com> wrote:
> >> >>>> >>>>>>>
> >> >>>> >>>>>>> When you say, "wire up the lun directly to the vm," do you
> >> >>>> >>>>>>> mean
> >> >>>> >>>>>>> circumventing the hypervisor? I didn't think we could do
> that
> >> >>>> >>>>>>> in
> >> >>>> >>>>>>> CS.
> >> >>>> >>>>>>> OpenStack, on the other hand, always circumvents the
> >> >>>> >>>>>>> hypervisor,
> >> >>>> >>>>>>> as far as I
> >> >>>> >>>>>>> know.
> >> >>>> >>>>>>>
> >> >>>> >>>>>>>
> >> >>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
> >> >>>> >>>>>>> <sh...@gmail.com>
> >> >>>> >>>>>>> wrote:
> >> >>>> >>>>>>>>
> >> >>>> >>>>>>>> Better to wire up the lun directly to the vm unless there
> is
> >> >>>> >>>>>>>> a
> >> >>>> >>>>>>>> good
> >> >>>> >>>>>>>> reason not to.
> >> >>>> >>>>>>>>
> >> >>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen"
> >> >>>> >>>>>>>> <sh...@gmail.com>
> >> >>>> >>>>>>>> wrote:
> >> >>>> >>>>>>>>>
> >> >>>> >>>>>>>>> You could do that, but as mentioned I think its a mistake
> >> >>>> >>>>>>>>> to
> >> >>>> >>>>>>>>> go to
> >> >>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to
> luns
> >> >>>> >>>>>>>>> and then putting
> >> >>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2
> >> >>>> >>>>>>>>> or
> >> >>>> >>>>>>>>> even RAW disk
> >> >>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops along
> >> >>>> >>>>>>>>> the
> >> >>>> >>>>>>>>> way, and have
> >> >>>> >>>>>>>>> more overhead with the filesystem and its journaling,
> etc.
> >> >>>> >>>>>>>>>
> >> >>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> >> >>>> >>>>>>>>> <mi...@solidfire.com> wrote:
> >> >>>> >>>>>>>>>>
> >> >>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM
> with
> >> >>>> >>>>>>>>>> CS.
> >> >>>> >>>>>>>>>>
> >> >>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today is
> by
> >> >>>> >>>>>>>>>> selecting SharedMountPoint and specifying the location
> of
> >> >>>> >>>>>>>>>> the
> >> >>>> >>>>>>>>>> share.
> >> >>>> >>>>>>>>>>
> >> >>>> >>>>>>>>>> They can set up their share using Open iSCSI by
> >> >>>> >>>>>>>>>> discovering
> >> >>>> >>>>>>>>>> their
> >> >>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it
> somewhere
> >> >>>> >>>>>>>>>> on
> >> >>>> >>>>>>>>>> their file
> >> >>>> >>>>>>>>>> system.
> >> >>>> >>>>>>>>>>
> >> >>>> >>>>>>>>>> Would it make sense for me to just do that discovery,
> >> >>>> >>>>>>>>>> logging
> >> >>>> >>>>>>>>>> in,
> >> >>>> >>>>>>>>>> and mounting behind the scenes for them and letting the
> >> >>>> >>>>>>>>>> current code manage
> >> >>>> >>>>>>>>>> the rest as it currently does?
> >> >>>> >>>>>>>>>>
> >> >>>> >>>>>>>>>>
> >> >>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> >> >>>> >>>>>>>>>> <sh...@gmail.com> wrote:
> >> >>>> >>>>>>>>>>>
> >> >>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
> >> >>>> >>>>>>>>>>> catch up
> >> >>>> >>>>>>>>>>> on the work done in KVM, but this is basically just
> disk
> >> >>>> >>>>>>>>>>> snapshots + memory
> >> >>>> >>>>>>>>>>> dump. I still think disk snapshots would preferably be
> >> >>>> >>>>>>>>>>> handled by the SAN,
> >> >>>> >>>>>>>>>>> and then memory dumps can go to secondary storage or
> >> >>>> >>>>>>>>>>> something else. This is
> >> >>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will want
> to
> >> >>>> >>>>>>>>>>> see how others are
> >> >>>> >>>>>>>>>>> planning theirs.
> >> >>>> >>>>>>>>>>>
> >> >>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen"
> >> >>>> >>>>>>>>>>> <sh...@gmail.com>
> >> >>>> >>>>>>>>>>> wrote:
> >> >>>> >>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
> >> >>>> >>>>>>>>>>>> style
> >> >>>> >>>>>>>>>>>> on an
> >> >>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> >> >>>> >>>>>>>>>>>> format.
> >> >>>> >>>>>>>>>>>> Otherwise you're
> >> >>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it,
> creating
> >> >>>> >>>>>>>>>>>> a
> >> >>>> >>>>>>>>>>>> QCOW2 disk image,
> >> >>>> >>>>>>>>>>>> and that seems unnecessary and a performance killer.
> >> >>>> >>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to
> the
> >> >>>> >>>>>>>>>>>> VM, and
> >> >>>> >>>>>>>>>>>> handling snapshots on the SAN side via the storage
> >> >>>> >>>>>>>>>>>> plugin
> >> >>>> >>>>>>>>>>>> is best. My
> >> >>>> >>>>>>>>>>>> impression from the storage plugin refactor was that
> >> >>>> >>>>>>>>>>>> there
> >> >>>> >>>>>>>>>>>> was a snapshot
> >> >>>> >>>>>>>>>>>> service that would allow the SAN to handle snapshots.
> >> >>>> >>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen"
> >> >>>> >>>>>>>>>>>> <sh...@gmail.com>
> >> >>>> >>>>>>>>>>>> wrote:
> >> >>>> >>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN
> back
> >> >>>> >>>>>>>>>>>>> end, if
> >> >>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
> >> >>>> >>>>>>>>>>>>> call
> >> >>>> >>>>>>>>>>>>> your plugin for
> >> >>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic.
> As
> >> >>>> >>>>>>>>>>>>> far as space, that
> >> >>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With ours,
> we
> >> >>>> >>>>>>>>>>>>> carve out luns from a
> >> >>>> >>>>>>>>>>>>> pool, and the snapshot space comes from the pool and
> is
> >> >>>> >>>>>>>>>>>>> independent of the
> >> >>>> >>>>>>>>>>>>> LUN size the host sees.
> >> >>>> >>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> >> >>>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> Hey Marcus,
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
> >> >>>> >>>>>>>>>>>>>> won't
> >> >>>> >>>>>>>>>>>>>> work
> >> >>>> >>>>>>>>>>>>>> when you take into consideration hypervisor
> snapshots?
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot,
> the
> >> >>>> >>>>>>>>>>>>>> VDI for
> >> >>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage
> repository
> >> >>>> >>>>>>>>>>>>>> as
> >> >>>> >>>>>>>>>>>>>> the volume is on.
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for
> >> >>>> >>>>>>>>>>>>>> XenServer
> >> >>>> >>>>>>>>>>>>>> and
> >> >>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> >> >>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd
> >> >>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what the
> user
> >> >>>> >>>>>>>>>>>>>> requested for the
> >> >>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
> >> >>>> >>>>>>>>>>>>>> thinly
> >> >>>> >>>>>>>>>>>>>> provisions volumes,
> >> >>>> >>>>>>>>>>>>>> so the space is not actually used unless it needs to
> >> >>>> >>>>>>>>>>>>>> be).
> >> >>>> >>>>>>>>>>>>>> The CloudStack
> >> >>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN volume
> >> >>>> >>>>>>>>>>>>>> until
> >> >>>> >>>>>>>>>>>>>> a hypervisor
> >> >>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside
> on
> >> >>>> >>>>>>>>>>>>>> the
> >> >>>> >>>>>>>>>>>>>> SAN volume.
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no
> >> >>>> >>>>>>>>>>>>>> creation
> >> >>>> >>>>>>>>>>>>>> of
> >> >>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which,
> even
> >> >>>> >>>>>>>>>>>>>> if
> >> >>>> >>>>>>>>>>>>>> there were support
> >> >>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
> >> >>>> >>>>>>>>>>>>>> iSCSI
> >> >>>> >>>>>>>>>>>>>> target), then I
> >> >>>> >>>>>>>>>>>>>> don't see how using this model will work.
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current way
> this
> >> >>>> >>>>>>>>>>>>>> works
> >> >>>> >>>>>>>>>>>>>> with DIR?
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> What do you think?
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> Thanks
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >> >>>> >>>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI
> access
> >> >>>> >>>>>>>>>>>>>>> today.
> >> >>>> >>>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I might
> as
> >> >>>> >>>>>>>>>>>>>>> well
> >> >>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> >> >>>> >>>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> >> >>>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >> >>>> >>>>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe
> >> >>>> >>>>>>>>>>>>>>>> it
> >> >>>> >>>>>>>>>>>>>>>> just
> >> >>>> >>>>>>>>>>>>>>>> acts like a
> >> >>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that.
> The
> >> >>>> >>>>>>>>>>>>>>>> end-user
> >> >>>> >>>>>>>>>>>>>>>> is
> >> >>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that all
> KVM
> >> >>>> >>>>>>>>>>>>>>>> hosts can
> >> >>>> >>>>>>>>>>>>>>>> access,
> >> >>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing
> the
> >> >>>> >>>>>>>>>>>>>>>> storage.
> >> >>>> >>>>>>>>>>>>>>>> It could
> >> >>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
> >> >>>> >>>>>>>>>>>>>>>> filesystem,
> >> >>>> >>>>>>>>>>>>>>>> cloudstack just
> >> >>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM
> >> >>>> >>>>>>>>>>>>>>>> images.
> >> >>>> >>>>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> >> >>>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >> >>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at
> the
> >> >>>> >>>>>>>>>>>>>>>> > same
> >> >>>> >>>>>>>>>>>>>>>> > time.
> >> >>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
> >> >>>> >>>>>>>>>>>>>>>> >
> >> >>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> >> >>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> >> >>>> >>>>>>>>>>>>>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> >> >>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
> >> >>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
> >> >>>> >>>>>>>>>>>>>>>> >> default              active     yes
> >> >>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
> >> >>>> >>>>>>>>>>>>>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
> >> >>>> >>>>>>>>>>>>>>>> >>> based on
> >> >>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have
> one
> >> >>>> >>>>>>>>>>>>>>>> >>> LUN, so
> >> >>>> >>>>>>>>>>>>>>>> >>> there would only
> >> >>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> >> >>>> >>>>>>>>>>>>>>>> >>> (libvirt)
> >> >>>> >>>>>>>>>>>>>>>> >>> storage pool.
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
> >> >>>> >>>>>>>>>>>>>>>> >>> iSCSI
> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
> >> >>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
> >> >>>> >>>>>>>>>>>>>>>> >>> does
> >> >>>> >>>>>>>>>>>>>>>> >>> not support
> >> >>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see
> >> >>>> >>>>>>>>>>>>>>>> >>> if
> >> >>>> >>>>>>>>>>>>>>>> >>> libvirt
> >> >>>> >>>>>>>>>>>>>>>> >>> supports
> >> >>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you
> mentioned,
> >> >>>> >>>>>>>>>>>>>>>> >>> since
> >> >>>> >>>>>>>>>>>>>>>> >>> each one of its
> >> >>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> >> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs).
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike
> Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> >> >>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> >> >>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>         @Override
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>         }
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>     }
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
> >> >>>> >>>>>>>>>>>>>>>> >>>> currently
> >> >>>> >>>>>>>>>>>>>>>> >>>> being
> >> >>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
> >> >>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
> >> >>>> >>>>>>>>>>>>>>>> >>>> someone
> >> >>>> >>>>>>>>>>>>>>>> >>>> selects the
> >> >>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with
> iSCSI,
> >> >>>> >>>>>>>>>>>>>>>> >>>> is
> >> >>>> >>>>>>>>>>>>>>>> >>>> that
> >> >>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
> >> >>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>> Thanks!
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
> >> >>>> >>>>>>>>>>>>>>>> >>>> Sorensen
> >> >>>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> >> >>>> >>>>>>>>>>>>>>>> >>>> wrote:
> >> >>>> >>>>>>>>>>>>>>>> >>>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
> >> >>>> >>>>>>>>>>>>>>>> >>>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>>
> http://libvirt.org/storage.html#StorageBackendISCSI
> >> >>>> >>>>>>>>>>>>>>>> >>>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
> >> >>>> >>>>>>>>>>>>>>>> >>>>> server, and
> >> >>>> >>>>>>>>>>>>>>>> >>>>> cannot be
> >> >>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
> >> >>>> >>>>>>>>>>>>>>>> >>>>> believe
> >> >>>> >>>>>>>>>>>>>>>> >>>>> your
> >> >>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
> >> >>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of
> logging
> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
> >> >>>> >>>>>>>>>>>>>>>> >>>>> and
> >> >>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
> >> >>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work
> >> >>>> >>>>>>>>>>>>>>>> >>>>> in
> >> >>>> >>>>>>>>>>>>>>>> >>>>> the Xen
> >> >>>> >>>>>>>>>>>>>>>> >>>>> stuff).
> >> >>>> >>>>>>>>>>>>>>>> >>>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
> >> >>>> >>>>>>>>>>>>>>>> >>>>> provides
> >> >>>> >>>>>>>>>>>>>>>> >>>>> a 1:1
> >> >>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
> >> >>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi
> device
> >> >>>> >>>>>>>>>>>>>>>> >>>>> as
> >> >>>> >>>>>>>>>>>>>>>> >>>>> a
> >> >>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
> >> >>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit
> more
> >> >>>> >>>>>>>>>>>>>>>> >>>>> about
> >> >>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write
> your
> >> >>>> >>>>>>>>>>>>>>>> >>>>> own
> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
> >> >>>> >>>>>>>>>>>>>>>> >>>>> rather than changing
> >> >>>> >>>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> We
> >> >>>> >>>>>>>>>>>>>>>> >>>>> can cross that
> >> >>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
> >> >>>> >>>>>>>>>>>>>>>> >>>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
> >> >>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/
> >> >>>> >>>>>>>>>>>>>>>> >>>>> Normally,
> >> >>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
> >> >>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made
> to
> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
> >> >>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
> >> >>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see
> >> >>>> >>>>>>>>>>>>>>>> >>>>> how
> >> >>>> >>>>>>>>>>>>>>>> >>>>> that
> >> >>>> >>>>>>>>>>>>>>>> >>>>> is done for
> >> >>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
> >> >>>> >>>>>>>>>>>>>>>> >>>>> java
> >> >>>> >>>>>>>>>>>>>>>> >>>>> code
> >> >>>> >>>>>>>>>>>>>>>> >>>>> to see if you
> >> >>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register
> iscsi
> >> >>>> >>>>>>>>>>>>>>>> >>>>> storage
> >> >>>> >>>>>>>>>>>>>>>> >>>>> pools before you
> >> >>>> >>>>>>>>>>>>>>>> >>>>> get started.
> >> >>>> >>>>>>>>>>>>>>>> >>>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
> >> >>>> >>>>>>>>>>>>>>>> >>>>> Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > more,
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > but
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > supports
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > targets,
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > right?
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of
> the
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> classes
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> last
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> iscsi
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> packages
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> for
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> login.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> and
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike
> Tutkowski"
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> release
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> 1:1
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> would
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> could
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> might
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how
> I
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have
> to
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it
> for
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Inc.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> cloud™
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> --
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire
> Inc.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the
> cloud™
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > --
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire
> Inc.
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> >> >>>> >>>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the
> cloud™
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>>
> >> >>>> >>>>>>>>>>>>>>>> >>>> --
> >> >>>> >>>>>>>>>>>>>>>> >>>> Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> >> >>>> >>>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>>>>>>>>>>> >>>> o: 303.746.7302
> >> >>>> >>>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>>
> >> >>>> >>>>>>>>>>>>>>>> >>> --
> >> >>>> >>>>>>>>>>>>>>>> >>> Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> >> >>>> >>>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>>>>>>>>>>> >>> o: 303.746.7302
> >> >>>> >>>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> >> >>>> >>>>>>>>>>>>>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >>
> >> >>>> >>>>>>>>>>>>>>>> >> --
> >> >>>> >>>>>>>>>>>>>>>> >> Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >> >>>> >>>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>>>>>>>>>>> >> o: 303.746.7302
> >> >>>> >>>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> >> >>>> >>>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>> --
> >> >>>> >>>>>>>>>>>>>>> Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >> >>>> >>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>>>>>>>>>> o: 303.746.7302
> >> >>>> >>>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>>
> >> >>>> >>>>>>>>>>>>>> --
> >> >>>> >>>>>>>>>>>>>> Mike Tutkowski
> >> >>>> >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >> >>>> >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>>>>>>>>> o: 303.746.7302
> >> >>>> >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >> >>>> >>>>>>>>>>
> >> >>>> >>>>>>>>>>
> >> >>>> >>>>>>>>>>
> >> >>>> >>>>>>>>>>
> >> >>>> >>>>>>>>>> --
> >> >>>> >>>>>>>>>> Mike Tutkowski
> >> >>>> >>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >> >>>> >>>>>>>>>> e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>>>>> o: 303.746.7302
> >> >>>> >>>>>>>>>> Advancing the way the world uses the cloud™
> >> >>>> >>>>>>>
> >> >>>> >>>>>>>
> >> >>>> >>>>>>>
> >> >>>> >>>>>>>
> >> >>>> >>>>>>> --
> >> >>>> >>>>>>> Mike Tutkowski
> >> >>>> >>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >> >>>> >>>>>>> e: mike.tutkowski@solidfire.com
> >> >>>> >>>>>>> o: 303.746.7302
> >> >>>> >>>>>>> Advancing the way the world uses the cloud™
> >> >>>> >>>>
> >> >>>> >>>>
> >> >>>> >>>>
> >> >>>> >>>>
> >> >>>> >>>> --
> >> >>>> >>>> Mike Tutkowski
> >> >>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> >> >>>> >>>> e: mike.tutkowski@solidfire.com
> >> >>>> >>>> o: 303.746.7302
> >> >>>> >>>> Advancing the way the world uses the cloud™
> >> >>>> >>
> >> >>>> >>
> >> >>>> >>
> >> >>>> >>
> >> >>>> >> --
> >> >>>> >> Mike Tutkowski
> >> >>>> >> Senior CloudStack Developer, SolidFire Inc.
> >> >>>> >> e: mike.tutkowski@solidfire.com
> >> >>>> >> o: 303.746.7302
> >> >>>> >> Advancing the way the world uses the cloud™
> >> >>>
> >> >>>
> >> >>>
> >> >>>
> >> >>> --
> >> >>> Mike Tutkowski
> >> >>> Senior CloudStack Developer, SolidFire Inc.
> >> >>> e: mike.tutkowski@solidfire.com
> >> >>> o: 303.746.7302
> >> >>> Advancing the way the world uses the cloud™
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Mike Tutkowski
> >> >> Senior CloudStack Developer, SolidFire Inc.
> >> >> e: mike.tutkowski@solidfire.com
> >> >> o: 303.746.7302
> >> >> Advancing the way the world uses the cloud™
> >> >
> >> >
> >> >
> >> >
> >> > --
> >> > Mike Tutkowski
> >> > Senior CloudStack Developer, SolidFire Inc.
> >> > e: mike.tutkowski@solidfire.com
> >> > o: 303.746.7302
> >> > Advancing the way the world uses the cloud™
> >
> >
> >
> >
> > --
> > Mike Tutkowski
> > Senior CloudStack Developer, SolidFire Inc.
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the cloud™
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
It will still register the pool.  You still have a primary storage
pool that you registered, whether it's local, cluster or zone wide.
NFS is optionally zone wide as well (I'm assuming customers can choose
to deploy your storage cluster-wide only, for resource partitioning),
but the agent still registers the pool in Libvirt prior to use.

Here's a better explanation of what I meant.  AttachVolumeCommand gets
both pool and volume info. It first looks up the pool:

    KVMStoragePool primary = _storagePoolMgr.getStoragePool(
                    cmd.getPooltype(),
                    cmd.getPoolUuid());

Then it looks up the disk from that pool:

    KVMPhysicalDisk disk = primary.getPhysicalDisk(cmd.getVolumePath());

Most of the commands only pass volume info like this (getVolumePath
generally means the uuid of the volume) and look the pool up
separately. If you don't save the pool info in a map in your custom
class when createStoragePool is called, then getStoragePool won't be
able to find it. This is a simple thing to handle in your
implementation of createStoragePool, just thought I'd mention it
because it is key: create a map of pool uuid to pool object and save
entries in it so they're available across all the methods of that
class.
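
For example, a rough sketch of that idea (illustrative only -- the
real method signatures are in StorageAdaptor.java, KVMStoragePool is
the agent-side pool interface, and MyStoragePool/MyStorageAdaptor are
just placeholders for your own implementations):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class MyStorageAdaptor /* implements StorageAdaptor */ {
        // pool uuid -> pool object, shared by every call into this adaptor
        private static final Map<String, KVMStoragePool> _mapPools =
                new ConcurrentHashMap<String, KVMStoragePool>();

        public KVMStoragePool createStoragePool(String uuid, String host,
                int port, String path) {
            KVMStoragePool pool = _mapPools.get(uuid);
            if (pool == null) {
                // log in to the iSCSI target here (iscsiadm) if needed,
                // then build the pool object and remember it for later
                pool = new MyStoragePool(uuid, host, port, path);
                _mapPools.put(uuid, pool);
            }
            return pool;
        }

        public KVMStoragePool getStoragePool(String uuid) {
            // volume commands look the pool up by uuid, so this only
            // works if createStoragePool stored the pool in _mapPools
            return _mapPools.get(uuid);
        }
    }

A ConcurrentHashMap rather than a plain HashMap is just a defensive
choice in case the agent ends up calling into the adaptor from more
than one thread.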

On Mon, Sep 16, 2013 at 10:43 AM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> Thanks, Marcus
>
> About this:
>
> "When the agent connects to the
> management server, it registers all pools in the cluster with the
> agent."
>
> So, my plug-in allows you to create zone-wide primary storage. This just
> means that any cluster can use the SAN (the SAN was registered as primary
> storage as opposed to a preallocated volume from the SAN). Once you create a
> primary storage based on this plug-in, the storage framework will invoke the
> plug-in, as needed, to create and delete volumes on the SAN. For example,
> you could have one SolidFire primary storage (zone wide) and currently have
> 100 volumes created on the SAN to support it.
>
> In this case, what will the management server be registering with the agent
> in ModifyStoragePool? If only the storage pool (primary storage) is passed
> in, that will be too vague as it does not contain information on what
> volumes have been created for the agent.
>
> Thanks
>
>
> On Sun, Sep 15, 2013 at 11:53 PM, Marcus Sorensen <sh...@gmail.com>
> wrote:
>>
>> Yes, see my previous email from the 13th. You can create your own
>> KVMStoragePool class, and StorageAdaptor class, like the libvirt ones
>> have. The previous email outlines how to add your own StorageAdaptor
>> alongside LibvirtStorageAdaptor to take over all of the calls
>> (createStoragePool, getStoragePool, etc). As mentioned,
>> getPhysicalDisk I believe will be the one you use to actually attach a
>> lun.
>>
>> Ignore CreateStoragePoolCommand. When the agent connects to the
>> management server, it registers all pools in the cluster with the
>> agent. It will call ModifyStoragePoolCommand, passing your storage
>> pool object (with all of the settings for your SAN). This in turn
>> calls _storagePoolMgr.createStoragePool, which will route through
>> KVMStoragePoolManager to your storage adapter that you've registered.
>> The last argument to createStoragePool is the pool type, which is used
>> to select a StorageAdaptor.
>>
>> From then on, most calls will only pass the volume info, and the
>> volume will have the uuid of the storage pool. For this reason, your
>> adaptor class needs to have a static Map variable that contains pool
>> uuid and pool object. Whenever they call createStoragePool on your
>> adaptor you add that pool to the map so that subsequent volume calls
>> can look up the pool details for the volume by pool uuid. With the
>> Libvirt adaptor, libvirt keeps track of that for you.
>>
>> When createStoragePool is called, you can log into the iscsi target
>> (or make sure you are already logged in, as it can be called over
>> again at any time), and when attach volume commands are fired off, you
>> can attach individual LUNs that are asked for, or rescan (say that the
>> plugin created a new ACL just prior to calling attach), or whatever is
>> necessary.
>>
>> KVM is a bit more work, but you can do anything you want. Actually, I
>> think you can call host scripts with Xen, but having the agent there
>> that runs your own code gives you the flexibility to do whatever.
>>
>> On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>> > I see right now LibvirtComputingResource.java has the following method
>> > that
>> > I might be able to leverage (it's probably not called at present and
>> > would
>> > need to be implemented in my case to discover my iSCSI target and log in
>> > to
>> > it):
>> >
>> >     protected Answer execute(CreateStoragePoolCommand cmd) {
>> >
>> >         return new Answer(cmd, true, "success");
>> >
>> >     }
>> >
>> > I would probably be able to call the KVMStorageManager to have it use my
>> > StorageAdaptor to do what's necessary here.
>> >
>> >
>> >
>> >
>> > On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski
>> > <mi...@solidfire.com> wrote:
>> >>
>> >> Hey Marcus,
>> >>
>> >> When I implemented support in the XenServer and VMware plug-ins for
>> >> "managed" storage, I started at the execute(AttachVolumeCommand)
>> >> methods in
>> >> both plug-ins.
>> >>
>> >> The code there was changed to check the AttachVolumeCommand instance
>> >> for a
>> >> "managed" property.
>> >>
>> >> If managed was false, the normal attach/detach logic would just run and
>> >> the volume would be attached or detached.
>> >>
>> >> If managed was true, new 4.2 logic would run to create (let's talk
>> >> XenServer here) a new SR and a new VDI inside of that SR (or to
>> >> reattach an
>> >> existing VDI inside an existing SR, if this wasn't the first time the
>> >> volume
>> >> was attached). If managed was true and we were detaching the volume,
>> >> the SR
>> >> would be detached from the XenServer hosts.
>> >>
>> >> I am currently walking through the execute(AttachVolumeCommand) in
>> >> LibvirtComputingResource.java.
>> >>
>> >> I see how the XML is constructed to describe whether a disk should be
>> >> attached or detached. I also see how we call in to get a StorageAdapter
>> >> (and
>> >> how I will likely need to write a new one of these).
>> >>
>> >> So, talking in XenServer terminology again, I was wondering if you
>> >> think
>> >> the approach we took in 4.2 with creating and deleting SRs in the
>> >> execute(AttachVolumeCommand) method would work here or if there is some
>> >> other way I should be looking at this for KVM?
>> >>
>> >> As it is right now for KVM, storage has to be set up ahead of time.
>> >> Assuming this is the case, there probably isn't currently a place I can
>> >> easily inject my logic to discover and log in to iSCSI targets. This is
>> >> why
>> >> we did it as needed in the execute(AttachVolumeCommand) for XenServer
>> >> and
>> >> VMware, but I wanted to see if you have an alternative way that might
>> >> be
>> >> better for KVM.
>> >>
>> >> One possible way to do this would be to modify VolumeManagerImpl (or
>> >> whatever its equivalent is in 4.3) before it issues an attach-volume
>> >> command
>> >> to KVM to check to see if the volume is to be attached to managed
>> >> storage.
>> >> If it is, then (before calling the attach-volume command in KVM) call
>> >> the
>> >> create-storage-pool command in KVM (or whatever it might be called).
>> >>
>> >> Just wanted to get some of your thoughts on this.
>> >>
>> >> Thanks!
>> >>
>> >>
>> >> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski
>> >> <mi...@solidfire.com> wrote:
>> >>>
>> >>> Yeah, I remember that StorageProcessor stuff being put in the codebase
>> >>> and having to merge my code into it in 4.2.
>> >>>
>> >>> Thanks for all the details, Marcus! :)
>> >>>
>> >>> I can start digging into what you were talking about now.
>> >>>
>> >>>
>> >>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen
>> >>> <sh...@gmail.com>
>> >>> wrote:
>> >>>>
>> >>>> Looks like things might be slightly different now in 4.2, with
> >> >>>> KVMStorageProcessor.java in the mix. This looks more or less like some
>> >>>> of the commands were ripped out verbatim from
>> >>>> LibvirtComputingResource
>> >>>> and placed here, so in general what I've said is probably still true,
>> >>>> just that the location of things like AttachVolumeCommand might be
>> >>>> different, in this file rather than LibvirtComputingResource.java.
>> >>>>
>> >>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen
>> >>>> <sh...@gmail.com>
>> >>>> wrote:
>> >>>> > Ok, KVM will be close to that, of course, because only the
>> >>>> > hypervisor
>> >>>> > classes differ, the rest is all mgmt server. Creating a volume is
>> >>>> > just
>> >>>> > a db entry until it's deployed for the first time.
>> >>>> > AttachVolumeCommand
>> >>>> > on the agent side (LibvirtStorageAdaptor.java is analogous to
>> >>>> > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
>> >>>> > StorageAdaptor) to log in the host to the target and then you have
>> >>>> > a
>> >>>> > block device.  Maybe libvirt will do that for you, but my quick
>> >>>> > read
>> >>>> > made it sound like the iscsi libvirt pool type is actually a pool,
>> >>>> > not
>> >>>> > a lun or volume, so you'll need to figure out if that works or if
>> >>>> > you'll have to use iscsiadm commands.
>> >>>> >
>> >>>> > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
>> >>>> > doesn't really manage your pool the way you want), you're going to
>> >>>> > have to create a version of KVMStoragePool class and a
>> >>>> > StorageAdaptor
>> >>>> > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
>> >>>> > implementing all of the methods, then in KVMStorageManager.java
>> >>>> > there's a "_storageMapper" map. This is used to select the correct
>> >>>> > adaptor, you can see in this file that every call first pulls the
>> >>>> > correct adaptor out of this map via getStorageAdaptor. So you can
>> >>>> > see
>> >>>> > a comment in this file that says "add other storage adaptors here",
>> >>>> > where it puts to this map, this is where you'd register your
>> >>>> > adaptor.
>> >>>> >
>> >>>> > So, referencing StorageAdaptor.java, createStoragePool accepts all
>> >>>> > of
>> >>>> > the pool data (host, port, name, path) which would be used to log
>> >>>> > the
>> >>>> > host into the initiator. I *believe* the method getPhysicalDisk
>> >>>> > will
>> >>>> > need to do the work of attaching the lun.  AttachVolumeCommand
>> >>>> > calls
>> >>>> > this and then creates the XML diskdef and attaches it to the VM.
>> >>>> > Now,
>> >>>> > one thing you need to know is that createStoragePool is called
>> >>>> > often,
>> >>>> > sometimes just to make sure the pool is there. You may want to
>> >>>> > create
>> >>>> > a map in your adaptor class and keep track of pools that have been
>> >>>> > created, LibvirtStorageAdaptor doesn't have to do this because it
>> >>>> > asks
>> >>>> > libvirt about which storage pools exist. There are also calls to
>> >>>> > refresh the pool stats, and all of the other calls can be seen in
>> >>>> > the
>> >>>> > StorageAdaptor as well. There's a createPhysical disk, clone, etc,
>> >>>> > but
>> >>>> > it's probably a hold-over from 4.1, as I have the vague idea that
>> >>>> > volumes are created on the mgmt server via the plugin now, so
>> >>>> > whatever
>> >>>> > doesn't apply can just be stubbed out (or optionally
>> >>>> > extended/reimplemented here, if you don't mind the hosts talking to
>> >>>> > the san api).
>> >>>> >
>> >>>> > There is a difference between attaching new volumes and launching a
>> >>>> > VM
>> >>>> > with existing volumes.  In the latter case, the VM definition that
>> >>>> > was
>> >>>> > passed to the KVM agent includes the disks, (StartCommand).
>> >>>> >
>> >>>> > I'd be interested in how your pool is defined for Xen, I imagine it
>> >>>> > would need to be kept the same. Is it just a definition to the SAN
>> >>>> > (ip address or some such, port number) and perhaps a volume pool
>> >>>> > name?
>> >>>> >
>> >>>> >> If there is a way for me to update the ACL list on the SAN to have
>> >>>> >> only a
>> >>>> >> single KVM host have access to the volume, that would be ideal.
>> >>>> >
>> >>>> > That depends on your SAN API.  I was under the impression that the
>> >>>> > storage plugin framework allowed for acls, or for you to do
>> >>>> > whatever
>> >>>> > you want for create/attach/delete/snapshot, etc. You'd just call
>> >>>> > your
>> >>>> > SAN API with the host info for the ACLs prior to when the disk is
>> >>>> > attached (or the VM is started).  I'd have to look more at the
>> >>>> > framework to know the details, in 4.1 I would do this in
>> >>>> > getPhysicalDisk just prior to connecting up the LUN.
>> >>>> >
>> >>>> >
>> >>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> >>>> > <mi...@solidfire.com> wrote:
>> >>>> >> OK, yeah, the ACL part will be interesting. That is a bit
>> >>>> >> different
>> >>>> >> from how
>> >>>> >> it works with XenServer and VMware.
>> >>>> >>
>> >>>> >> Just to give you an idea how it works in 4.2 with XenServer:
>> >>>> >>
>> >>>> >> * The user creates a CS volume (this is just recorded in the
>> >>>> >> cloud.volumes
>> >>>> >> table).
>> >>>> >>
>> >>>> >> * The user attaches the volume as a disk to a VM for the first
>> >>>> >> time
>> >>>> >> (if the
>> >>>> >> storage allocator picks the SolidFire plug-in, the storage
>> >>>> >> framework
>> >>>> >> invokes
>> >>>> >> a method on the plug-in that creates a volume on the SAN...info
>> >>>> >> like
>> >>>> >> the IQN
>> >>>> >> of the SAN volume is recorded in the DB).
>> >>>> >>
>> >>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is executed.
>> >>>> >> It
>> >>>> >> determines based on a flag passed in that the storage in question
>> >>>> >> is
>> >>>> >> "CloudStack-managed" storage (as opposed to "traditional"
>> >>>> >> preallocated
>> >>>> >> storage). This tells it to discover the iSCSI target. Once
>> >>>> >> discovered
>> >>>> >> it
>> >>>> >> determines if the iSCSI target already contains a storage
>> >>>> >> repository
>> >>>> >> (it
>> >>>> >> would if this were a re-attach situation). If it does contain an
>> >>>> >> SR
>> >>>> >> already,
>> >>>> >> then there should already be one VDI, as well. If there is no SR,
>> >>>> >> an
>> >>>> >> SR is
>> >>>> >> created and a single VDI is created within it (that takes up about
>> >>>> >> as
>> >>>> >> much
>> >>>> >> space as was requested for the CloudStack volume).
>> >>>> >>
>> >>>> >> * The normal attach-volume logic continues (it depends on the
>> >>>> >> existence of
>> >>>> >> an SR and a VDI).
>> >>>> >>
>> >>>> >> The VMware case is essentially the same (mainly just substitute
>> >>>> >> datastore
>> >>>> >> for SR and VMDK for VDI).
>> >>>> >>
>> >>>> >> In both cases, all hosts in the cluster have discovered the iSCSI
>> >>>> >> target,
>> >>>> >> but only the host that is currently running the VM that is using
>> >>>> >> the
>> >>>> >> VDI (or
>> >>>> >> VMKD) is actually using the disk.
>> >>>> >>
>> >>>> >> Live Migration should be OK because the hypervisors communicate
>> >>>> >> with
>> >>>> >> whatever metadata they have on the SR (or datastore).
>> >>>> >>
>> >>>> >> I see what you're saying with KVM, though.
>> >>>> >>
>> >>>> >> In that case, the hosts are clustered only in CloudStack's eyes.
>> >>>> >> CS
>> >>>> >> controls
>> >>>> >> Live Migration. You don't really need a clustered filesystem on
>> >>>> >> the
>> >>>> >> LUN. The
>> >>>> >> LUN could be handed over raw to the VM using it.
>> >>>> >>
>> >>>> >> If there is a way for me to update the ACL list on the SAN to have
>> >>>> >> only a
>> >>>> >> single KVM host have access to the volume, that would be ideal.
>> >>>> >>
>> >>>> >> Also, I agree I'll need to use iscsiadm to discover and log in to
>> >>>> >> the
>> >>>> >> iSCSI
>> >>>> >> target. I'll also need to take the resultant new device and pass
>> >>>> >> it
>> >>>> >> into the
>> >>>> >> VM.
>> >>>> >>
>> >>>> >> Does this sound reasonable? Please call me out on anything I seem
>> >>>> >> incorrect
>> >>>> >> about. :)
>> >>>> >>
>> >>>> >> Thanks for all the thought on this, Marcus!
>> >>>> >>
>> >>>> >>
>> >>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen
>> >>>> >> <sh...@gmail.com>
>> >>>> >> wrote:
>> >>>> >>>
>> >>>> >>> Perfect. You'll have a domain def ( the VM), a disk def, and the
>> >>>> >>> attach
>> >>>> >>> the disk def to the vm. You may need to do your own
>> >>>> >>> StorageAdaptor
>> >>>> >>> and run
>> >>>> >>> iscsiadm commands to accomplish that, depending on how the
>> >>>> >>> libvirt
>> >>>> >>> iscsi
>> >>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
>> >>>> >>> works on
> >> >>>> >>> xen at the moment, nor is it ideal.
>> >>>> >>>
>> >>>> >>> Your plugin will handle acls as far as which host can see which
>> >>>> >>> luns
>> >>>> >>> as
>> >>>> >>> well, I remember discussing that months ago, so that a disk won't
>> >>>> >>> be
>> >>>> >>> connected until the hypervisor has exclusive access, so it will
>> >>>> >>> be
>> >>>> >>> safe and
>> >>>> >>> fence the disk from rogue nodes that cloudstack loses
>> >>>> >>> connectivity
>> >>>> >>> with. It
>> >>>> >>> should revoke access to everything but the target host... Except
>> >>>> >>> for
>> >>>> >>> during
>> >>>> >>> migration but we can discuss that later, there's a migration prep
>> >>>> >>> process
>> >>>> >>> where the new host can be added to the acls, and the old host can
>> >>>> >>> be
>> >>>> >>> removed
>> >>>> >>> post migration.
>> >>>> >>>
>> >>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski"
>> >>>> >>> <mi...@solidfire.com>
>> >>>> >>> wrote:
>> >>>> >>>>
>> >>>> >>>> Yeah, that would be ideal.
>> >>>> >>>>
>> >>>> >>>> So, I would still need to discover the iSCSI target, log in to
>> >>>> >>>> it,
>> >>>> >>>> then
>> >>>> >>>> figure out what /dev/sdX was created as a result (and leave it
>> >>>> >>>> as
>> >>>> >>>> is - do
>> >>>> >>>> not format it with any file system...clustered or not). I would
>> >>>> >>>> pass that
>> >>>> >>>> device into the VM.
>> >>>> >>>>
>> >>>> >>>> Kind of accurate?
>> >>>> >>>>
>> >>>> >>>>
>> >>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
>> >>>> >>>> <sh...@gmail.com>
>> >>>> >>>> wrote:
>> >>>> >>>>>
>> >>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
>> >>>> >>>>> There are
>> >>>> >>>>> ones that work for block devices rather than files. You can
>> >>>> >>>>> piggy
>> >>>> >>>>> back off
>> >>>> >>>>> of the existing disk definitions and attach it to the vm as a
>> >>>> >>>>> block device.
>> >>>> >>>>> The definition is an XML string per libvirt XML format. You may
>> >>>> >>>>> want to use
>> >>>> >>>>> an alternate path to the disk rather than just /dev/sdx like I
>> >>>> >>>>> mentioned,
>> >>>> >>>>> there are by-id paths to the block devices, as well as other
>> >>>> >>>>> ones
>> >>>> >>>>> that will
>> >>>> >>>>> be consistent and easier for management, not sure how familiar
>> >>>> >>>>> you
>> >>>> >>>>> are with
>> >>>> >>>>> device naming on Linux.
>> >>>> >>>>>
>> >>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen"
>> >>>> >>>>> <sh...@gmail.com>
>> >>>> >>>>> wrote:
>> >>>> >>>>>>
>> >>>> >>>>>> No, as that would rely on virtualized network/iscsi initiator
>> >>>> >>>>>> inside
>> >>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
>> >>>> >>>>>> hypervisor) as
>> >>>> >>>>>> a disk to the VM, rather than attaching some image file that
>> >>>> >>>>>> resides on a
>> >>>> >>>>>> filesystem, mounted on the host, living on a target.
>> >>>> >>>>>>
>> >>>> >>>>>> Actually, if you plan on the storage supporting live migration
>> >>>> >>>>>> I
>> >>>> >>>>>> think
>> >>>> >>>>>> this is the only way. You can't put a filesystem on it and
>> >>>> >>>>>> mount
>> >>>> >>>>>> it in two
>> >>>> >>>>>> places to facilitate migration unless its a clustered
>> >>>> >>>>>> filesystem,
>> >>>> >>>>>> in which
>> >>>> >>>>>> case you're back to shared mount point.
>> >>>> >>>>>>
>> >>>> >>>>>> As far as I'm aware, the xenserver SR style is basically LVM
>> >>>> >>>>>> with
>> >>>> >>>>>> a xen
>> >>>> >>>>>> specific cluster management, a custom CLVM. They don't use a
>> >>>> >>>>>> filesystem
>> >>>> >>>>>> either.
>> >>>> >>>>>>
>> >>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>> >>>> >>>>>> <mi...@solidfire.com> wrote:
>> >>>> >>>>>>>
>> >>>> >>>>>>> When you say, "wire up the lun directly to the vm," do you
>> >>>> >>>>>>> mean
>> >>>> >>>>>>> circumventing the hypervisor? I didn't think we could do that
>> >>>> >>>>>>> in
>> >>>> >>>>>>> CS.
>> >>>> >>>>>>> OpenStack, on the other hand, always circumvents the
>> >>>> >>>>>>> hypervisor,
>> >>>> >>>>>>> as far as I
>> >>>> >>>>>>> know.
>> >>>> >>>>>>>
>> >>>> >>>>>>>
>> >>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
>> >>>> >>>>>>> <sh...@gmail.com>
>> >>>> >>>>>>> wrote:
>> >>>> >>>>>>>>
>> >>>> >>>>>>>> Better to wire up the lun directly to the vm unless there is
>> >>>> >>>>>>>> a
>> >>>> >>>>>>>> good
>> >>>> >>>>>>>> reason not to.
>> >>>> >>>>>>>>
>> >>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen"
>> >>>> >>>>>>>> <sh...@gmail.com>
>> >>>> >>>>>>>> wrote:
>> >>>> >>>>>>>>>
>> >>>> >>>>>>>>> You could do that, but as mentioned I think its a mistake
>> >>>> >>>>>>>>> to
>> >>>> >>>>>>>>> go to
>> >>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
>> >>>> >>>>>>>>> and then putting
>> >>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2
>> >>>> >>>>>>>>> or
>> >>>> >>>>>>>>> even RAW disk
>> >>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops along
>> >>>> >>>>>>>>> the
>> >>>> >>>>>>>>> way, and have
>> >>>> >>>>>>>>> more overhead with the filesystem and its journaling, etc.
>> >>>> >>>>>>>>>
>> >>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>> >>>> >>>>>>>>> <mi...@solidfire.com> wrote:
>> >>>> >>>>>>>>>>
>> >>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
>> >>>> >>>>>>>>>> CS.
>> >>>> >>>>>>>>>>
>> >>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today is by
>> >>>> >>>>>>>>>> selecting SharedMountPoint and specifying the location of
>> >>>> >>>>>>>>>> the
>> >>>> >>>>>>>>>> share.
>> >>>> >>>>>>>>>>
>> >>>> >>>>>>>>>> They can set up their share using Open iSCSI by
>> >>>> >>>>>>>>>> discovering
>> >>>> >>>>>>>>>> their
>> >>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere
>> >>>> >>>>>>>>>> on
>> >>>> >>>>>>>>>> their file
>> >>>> >>>>>>>>>> system.
>> >>>> >>>>>>>>>>
>> >>>> >>>>>>>>>> Would it make sense for me to just do that discovery,
>> >>>> >>>>>>>>>> logging
>> >>>> >>>>>>>>>> in,
>> >>>> >>>>>>>>>> and mounting behind the scenes for them and letting the
>> >>>> >>>>>>>>>> current code manage
>> >>>> >>>>>>>>>> the rest as it currently does?
>> >>>> >>>>>>>>>>
>> >>>> >>>>>>>>>>
>> >>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>> >>>> >>>>>>>>>> <sh...@gmail.com> wrote:
>> >>>> >>>>>>>>>>>
>> >>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
>> >>>> >>>>>>>>>>> catch up
>> >>>> >>>>>>>>>>> on the work done in KVM, but this is basically just disk
>> >>>> >>>>>>>>>>> snapshots + memory
>> >>>> >>>>>>>>>>> dump. I still think disk snapshots would preferably be
>> >>>> >>>>>>>>>>> handled by the SAN,
>> >>>> >>>>>>>>>>> and then memory dumps can go to secondary storage or
>> >>>> >>>>>>>>>>> something else. This is
>> >>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will want to
>> >>>> >>>>>>>>>>> see how others are
>> >>>> >>>>>>>>>>> planning theirs.
>> >>>> >>>>>>>>>>>
>> >>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen"
>> >>>> >>>>>>>>>>> <sh...@gmail.com>
>> >>>> >>>>>>>>>>> wrote:
>> >>>> >>>>>>>>>>>>
>> >>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
>> >>>> >>>>>>>>>>>> style
>> >>>> >>>>>>>>>>>> on an
>> >>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
>> >>>> >>>>>>>>>>>> format.
>> >>>> >>>>>>>>>>>> Otherwise you're
>> >>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it, creating
>> >>>> >>>>>>>>>>>> a
>> >>>> >>>>>>>>>>>> QCOW2 disk image,
>> >>>> >>>>>>>>>>>> and that seems unnecessary and a performance killer.
>> >>>> >>>>>>>>>>>>
>> >>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
>> >>>> >>>>>>>>>>>> VM, and
>> >>>> >>>>>>>>>>>> handling snapshots on the SAN side via the storage
>> >>>> >>>>>>>>>>>> plugin
>> >>>> >>>>>>>>>>>> is best. My
>> >>>> >>>>>>>>>>>> impression from the storage plugin refactor was that
>> >>>> >>>>>>>>>>>> there
>> >>>> >>>>>>>>>>>> was a snapshot
>> >>>> >>>>>>>>>>>> service that would allow the San to handle snapshots.
>> >>>> >>>>>>>>>>>>
>> >>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen"
>> >>>> >>>>>>>>>>>> <sh...@gmail.com>
>> >>>> >>>>>>>>>>>> wrote:
>> >>>> >>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
>> >>>> >>>>>>>>>>>>> end, if
>> >>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
>> >>>> >>>>>>>>>>>>> call
>> >>>> >>>>>>>>>>>>> your plugin for
>> >>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
>> >>>> >>>>>>>>>>>>> far as space, that
>> >>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
>> >>>> >>>>>>>>>>>>> carve out luns from a
>> >>>> >>>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
>> >>>> >>>>>>>>>>>>> independent of the
>> >>>> >>>>>>>>>>>>> LUN size the host sees.
>> >>>> >>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>> >>>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>> Hey Marcus,
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
>> >>>> >>>>>>>>>>>>>> won't
>> >>>> >>>>>>>>>>>>>> work
>> >>>> >>>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
>> >>>> >>>>>>>>>>>>>> VDI for
>> >>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage repository
>> >>>> >>>>>>>>>>>>>> as
>> >>>> >>>>>>>>>>>>>> the volume is on.
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for
>> >>>> >>>>>>>>>>>>>> XenServer
>> >>>> >>>>>>>>>>>>>> and
>> >>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>> >>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd
>> >>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what the user
>> >>>> >>>>>>>>>>>>>> requested for the
>> >>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN
>> >>>> >>>>>>>>>>>>>> thinly
>> >>>> >>>>>>>>>>>>>> provisions volumes,
>> >>>> >>>>>>>>>>>>>> so the space is not actually used unless it needs to
>> >>>> >>>>>>>>>>>>>> be).
>> >>>> >>>>>>>>>>>>>> The CloudStack
>> >>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN volume
>> >>>> >>>>>>>>>>>>>> until
>> >>>> >>>>>>>>>>>>>> a hypervisor
>> >>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on
>> >>>> >>>>>>>>>>>>>> the
>> >>>> >>>>>>>>>>>>>> SAN volume.
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no
>> >>>> >>>>>>>>>>>>>> creation
>> >>>> >>>>>>>>>>>>>> of
>> >>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even
>> >>>> >>>>>>>>>>>>>> if
>> >>>> >>>>>>>>>>>>>> there were support
>> >>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
>> >>>> >>>>>>>>>>>>>> iSCSI
>> >>>> >>>>>>>>>>>>>> target), then I
>> >>>> >>>>>>>>>>>>>> don't see how using this model will work.
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
>> >>>> >>>>>>>>>>>>>> works
>> >>>> >>>>>>>>>>>>>> with DIR?
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>> What do you think?
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>> Thanks
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>> >>>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> >>>> >>>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
>> >>>> >>>>>>>>>>>>>>> today.
>> >>>> >>>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
>> >>>> >>>>>>>>>>>>>>> well
>> >>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>> >>>> >>>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>> >>>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> >>>> >>>>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe
>> >>>> >>>>>>>>>>>>>>>> it
>> >>>> >>>>>>>>>>>>>>>> just
>> >>>> >>>>>>>>>>>>>>>> acts like a
>> >>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
>> >>>> >>>>>>>>>>>>>>>> end-user
>> >>>> >>>>>>>>>>>>>>>> is
>> >>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
>> >>>> >>>>>>>>>>>>>>>> hosts can
>> >>>> >>>>>>>>>>>>>>>> access,
>> >>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
>> >>>> >>>>>>>>>>>>>>>> storage.
>> >>>> >>>>>>>>>>>>>>>> It could
>> >>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered
>> >>>> >>>>>>>>>>>>>>>> filesystem,
>> >>>> >>>>>>>>>>>>>>>> cloudstack just
>> >>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM
>> >>>> >>>>>>>>>>>>>>>> images.
>> >>>> >>>>>>>>>>>>>>>>
>> >>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>> >>>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> >>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
>> >>>> >>>>>>>>>>>>>>>> > same
>> >>>> >>>>>>>>>>>>>>>> > time.
>> >>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
>> >>>> >>>>>>>>>>>>>>>> >
>> >>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>> >>>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>> >>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>> >>>> >>>>>>>>>>>>>>>> >>
>> >>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> >>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
>> >>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
>> >>>> >>>>>>>>>>>>>>>> >> default              active     yes
>> >>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
>> >>>> >>>>>>>>>>>>>>>> >>
>> >>>> >>>>>>>>>>>>>>>> >>
>> >>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>> >>>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>> >>>> >>>>>>>>>>>>>>>> >>>
>> >>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>> >>>> >>>>>>>>>>>>>>>> >>>
>> >>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
>> >>>> >>>>>>>>>>>>>>>> >>>
>> >>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
>> >>>> >>>>>>>>>>>>>>>> >>> based on
>> >>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
>> >>>> >>>>>>>>>>>>>>>> >>>
>> >>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
>> >>>> >>>>>>>>>>>>>>>> >>> LUN, so
>> >>>> >>>>>>>>>>>>>>>> >>> there would only
>> >>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>> >>>> >>>>>>>>>>>>>>>> >>> (libvirt)
>> >>>> >>>>>>>>>>>>>>>> >>> storage pool.
>> >>>> >>>>>>>>>>>>>>>> >>>
>> >>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys
>> >>>> >>>>>>>>>>>>>>>> >>> iSCSI
>> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
>> >>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
>> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>> >>>> >>>>>>>>>>>>>>>> >>> does
>> >>>> >>>>>>>>>>>>>>>> >>> not support
>> >>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>> >>>> >>>>>>>>>>>>>>>> >>>
>> >>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see
>> >>>> >>>>>>>>>>>>>>>> >>> if
>> >>>> >>>>>>>>>>>>>>>> >>> libvirt
>> >>>> >>>>>>>>>>>>>>>> >>> supports
>> >>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
>> >>>> >>>>>>>>>>>>>>>> >>> since
>> >>>> >>>>>>>>>>>>>>>> >>> each one of its
>> >>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs).
>> >>>> >>>>>>>>>>>>>>>> >>>
>> >>>> >>>>>>>>>>>>>>>> >>>
>> >>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>> >>>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>> >>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>> >>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>         }
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>         @Override
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>         }
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>     }
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is
>> >>>> >>>>>>>>>>>>>>>> >>>> currently
>> >>>> >>>>>>>>>>>>>>>> >>>> being
>> >>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
>> >>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
>> >>>> >>>>>>>>>>>>>>>> >>>> someone
>> >>>> >>>>>>>>>>>>>>>> >>>> selects the
>> >>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI,
>> >>>> >>>>>>>>>>>>>>>> >>>> is
>> >>>> >>>>>>>>>>>>>>>> >>>> that
>> >>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
>> >>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>> Thanks!
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus
>> >>>> >>>>>>>>>>>>>>>> >>>> Sorensen
>> >>>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>> >>>> >>>>>>>>>>>>>>>> >>>> wrote:
>> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
>> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
>> >>>> >>>>>>>>>>>>>>>> >>>>> server, and
>> >>>> >>>>>>>>>>>>>>>> >>>>> cannot be
>> >>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I
>> >>>> >>>>>>>>>>>>>>>> >>>>> believe
>> >>>> >>>>>>>>>>>>>>>> >>>>> your
>> >>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
>> >>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging
>> >>>> >>>>>>>>>>>>>>>> >>>>> in
>> >>>> >>>>>>>>>>>>>>>> >>>>> and
>> >>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
>> >>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work
>> >>>> >>>>>>>>>>>>>>>> >>>>> in
>> >>>> >>>>>>>>>>>>>>>> >>>>> the Xen
>> >>>> >>>>>>>>>>>>>>>> >>>>> stuff).
>> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
>> >>>> >>>>>>>>>>>>>>>> >>>>> provides
>> >>>> >>>>>>>>>>>>>>>> >>>>> a 1:1
>> >>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
>> >>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device
>> >>>> >>>>>>>>>>>>>>>> >>>>> as
>> >>>> >>>>>>>>>>>>>>>> >>>>> a
>> >>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
>> >>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
>> >>>> >>>>>>>>>>>>>>>> >>>>> about
>> >>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
>> >>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your
>> >>>> >>>>>>>>>>>>>>>> >>>>> own
>> >>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
>> >>>> >>>>>>>>>>>>>>>> >>>>> rather than changing
>> >>>> >>>>>>>>>>>>>>>> >>>>> LibvirtStorageAdaptor.java.
>> >>>> >>>>>>>>>>>>>>>> >>>>> We
>> >>>> >>>>>>>>>>>>>>>> >>>>> can cross that
>> >>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
>> >>>> >>>>>>>>>>>>>>>> >>>>> java
>> >>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
>> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/
>> >>>> >>>>>>>>>>>>>>>> >>>>> Normally,
>> >>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
>> >>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
>> >>>> >>>>>>>>>>>>>>>> >>>>> that
>> >>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>> >>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see
>> >>>> >>>>>>>>>>>>>>>> >>>>> how
>> >>>> >>>>>>>>>>>>>>>> >>>>> that
>> >>>> >>>>>>>>>>>>>>>> >>>>> is done for
>> >>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
>> >>>> >>>>>>>>>>>>>>>> >>>>> java
>> >>>> >>>>>>>>>>>>>>>> >>>>> code
>> >>>> >>>>>>>>>>>>>>>> >>>>> to see if you
>> >>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
>> >>>> >>>>>>>>>>>>>>>> >>>>> storage
>> >>>> >>>>>>>>>>>>>>>> >>>>> pools before you
>> >>>> >>>>>>>>>>>>>>>> >>>>> get started.
>> >>>> >>>>>>>>>>>>>>>> >>>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike
>> >>>> >>>>>>>>>>>>>>>> >>>>> Tutkowski
>> >>>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>> >>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
>> >>>> >>>>>>>>>>>>>>>> >>>>> > more,
>> >>>> >>>>>>>>>>>>>>>> >>>>> > but
>> >>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
>> >>>> >>>>>>>>>>>>>>>> >>>>> > supports
>> >>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
>> >>>> >>>>>>>>>>>>>>>> >>>>> > targets,
>> >>>> >>>>>>>>>>>>>>>> >>>>> > right?
>> >>>> >>>>>>>>>>>>>>>> >>>>> >
>> >>>> >>>>>>>>>>>>>>>> >>>>> >
>> >>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
>> >>>> >>>>>>>>>>>>>>>> >>>>> > Tutkowski
>> >>>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
>> >>>> >>>>>>>>>>>>>>>> >>>>> >> classes
>> >>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
>> >>>> >>>>>>>>>>>>>>>> >>>>> >> last
>> >>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
>> >>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen
>> >>>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> iscsi
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> packages
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> for
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> login.
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> LibvirtStorageAdaptor.java
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> and
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> release
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> 1:1
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> would
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> could
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM.
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> might
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Inc.
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
>> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> cloud™

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Thanks, Marcus

About this:

"When the agent connects to the
management server, it registers all pools in the cluster with the
agent."

So, my plug-in allows you to create zone-wide primary storage. This just
means that any cluster can use the SAN (the SAN was registered as primary
storage as opposed to a preallocated volume from the SAN). Once you create
a primary storage based on this plug-in, the storage framework will invoke
the plug-in, as needed, to create and delete volumes on the SAN. For
example, you could have one SolidFire primary storage (zone wide) and
currently have 100 volumes created on the SAN to support it.

In this case, what will the management server be registering with the agent
in ModifyStoragePoolCommand? If only the storage pool (primary storage) is
passed in, that will be too vague, as it does not tell the agent which
volumes have been created on the SAN.
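
To make that concrete, here is a rough sketch of the two kinds of information
involved (the class and field names are illustrative assumptions, not actual
CloudStack or plug-in types):

    // Pool-level info: roughly what a one-time pool registration such as
    // ModifyStoragePoolCommand could carry -- just the SAN/primary storage
    // endpoint, nothing about individual LUNs.
    class SolidFirePoolInfo {
        String poolUuid;    // CloudStack primary storage (pool) uuid
        String storageVip;  // iSCSI portal address on the SAN
        int storagePort;    // e.g. 3260
    }

    // Volume-level info: exists only per CloudStack volume, because the
    // plug-in creates each SAN volume on demand, so it cannot be part of
    // the one-time pool registration.
    class SolidFireVolumeInfo {
        String volumeUuid;  // CloudStack volume uuid
        String iqn;         // IQN of the SAN volume backing this one volume
    }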

Thanks


On Sun, Sep 15, 2013 at 11:53 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Yes, see my previous email from the 13th. You can create your own
> KVMStoragePool class, and StorageAdaptor class, like the libvirt ones
> have. The previous email outlines how to add your own StorageAdaptor
> alongside LibvirtStorageAdaptor to take over all of the calls
> (createStoragePool, getStoragePool, etc). As mentioned,
> getPhysicalDisk I believe will be the one you use to actually attach a
> lun.
>
> Ignore CreateStoragePoolCommand. When the agent connects to the
> management server, it registers all pools in the cluster with the
> agent. It will call ModifyStoragePoolCommand, passing your storage
> pool object (with all of the settings for your SAN). This in turn
> calls _storagePoolMgr.createStoragePool, which will route through
> KVMStoragePoolManager to your storage adapter that you've registered.
> The last argument to createStoragePool is the pool type, which is used
> to select a StorageAdaptor.
>
> From then on, most calls will only pass the volume info, and the
> volume will have the uuid of the storage pool. For this reason, your
> adaptor class needs to have a static Map variable that contains pool
> uuid and pool object. Whenever they call createStoragePool on your
> adaptor you add that pool to the map so that subsequent volume calls
> can look up the pool details for the volume by pool uuid. With the
> Libvirt adaptor, libvirt keeps track of that for you.
>
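
Putting those last few paragraphs together, a minimal sketch of such an
adaptor might look like the following. The CloudStack types (StorageAdaptor,
KVMStoragePool, KVMPhysicalDisk) are the ones named in this thread; the
method signatures are only approximations of the real interface, and the
SolidFire class names are hypothetical.

    import java.util.HashMap;
    import java.util.Map;
    // CloudStack agent imports omitted for brevity

    public class SolidFireStorageAdaptor implements StorageAdaptor {
        // pool uuid -> pool object; unlike the Libvirt adaptor, nothing
        // else is tracking these pools for us
        private static final Map<String, KVMStoragePool> _pools =
                new HashMap<String, KVMStoragePool>();

        public KVMStoragePool createStoragePool(String uuid, String host, int port,
                String path, String userInfo, Storage.StoragePoolType type) {
            KVMStoragePool pool = _pools.get(uuid);
            if (pool == null) {
                // hypothetical pool class that just holds the SAN endpoint details
                pool = new SolidFireStoragePool(uuid, host, port, path);
                _pools.put(uuid, pool);
            }
            return pool; // safe to call repeatedly
        }

        public KVMStoragePool getStoragePool(String uuid) {
            return _pools.get(uuid);
        }

        public KVMPhysicalDisk getPhysicalDisk(String volumeUuid, KVMStoragePool pool) {
            // log in to this volume's iSCSI target (see the sketch further
            // down) and hand back the resulting block device; this assumes
            // the volume identifier maps to the target IQN
            String dev = "/dev/disk/by-path/ip-" + "192.168.1.10:3260"  // placeholder portal
                    + "-iscsi-" + volumeUuid + "-lun-0";
            return new KVMPhysicalDisk(dev, volumeUuid, pool);
        }

        // remaining StorageAdaptor methods omitted
    }
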
> When createStoragePool is called, you can log into the iscsi target
> (or make sure you are already logged in, as it can be called over
> again at any time), and when attach volume commands are fired off, you
> can attach individual LUNs that are asked for, or rescan (say that the
> plugin created a new ACL just prior to calling attach), or whatever is
> necessary.
>
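
For the login itself, shelling out to iscsiadm from the adaptor should be
enough; a rough sketch (the portal and IQN values are whatever the plug-in
recorded for the volume):

    // add the node record for the target, then log in; afterwards the LUN
    // appears under /dev/disk/by-path/ip-<portal>-iscsi-<iqn>-lun-<n>
    private void iscsiLogin(String portal, String iqn) throws Exception {
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "-o", "new");
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
    }

    // log out and remove the node record when the volume is detached
    private void iscsiLogout(String portal, String iqn) throws Exception {
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--logout");
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "-o", "delete");
    }

    private void run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        int rc = p.waitFor();
        if (rc != 0) {
            throw new RuntimeException(cmd[0] + " exited with code " + rc);
        }
    }
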
> KVM is a bit more work, but you can do anything you want. Actually, I
> think you can call host scripts with Xen, but having the agent there
> that runs your own code gives you the flexibility to do whatever.
>
> On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
> > I see right now LibvirtComputingResource.java has the following method
> that
> > I might be able to leverage (it's probably not called at present and
> would
> > need to be implemented in my case to discover my iSCSI target and log in
> to
> > it):
> >
> >     protected Answer execute(CreateStoragePoolCommand cmd) {
> >
> >         return new Answer(cmd, true, "success");
> >
> >     }
> >
> > I would probably be able to call the KVMStorageManager to have it use my
> > StorageAdaptor to do what's necessary here.
> >
> >
> >
> >
> > On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski
> > <mi...@solidfire.com> wrote:
> >>
> >> Hey Marcus,
> >>
> >> When I implemented support in the XenServer and VMware plug-ins for
> >> "managed" storage, I started at the execute(AttachVolumeCommand)
> methods in
> >> both plug-ins.
> >>
> >> The code there was changed to check the AttachVolumeCommand instance
> for a
> >> "managed" property.
> >>
> >> If managed was false, the normal attach/detach logic would just run and
> >> the volume would be attached or detached.
> >>
> >> If managed was true, new 4.2 logic would run to create (let's talk
> >> XenServer here) a new SR and a new VDI inside of that SR (or to
> reattach an
> >> existing VDI inside an existing SR, if this wasn't the first time the
> volume
> >> was attached). If managed was true and we were detaching the volume,
> the SR
> >> would be detached from the XenServer hosts.
> >>
> >> I am currently walking through the execute(AttachVolumeCommand) in
> >> LibvirtComputingResource.java.
> >>
> >> I see how the XML is constructed to describe whether a disk should be
> >> attached or detached. I also see how we call in to get a StorageAdaptor
> (and
> >> how I will likely need to write a new one of these).
> >>
> >> So, talking in XenServer terminology again, I was wondering if you think
> >> the approach we took in 4.2 with creating and deleting SRs in the
> >> execute(AttachVolumeCommand) method would work here or if there is some
> >> other way I should be looking at this for KVM?
> >>
> >> As it is right now for KVM, storage has to be set up ahead of time.
> >> Assuming this is the case, there probably isn't currently a place I can
> >> easily inject my logic to discover and log in to iSCSI targets. This is
> why
> >> we did it as needed in the execute(AttachVolumeCommand) for XenServer
> and
> >> VMware, but I wanted to see if you have an alternative way that might be
> >> better for KVM.
> >>
> >> One possible way to do this would be to modify VolumeManagerImpl (or
> >> whatever its equivalent is in 4.3) before it issues an attach-volume
> command
> >> to KVM to check to see if the volume is to be attached to managed
> storage.
> >> If it is, then (before calling the attach-volume command in KVM) call
> the
> >> create-storage-pool command in KVM (or whatever it might be called).
> >>
> >> Just wanted to get some of your thoughts on this.
> >>
> >> Thanks!
> >>
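To illustrate the kind of check being discussed above, the KVM side could end
up looking roughly like this. This is only pseudo-Java: the "managed"
accessors are assumptions carried over from the 4.2 XenServer/VMware work,
not methods that exist in the KVM code today.

    protected Answer execute(AttachVolumeCommand cmd) {
        if (cmd.isManaged()) {  // hypothetical flag set by the mgmt server
            // make sure this host is logged in to the volume's iSCSI target
            // before the normal attach logic runs -- the KVM analogue of
            // creating the SR/datastore on XenServer/VMware
            _storagePoolMgr.createStoragePool(cmd.getPoolUuid(), cmd.getStorageHost(),
                    cmd.getStoragePort(), cmd.get_iScsiName(), null,
                    StoragePoolType.Iscsi);  // accessor and enum names are guesses
        }
        return attachOrDetachVolume(cmd);  // the existing attach/detach path (name approximate)
    }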
> >>
> >> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski
> >> <mi...@solidfire.com> wrote:
> >>>
> >>> Yeah, I remember that StorageProcessor stuff being put in the codebase
> >>> and having to merge my code into it in 4.2.
> >>>
> >>> Thanks for all the details, Marcus! :)
> >>>
> >>> I can start digging into what you were talking about now.
> >>>
> >>>
> >>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen <shadowsor@gmail.com
> >
> >>> wrote:
> >>>>
> >>>> Looks like things might be slightly different now in 4.2, with
> >>>> KVMStorageProcessor.java in the mix. This looks more or less like some
> >>>> of the commands were ripped out verbatim from LibvirtComputingResource
> >>>> and placed here, so in general what I've said is probably still true,
> >>>> just that the location of things like AttachVolumeCommand might be
> >>>> different, in this file rather than LibvirtComputingResource.java.
> >>>>
> >>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <
> shadowsor@gmail.com>
> >>>> wrote:
> >>>> > Ok, KVM will be close to that, of course, because only the
> hypervisor
> >>>> > classes differ, the rest is all mgmt server. Creating a volume is
> just
> >>>> > a db entry until it's deployed for the first time.
> AttachVolumeCommand
> >>>> > on the agent side (LibvirtStorageAdaptor.java is analogous to
> >>>> > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> >>>> > StorageAdaptor) to log in the host to the target and then you have a
> >>>> > block device.  Maybe libvirt will do that for you, but my quick read
> >>>> > made it sound like the iscsi libvirt pool type is actually a pool,
> not
> >>>> > a lun or volume, so you'll need to figure out if that works or if
> >>>> > you'll have to use iscsiadm commands.
> >>>> >
> >>>> > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> >>>> > doesn't really manage your pool the way you want), you're going to
> >>>> > have to create a version of KVMStoragePool class and a
> StorageAdaptor
> >>>> > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> >>>> > implementing all of the methods, then in KVMStorageManager.java
> >>>> > there's a "_storageMapper" map. This is used to select the correct
> >>>> > adaptor, you can see in this file that every call first pulls the
> >>>> > correct adaptor out of this map via getStorageAdaptor. So you can
> see
> >>>> > a comment in this file that says "add other storage adaptors here",
> >>>> > where it puts to this map, this is where you'd register your
> adaptor.
> >>>> >
> >>>> > So, referencing StorageAdaptor.java, createStoragePool accepts all
> of
> >>>> > the pool data (host, port, name, path) which would be used to log
> the
> >>>> > host into the initiator. I *believe* the method getPhysicalDisk will
> >>>> > need to do the work of attaching the lun.  AttachVolumeCommand calls
> >>>> > this and then creates the XML diskdef and attaches it to the VM.
> Now,
> >>>> > one thing you need to know is that createStoragePool is called
> often,
> >>>> > sometimes just to make sure the pool is there. You may want to
> create
> >>>> > a map in your adaptor class and keep track of pools that have been
> >>>> > created, LibvirtStorageAdaptor doesn't have to do this because it
> asks
> >>>> > libvirt about which storage pools exist. There are also calls to
> >>>> > refresh the pool stats, and all of the other calls can be seen in
> the
> >>>> > StorageAdaptor as well. There's a createPhysicalDisk, clone, etc,
> but
> >>>> > it's probably a hold-over from 4.1, as I have the vague idea that
> >>>> > volumes are created on the mgmt server via the plugin now, so
> whatever
> >>>> > doesn't apply can just be stubbed out (or optionally
> >>>> > extended/reimplemented here, if you don't mind the hosts talking to
> >>>> > the san api).
> >>>> >
> >>>> > There is a difference between attaching new volumes and launching a
> VM
> >>>> > with existing volumes.  In the latter case, the VM definition that
> was
> >>>> > passed to the KVM agent includes the disks, (StartCommand).
> >>>> >
> >>>> > I'd be interested in how your pool is defined for Xen, I imagine it
> >>>> > would need to be kept the same. Is it just a definition to the SAN
> >>>> > (ip address or some such, port number) and perhaps a volume pool
> name?
> >>>> >
> >>>> >> If there is a way for me to update the ACL list on the SAN to have
> >>>> >> only a
> >>>> >> single KVM host have access to the volume, that would be ideal.
> >>>> >
> >>>> > That depends on your SAN API.  I was under the impression that the
> >>>> > storage plugin framework allowed for acls, or for you to do whatever
> >>>> > you want for create/attach/delete/snapshot, etc. You'd just call
> your
> >>>> > SAN API with the host info for the ACLs prior to when the disk is
> >>>> > attached (or the VM is started).  I'd have to look more at the
> >>>> > framework to know the details, in 4.1 I would do this in
> >>>> > getPhysicalDisk just prior to connecting up the LUN.
> >>>> >
> >>>> >
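For what it's worth, the registration described above would presumably be a
one-liner next to that "add other storage adaptors here" comment, something
like the following (the key shown is just an illustration; it would really be
whatever pool type the SolidFire primary storage reports):

    // in KVMStorageManager.java, alongside the existing entry for the
    // Libvirt adaptor
    _storageMapper.put("SolidFire", new SolidFireStorageAdaptor());
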
> >>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> >>>> > <mi...@solidfire.com> wrote:
> >>>> >> OK, yeah, the ACL part will be interesting. That is a bit different
> >>>> >> from how
> >>>> >> it works with XenServer and VMware.
> >>>> >>
> >>>> >> Just to give you an idea how it works in 4.2 with XenServer:
> >>>> >>
> >>>> >> * The user creates a CS volume (this is just recorded in the
> >>>> >> cloud.volumes
> >>>> >> table).
> >>>> >>
> >>>> >> * The user attaches the volume as a disk to a VM for the first time
> >>>> >> (if the
> >>>> >> storage allocator picks the SolidFire plug-in, the storage
> framework
> >>>> >> invokes
> >>>> >> a method on the plug-in that creates a volume on the SAN...info
> like
> >>>> >> the IQN
> >>>> >> of the SAN volume is recorded in the DB).
> >>>> >>
> >>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
> >>>> >> determines based on a flag passed in that the storage in question
> is
> >>>> >> "CloudStack-managed" storage (as opposed to "traditional"
> >>>> >> preallocated
> >>>> >> storage). This tells it to discover the iSCSI target. Once
> discovered
> >>>> >> it
> >>>> >> determines if the iSCSI target already contains a storage
> repository
> >>>> >> (it
> >>>> >> would if this were a re-attach situation). If it does contain an SR
> >>>> >> already,
> >>>> >> then there should already be one VDI, as well. If there is no SR,
> an
> >>>> >> SR is
> >>>> >> created and a single VDI is created within it (that takes up about
> as
> >>>> >> much
> >>>> >> space as was requested for the CloudStack volume).
> >>>> >>
> >>>> >> * The normal attach-volume logic continues (it depends on the
> >>>> >> existence of
> >>>> >> an SR and a VDI).
> >>>> >>
> >>>> >> The VMware case is essentially the same (mainly just substitute
> >>>> >> datastore
> >>>> >> for SR and VMDK for VDI).
> >>>> >>
> >>>> >> In both cases, all hosts in the cluster have discovered the iSCSI
> >>>> >> target,
> >>>> >> but only the host that is currently running the VM that is using
> the
> >>>> >> VDI (or
> >>>> >> VMDK) is actually using the disk.
> >>>> >>
> >>>> >> Live Migration should be OK because the hypervisors communicate
> with
> >>>> >> whatever metadata they have on the SR (or datastore).
> >>>> >>
> >>>> >> I see what you're saying with KVM, though.
> >>>> >>
> >>>> >> In that case, the hosts are clustered only in CloudStack's eyes. CS
> >>>> >> controls
> >>>> >> Live Migration. You don't really need a clustered filesystem on the
> >>>> >> LUN. The
> >>>> >> LUN could be handed over raw to the VM using it.
> >>>> >>
> >>>> >> If there is a way for me to update the ACL list on the SAN to have
> >>>> >> only a
> >>>> >> single KVM host have access to the volume, that would be ideal.
> >>>> >>
> >>>> >> Also, I agree I'll need to use iscsiadm to discover and log in to
> the
> >>>> >> iSCSI
> >>>> >> target. I'll also need to take the resultant new device and pass it
> >>>> >> into the
> >>>> >> VM.
> >>>> >>
> >>>> >> Does this sound reasonable? Please call me out on anything I seem
> >>>> >> incorrect
> >>>> >> about. :)
> >>>> >>
> >>>> >> Thanks for all the thought on this, Marcus!
> >>>> >>
> >>>> >>
> >>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen
> >>>> >> <sh...@gmail.com>
> >>>> >> wrote:
> >>>> >>>
> >>>> >>> Perfect. You'll have a domain def (the VM), a disk def, and then
> >>>> >>> attach the disk def to the vm. You may need to do your own StorageAdaptor
> >>>> >>> and run
> >>>> >>> iscsiadm commands to accomplish that, depending on how the libvirt
> >>>> >>> iscsi
> >>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
> >>>> >>> works on
> >>>> >>> Xen at the moment, nor is it ideal.
> >>>> >>>
> >>>> >>> Your plugin will handle acls as far as which host can see which
> luns
> >>>> >>> as
> >>>> >>> well, I remember discussing that months ago, so that a disk won't
> be
> >>>> >>> connected until the hypervisor has exclusive access, so it will be
> >>>> >>> safe and
> >>>> >>> fence the disk from rogue nodes that cloudstack loses connectivity
> >>>> >>> with. It
> >>>> >>> should revoke access to everything but the target host... Except
> for
> >>>> >>> during
> >>>> >>> migration but we can discuss that later, there's a migration prep
> >>>> >>> process
> >>>> >>> where the new host can be added to the acls, and the old host can
> be
> >>>> >>> removed
> >>>> >>> post migration.
> >>>> >>>
> >>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski"
> >>>> >>> <mi...@solidfire.com>
> >>>> >>> wrote:
> >>>> >>>>
> >>>> >>>> Yeah, that would be ideal.
> >>>> >>>>
> >>>> >>>> So, I would still need to discover the iSCSI target, log in to
> it,
> >>>> >>>> then
> >>>> >>>> figure out what /dev/sdX was created as a result (and leave it as
> >>>> >>>> is - do
> >>>> >>>> not format it with any file system...clustered or not). I would
> >>>> >>>> pass that
> >>>> >>>> device into the VM.
> >>>> >>>>
> >>>> >>>> Kind of accurate?
> >>>> >>>>
> >>>> >>>>
> >>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
> >>>> >>>> <sh...@gmail.com>
> >>>> >>>> wrote:
> >>>> >>>>>
> >>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
> >>>> >>>>> There are
> >>>> >>>>> ones that work for block devices rather than files. You can
> piggy
> >>>> >>>>> back off
> >>>> >>>>> of the existing disk definitions and attach it to the vm as a
> >>>> >>>>> block device.
> >>>> >>>>> The definition is an XML string per libvirt XML format. You may
> >>>> >>>>> want to use
> >>>> >>>>> an alternate path to the disk rather than just /dev/sdx like I
> >>>> >>>>> mentioned,
> >>>> >>>>> there are by-id paths to the block devices, as well as other
> ones
> >>>> >>>>> that will
> >>>> >>>>> be consistent and easier for management, not sure how familiar
> you
> >>>> >>>>> are with
> >>>> >>>>> device naming on Linux.
> >>>> >>>>>
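For reference, the disk definition being described boils down to a libvirt
XML fragment along these lines (the by-path device and target name are just
examples):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2013-09.com.solidfire:vol1-lun-0'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
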
> >>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <shadowsor@gmail.com
> >
> >>>> >>>>> wrote:
> >>>> >>>>>>
> >>>> >>>>>> No, as that would rely on virtualized network/iscsi initiator
> >>>> >>>>>> inside
> >>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
> >>>> >>>>>> hypervisor) as
> >>>> >>>>>> a disk to the VM, rather than attaching some image file that
> >>>> >>>>>> resides on a
> >>>> >>>>>> filesystem, mounted on the host, living on a target.
> >>>> >>>>>>
> >>>> >>>>>> Actually, if you plan on the storage supporting live migration
> I
> >>>> >>>>>> think
> >>>> >>>>>> this is the only way. You can't put a filesystem on it and
> mount
> >>>> >>>>>> it in two
> >>>> >>>>>> places to facilitate migration unless its a clustered
> filesystem,
> >>>> >>>>>> in which
> >>>> >>>>>> case you're back to shared mount point.
> >>>> >>>>>>
> >>>> >>>>>> As far as I'm aware, the xenserver SR style is basically LVM
> with
> >>>> >>>>>> a xen
> >>>> >>>>>> specific cluster management, a custom CLVM. They don't use a
> >>>> >>>>>> filesystem
> >>>> >>>>>> either.
> >>>> >>>>>>
> >>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> >>>> >>>>>> <mi...@solidfire.com> wrote:
> >>>> >>>>>>>
> >>>> >>>>>>> When you say, "wire up the lun directly to the vm," do you
> mean
> >>>> >>>>>>> circumventing the hypervisor? I didn't think we could do that
> in
> >>>> >>>>>>> CS.
> >>>> >>>>>>> OpenStack, on the other hand, always circumvents the
> hypervisor,
> >>>> >>>>>>> as far as I
> >>>> >>>>>>> know.
> >>>> >>>>>>>
> >>>> >>>>>>>
> >>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
> >>>> >>>>>>> <sh...@gmail.com>
> >>>> >>>>>>> wrote:
> >>>> >>>>>>>>
> >>>> >>>>>>>> Better to wire up the lun directly to the vm unless there is
> a
> >>>> >>>>>>>> good
> >>>> >>>>>>>> reason not to.
> >>>> >>>>>>>>
> >>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen"
> >>>> >>>>>>>> <sh...@gmail.com>
> >>>> >>>>>>>> wrote:
> >>>> >>>>>>>>>
> >>>> >>>>>>>>> You could do that, but as mentioned I think it's a mistake to
> >>>> >>>>>>>>> go to
> >>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
> >>>> >>>>>>>>> and then putting
> >>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
> >>>> >>>>>>>>> even RAW disk
> >>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops along
> the
> >>>> >>>>>>>>> way, and have
> >>>> >>>>>>>>> more overhead with the filesystem and its journaling, etc.
> >>>> >>>>>>>>>
> >>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> >>>> >>>>>>>>> <mi...@solidfire.com> wrote:
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
> >>>> >>>>>>>>>> CS.
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today is by
> >>>> >>>>>>>>>> selecting SharedMountPoint and specifying the location of
> the
> >>>> >>>>>>>>>> share.
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> They can set up their share using Open iSCSI by discovering
> >>>> >>>>>>>>>> their
> >>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere
> on
> >>>> >>>>>>>>>> their file
> >>>> >>>>>>>>>> system.
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> Would it make sense for me to just do that discovery,
> logging
> >>>> >>>>>>>>>> in,
> >>>> >>>>>>>>>> and mounting behind the scenes for them and letting the
> >>>> >>>>>>>>>> current code manage
> >>>> >>>>>>>>>> the rest as it currently does?
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> >>>> >>>>>>>>>> <sh...@gmail.com> wrote:
> >>>> >>>>>>>>>>>
> >>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
> >>>> >>>>>>>>>>> catch up
> >>>> >>>>>>>>>>> on the work done in KVM, but this is basically just disk
> >>>> >>>>>>>>>>> snapshots + memory
> >>>> >>>>>>>>>>> dump. I still think disk snapshots would preferably be
> >>>> >>>>>>>>>>> handled by the SAN,
> >>>> >>>>>>>>>>> and then memory dumps can go to secondary storage or
> >>>> >>>>>>>>>>> something else. This is
> >>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will want to
> >>>> >>>>>>>>>>> see how others are
> >>>> >>>>>>>>>>> planning theirs.
> >>>> >>>>>>>>>>>
> >>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen"
> >>>> >>>>>>>>>>> <sh...@gmail.com>
> >>>> >>>>>>>>>>> wrote:
> >>>> >>>>>>>>>>>>
> >>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi
> style
> >>>> >>>>>>>>>>>> on an
> >>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW
> format.
> >>>> >>>>>>>>>>>> Otherwise you're
> >>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
> >>>> >>>>>>>>>>>> QCOW2 disk image,
> >>>> >>>>>>>>>>>> and that seems unnecessary and a performance killer.
> >>>> >>>>>>>>>>>>
> >>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
> >>>> >>>>>>>>>>>> VM, and
> >>>> >>>>>>>>>>>> handling snapshots on the SAN side via the storage plugin
> >>>> >>>>>>>>>>>> is best. My
> >>>> >>>>>>>>>>>> impression from the storage plugin refactor was that
> there
> >>>> >>>>>>>>>>>> was a snapshot
> >>>> >>>>>>>>>>>> service that would allow the San to handle snapshots.
> >>>> >>>>>>>>>>>>
> >>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen"
> >>>> >>>>>>>>>>>> <sh...@gmail.com>
> >>>> >>>>>>>>>>>> wrote:
> >>>> >>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
> >>>> >>>>>>>>>>>>> end, if
> >>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could
> call
> >>>> >>>>>>>>>>>>> your plugin for
> >>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
> >>>> >>>>>>>>>>>>> far as space, that
> >>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
> >>>> >>>>>>>>>>>>> carve out luns from a
> >>>> >>>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
> >>>> >>>>>>>>>>>>> independent of the
> >>>> >>>>>>>>>>>>> LUN size the host sees.
> >>>> >>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> >>>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> Hey Marcus,
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt
> won't
> >>>> >>>>>>>>>>>>>> work
> >>>> >>>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
> >>>> >>>>>>>>>>>>>> VDI for
> >>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage repository
> as
> >>>> >>>>>>>>>>>>>> the volume is on.
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for
> XenServer
> >>>> >>>>>>>>>>>>>> and
> >>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
> >>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd
> >>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what the user
> >>>> >>>>>>>>>>>>>> requested for the
> >>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
> >>>> >>>>>>>>>>>>>> provisions volumes,
> >>>> >>>>>>>>>>>>>> so the space is not actually used unless it needs to
> be).
> >>>> >>>>>>>>>>>>>> The CloudStack
> >>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN volume
> until
> >>>> >>>>>>>>>>>>>> a hypervisor
> >>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on
> the
> >>>> >>>>>>>>>>>>>> SAN volume.
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no
> creation
> >>>> >>>>>>>>>>>>>> of
> >>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even
> if
> >>>> >>>>>>>>>>>>>> there were support
> >>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN per
> iSCSI
> >>>> >>>>>>>>>>>>>> target), then I
> >>>> >>>>>>>>>>>>>> don't see how using this model will work.
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
> >>>> >>>>>>>>>>>>>> works
> >>>> >>>>>>>>>>>>>> with DIR?
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> What do you think?
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> Thanks
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> >>>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >>>> >>>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
> >>>> >>>>>>>>>>>>>>> today.
> >>>> >>>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
> >>>> >>>>>>>>>>>>>>> well
> >>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> >>>> >>>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> >>>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >>>> >>>>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
> >>>> >>>>>>>>>>>>>>>> just
> >>>> >>>>>>>>>>>>>>>> acts like a
> >>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
> >>>> >>>>>>>>>>>>>>>> end-user
> >>>> >>>>>>>>>>>>>>>> is
> >>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
> >>>> >>>>>>>>>>>>>>>> hosts can
> >>>> >>>>>>>>>>>>>>>> access,
> >>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
> >>>> >>>>>>>>>>>>>>>> storage.
> >>>> >>>>>>>>>>>>>>>> It could
> >>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
> >>>> >>>>>>>>>>>>>>>> cloudstack just
> >>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM images.
> >>>> >>>>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> >>>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
> >>>> >>>>>>>>>>>>>>>> > same
> >>>> >>>>>>>>>>>>>>>> > time.
> >>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
> >>>> >>>>>>>>>>>>>>>> >
> >>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> >>>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> >>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> >>>> >>>>>>>>>>>>>>>> >>
> >>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> >>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
> >>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
> >>>> >>>>>>>>>>>>>>>> >> default              active     yes
> >>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
> >>>> >>>>>>>>>>>>>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>
> >>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> >>>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
> >>>> >>>>>>>>>>>>>>>> >>> based on
> >>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
> >>>> >>>>>>>>>>>>>>>> >>> LUN, so
> >>>> >>>>>>>>>>>>>>>> >>> there would only
> >>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
> >>>> >>>>>>>>>>>>>>>> >>> (libvirt)
> >>>> >>>>>>>>>>>>>>>> >>> storage pool.
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
> >>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that
> libvirt
> >>>> >>>>>>>>>>>>>>>> >>> does
> >>>> >>>>>>>>>>>>>>>> >>> not support
> >>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
> >>>> >>>>>>>>>>>>>>>> >>> libvirt
> >>>> >>>>>>>>>>>>>>>> >>> supports
> >>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
> >>>> >>>>>>>>>>>>>>>> >>> since
> >>>> >>>>>>>>>>>>>>>> >>> each one of its
> >>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> >>>> >>>>>>>>>>>>>>>> >>> targets/LUNs).
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
> >>>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> >>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> >>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>         }
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>         @Override
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>         }
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>     }
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
> >>>> >>>>>>>>>>>>>>>> >>>> being
> >>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
> >>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when
> someone
> >>>> >>>>>>>>>>>>>>>> >>>> selects the
> >>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI,
> is
> >>>> >>>>>>>>>>>>>>>> >>>> that
> >>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
> >>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>> Thanks!
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
> >>>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> >>>> >>>>>>>>>>>>>>>> >>>> wrote:
> >>>> >>>>>>>>>>>>>>>> >>>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
> >>>> >>>>>>>>>>>>>>>> >>>>>
> >>>> >>>>>>>>>>>>>>>> >>>>>
> >>>> >>>>>>>>>>>>>>>> >>>>>
> http://libvirt.org/storage.html#StorageBackendISCSI
> >>>> >>>>>>>>>>>>>>>> >>>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
> >>>> >>>>>>>>>>>>>>>> >>>>> server, and
> >>>> >>>>>>>>>>>>>>>> >>>>> cannot be
> >>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe
> >>>> >>>>>>>>>>>>>>>> >>>>> your
> >>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
> >>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging
> in
> >>>> >>>>>>>>>>>>>>>> >>>>> and
> >>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
> >>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in
> >>>> >>>>>>>>>>>>>>>> >>>>> the Xen
> >>>> >>>>>>>>>>>>>>>> >>>>> stuff).
> >>>> >>>>>>>>>>>>>>>> >>>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this
> provides
> >>>> >>>>>>>>>>>>>>>> >>>>> a 1:1
> >>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
> >>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device
> as
> >>>> >>>>>>>>>>>>>>>> >>>>> a
> >>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
> >>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
> >>>> >>>>>>>>>>>>>>>> >>>>> about
> >>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
> >>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your
> >>>> >>>>>>>>>>>>>>>> >>>>> own
> >>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
> >>>> >>>>>>>>>>>>>>>> >>>>> rather than changing
> LibvirtStorageAdaptor.java.
> >>>> >>>>>>>>>>>>>>>> >>>>> We
> >>>> >>>>>>>>>>>>>>>> >>>>> can cross that
> >>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
> >>>> >>>>>>>>>>>>>>>> >>>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the
> java
> >>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
> >>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/
> >>>> >>>>>>>>>>>>>>>> >>>>> Normally,
> >>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
> >>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
> >>>> >>>>>>>>>>>>>>>> >>>>> that
> >>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
> >>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see
> how
> >>>> >>>>>>>>>>>>>>>> >>>>> that
> >>>> >>>>>>>>>>>>>>>> >>>>> is done for
> >>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test
> java
> >>>> >>>>>>>>>>>>>>>> >>>>> code
> >>>> >>>>>>>>>>>>>>>> >>>>> to see if you
> >>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
> >>>> >>>>>>>>>>>>>>>> >>>>> storage
> >>>> >>>>>>>>>>>>>>>> >>>>> pools before you
> >>>> >>>>>>>>>>>>>>>> >>>>> get started.
> >>>> >>>>>>>>>>>>>>>> >>>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
> >>>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> >>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt
> more,
> >>>> >>>>>>>>>>>>>>>> >>>>> > but
> >>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
> >>>> >>>>>>>>>>>>>>>> >>>>> > supports
> >>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI
> targets,
> >>>> >>>>>>>>>>>>>>>> >>>>> > right?
> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike
> Tutkowski
> >>>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
> >>>> >>>>>>>>>>>>>>>> >>>>> >> classes
> >>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
> >>>> >>>>>>>>>>>>>>>> >>>>> >> last
> >>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
> >>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen
> >>>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> >>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the
> iscsi
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard
> packages
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> for
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the
> initiator
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> login.
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> and
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2
> release
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected
> the
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> would
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they
> could
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with
> KVM.
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire
> Inc.
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> >>>> >>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the
> cloud™
> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>>>> >> --
> >>>> >>>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> >>>> >>>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>>> >>>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> >>>> >>>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> >>>> >>>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >>>> >>>>>>>>>>>>>>>> >>>>> >
> >>>> >>>>>>>>>>>>>>>> >>>>> > --
> >>>> >>>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> >>>> >>>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
> >>>> >>>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> >>>> >>>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> >>>> >>>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>>
> >>>> >>>>>>>>>>>>>>>> >>>> --
> >>>> >>>>>>>>>>>>>>>> >>>> Mike Tutkowski
> >>>> >>>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> >>>> >>>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> >>>> >>>>>>>>>>>>>>>> >>>> o: 303.746.7302
> >>>> >>>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>>
> >>>> >>>>>>>>>>>>>>>> >>> --
> >>>> >>>>>>>>>>>>>>>> >>> Mike Tutkowski
> >>>> >>>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> >>>> >>>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> >>>> >>>>>>>>>>>>>>>> >>> o: 303.746.7302
> >>>> >>>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> >>>> >>>>>>>>>>>>>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>
> >>>> >>>>>>>>>>>>>>>> >>
> >>>> >>>>>>>>>>>>>>>> >> --
> >>>> >>>>>>>>>>>>>>>> >> Mike Tutkowski
> >>>> >>>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>>> >>>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> >>>> >>>>>>>>>>>>>>>> >> o: 303.746.7302
> >>>> >>>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> >>>> >>>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>> --
> >>>> >>>>>>>>>>>>>>> Mike Tutkowski
> >>>> >>>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>> >>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>> >>>>>>>>>>>>>>> o: 303.746.7302
> >>>> >>>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>>
> >>>> >>>>>>>>>>>>>> --
> >>>> >>>>>>>>>>>>>> Mike Tutkowski
> >>>> >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>> >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>> >>>>>>>>>>>>>> o: 303.746.7302
> >>>> >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>>
> >>>> >>>>>>>>>> --
> >>>> >>>>>>>>>> Mike Tutkowski
> >>>> >>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>> >>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>> >>>>>>>>>> o: 303.746.7302
> >>>> >>>>>>>>>> Advancing the way the world uses the cloud™
> >>>> >>>>>>>
> >>>> >>>>>>>
> >>>> >>>>>>>
> >>>> >>>>>>>
> >>>> >>>>>>> --
> >>>> >>>>>>> Mike Tutkowski
> >>>> >>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>> >>>>>>> e: mike.tutkowski@solidfire.com
> >>>> >>>>>>> o: 303.746.7302
> >>>> >>>>>>> Advancing the way the world uses the cloud™
> >>>> >>>>
> >>>> >>>>
> >>>> >>>>
> >>>> >>>>
> >>>> >>>> --
> >>>> >>>> Mike Tutkowski
> >>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> >>>> >>>> e: mike.tutkowski@solidfire.com
> >>>> >>>> o: 303.746.7302
> >>>> >>>> Advancing the way the world uses the cloud™
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >>
> >>>> >> --
> >>>> >> Mike Tutkowski
> >>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>>> >> e: mike.tutkowski@solidfire.com
> >>>> >> o: 303.746.7302
> >>>> >> Advancing the way the world uses the cloud™
> >>>
> >>>
> >>>
> >>>
> >>> --
> >>> Mike Tutkowski
> >>> Senior CloudStack Developer, SolidFire Inc.
> >>> e: mike.tutkowski@solidfire.com
> >>> o: 303.746.7302
> >>> Advancing the way the world uses the cloud™
> >>
> >>
> >>
> >>
> >> --
> >> Mike Tutkowski
> >> Senior CloudStack Developer, SolidFire Inc.
> >> e: mike.tutkowski@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the cloud™
> >
> >
> >
> >
> > --
> > Mike Tutkowski
> > Senior CloudStack Developer, SolidFire Inc.
> > e: mike.tutkowski@solidfire.com
> > o: 303.746.7302
> > Advancing the way the world uses the cloud™
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Yes, see my previous email from the 13th. You can create your own
KVMStoragePool class, and StorageAdaptor class, like the libvirt ones
have. The previous email outlines how to add your own StorageAdaptor
alongside LibvirtStorageAdaptor to take over all of the calls
(createStoragePool, getStoragePool, etc). As mentioned,
getPhysicalDisk I believe will be the one you use to actually attach a
lun.

Ignore CreateStoragePoolCommand. When the agent connects to the
management server, the management server registers all of the cluster's
storage pools with the agent by calling ModifyStoragePoolCommand, passing your storage
pool object (with all of the settings for your SAN). This in turn
calls _storagePoolMgr.createStoragePool, which will route through
KVMStoragePoolManager to your storage adapter that you've registered.
The last argument to createStoragePool is the pool type, which is used
to select a StorageAdaptor.
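
For example, registering the adaptor is just another entry in that map; a
one-line sketch (the adaptor class name is made up, and StoragePoolType.Iscsi
is my guess at the map key, so check the real map in KVMStoragePoolManager):

    // hypothetical registration next to the libvirt adaptor
    _storageMapper.put(StoragePoolType.Iscsi, new SolidFireStorageAdaptor());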

From then on, most calls will only pass the volume info, and the
volume will have the uuid of the storage pool. For this reason, your
adaptor class needs to have a static Map variable that contains pool
uuid and pool object. Whenever they call createStoragePool on your
adaptor you add that pool to the map so that subsequent volume calls
can look up the pool details for the volume by pool uuid. With the
Libvirt adaptor, libvirt keeps track of that for you.
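
To make that concrete, here's a rough skeleton of the sort of adaptor I mean
(class names, the pool-type argument, and method signatures are only
illustrative; the real StorageAdaptor interface in the agent code is what
you'd implement, and imports plus the remaining interface methods are
omitted):

    public class SolidFireStorageAdaptor implements StorageAdaptor {

        // pools registered via createStoragePool, keyed by pool uuid; libvirt
        // remembers this for LibvirtStorageAdaptor, but here we track it
        // ourselves so later volume calls can find their pool by uuid
        private static final Map<String, KVMStoragePool> _poolMap =
                new ConcurrentHashMap<String, KVMStoragePool>();

        public KVMStoragePool createStoragePool(String uuid, String host,
                int port, String path, StoragePoolType type) {
            KVMStoragePool pool = _poolMap.get(uuid);
            if (pool == null) {
                // log in to (or confirm the login to) the iSCSI target here
                pool = new SolidFireStoragePool(uuid, host, port, path);
                _poolMap.put(uuid, pool);
            }
            return pool;
        }

        public KVMStoragePool getStoragePool(String uuid) {
            // subsequent calls only carry the pool uuid, so look it up here
            return _poolMap.get(uuid);
        }

        public KVMPhysicalDisk getPhysicalDisk(String volumeUuid,
                KVMStoragePool pool) {
            // this is where the individual LUN would be attached and wrapped
            // as a block-device-backed disk for the XML disk definition
            return null; // sketch only
        }
    }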

When createStoragePool is called, you can log into the iscsi target
(or make sure you are already logged in, as it can be called over
again at any time), and when attach volume commands are fired off, you
can attach individual LUNs that are asked for, or rescan (say that the
plugin created a new ACL just prior to calling attach), or whatever is
necessary.
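
The login piece itself is just the standard Open-iSCSI sequence; a minimal
sketch of what the adaptor could shell out to (the helper name is made up
and error handling is omitted):

    private void loginToIscsiTarget(String host, int port, String iqn)
            throws Exception {
        // discover the targets exposed by the SAN portal
        new ProcessBuilder("iscsiadm", "-m", "discovery", "-t", "sendtargets",
                "-p", host + ":" + port).inheritIO().start().waitFor();
        // log in to the specific target; the LUN then shows up on the host as
        // a block device (e.g. under /dev/disk/by-path/)
        new ProcessBuilder("iscsiadm", "-m", "node", "-T", iqn,
                "-p", host + ":" + port, "--login").inheritIO().start().waitFor();
    }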

KVM is a bit more work, but you can do anything you want. Actually, I
think you can call host scripts with Xen, but having the agent there
that runs your own code gives you the flexibility to do whatever.

On Sun, Sep 15, 2013 at 10:44 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> I see right now LibvirtComputingResource.java has the following method that
> I might be able to leverage (it's probably not called at present and would
> need to be implemented in my case to discover my iSCSI target and log in to
> it):
>
>     protected Answer execute(CreateStoragePoolCommand cmd) {
>
>         return new Answer(cmd, true, "success");
>
>     }
>
> I would probably be able to call the KVMStorageManager to have it use my
> StorageAdaptor to do what's necessary here.
>
>
>
>
> On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
>>
>> Hey Marcus,
>>
>> When I implemented support in the XenServer and VMware plug-ins for
>> "managed" storage, I started at the execute(AttachVolumeCommand) methods in
>> both plug-ins.
>>
>> The code there was changed to check the AttachVolumeCommand instance for a
>> "managed" property.
>>
>> If managed was false, the normal attach/detach logic would just run and
>> the volume would be attached or detached.
>>
>> If managed was true, new 4.2 logic would run to create (let's talk
>> XenServer here) a new SR and a new VDI inside of that SR (or to reattach an
>> existing VDI inside an existing SR, if this wasn't the first time the volume
>> was attached). If managed was true and we were detaching the volume, the SR
>> would be detached from the XenServer hosts.
>>
>> I am currently walking through the execute(AttachVolumeCommand) in
>> LibvirtComputingResource.java.
>>
>> I see how the XML is constructed to describe whether a disk should be
>> attached or detached. I also see how we call in to get a StorageAdapter (and
>> how I will likely need to write a new one of these).
>>
>> So, talking in XenServer terminology again, I was wondering if you think
>> the approach we took in 4.2 with creating and deleting SRs in the
>> execute(AttachVolumeCommand) method would work here or if there is some
>> other way I should be looking at this for KVM?
>>
>> As it is right now for KVM, storage has to be set up ahead of time.
>> Assuming this is the case, there probably isn't currently a place I can
>> easily inject my logic to discover and log in to iSCSI targets. This is why
>> we did it as needed in the execute(AttachVolumeCommand) for XenServer and
>> VMware, but I wanted to see if you have an alternative way that might be
>> better for KVM.
>>
>> One possible way to do this would be to modify VolumeManagerImpl (or
>> whatever its equivalent is in 4.3) before it issues an attach-volume command
>> to KVM to check to see if the volume is to be attached to managed storage.
>> If it is, then (before calling the attach-volume command in KVM) call the
>> create-storage-pool command in KVM (or whatever it might be called).
>>
>> Just wanted to get some of your thoughts on this.
>>
>> Thanks!
>>
>>
>> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>>>
>>> Yeah, I remember that StorageProcessor stuff being put in the codebase
>>> and having to merge my code into it in 4.2.
>>>
>>> Thanks for all the details, Marcus! :)
>>>
>>> I can start digging into what you were talking about now.
>>>
>>>
>>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen <sh...@gmail.com>
>>> wrote:
>>>>
>>>> Looks like things might be slightly different now in 4.2, with
>>>> KVMStorageProcessor.java in the mix. This looks more or less like some
>>>> of the commands were ripped out verbatim from LibvirtComputingResource
>>>> and placed here, so in general what I've said is probably still true,
>>>> just that the location of things like AttachVolumeCommand might be
>>>> different, in this file rather than LibvirtComputingResource.java.
>>>>
>>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <sh...@gmail.com>
>>>> wrote:
>>>> > Ok, KVM will be close to that, of course, because only the hypervisor
>>>> > classes differ, the rest is all mgmt server. Creating a volume is just
>>>> > a db entry until it's deployed for the first time. AttachVolumeCommand
>>>> > on the agent side (LibvirtStorageAdaptor.java is analogous to
>>>> > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
>>>> > StorageAdaptor) to log in the host to the target and then you have a
>>>> > block device.  Maybe libvirt will do that for you, but my quick read
>>>> > made it sound like the iscsi libvirt pool type is actually a pool, not
>>>> > a lun or volume, so you'll need to figure out if that works or if
>>>> > you'll have to use iscsiadm commands.
>>>> >
>>>> > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
>>>> > doesn't really manage your pool the way you want), you're going to
>>>> > have to create a version of KVMStoragePool class and a StorageAdaptor
>>>> > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
>>>> > implementing all of the methods, then in KVMStorageManager.java
>>>> > there's a "_storageMapper" map. This is used to select the correct
>>>> > adaptor, you can see in this file that every call first pulls the
>>>> > correct adaptor out of this map via getStorageAdaptor. So you can see
>>>> > a comment in this file that says "add other storage adaptors here",
>>>> > where it puts to this map, this is where you'd register your adaptor.
>>>> >
>>>> > So, referencing StorageAdaptor.java, createStoragePool accepts all of
>>>> > the pool data (host, port, name, path) which would be used to log the
>>>> > host into the initiator. I *believe* the method getPhysicalDisk will
>>>> > need to do the work of attaching the lun.  AttachVolumeCommand calls
>>>> > this and then creates the XML diskdef and attaches it to the VM. Now,
>>>> > one thing you need to know is that createStoragePool is called often,
>>>> > sometimes just to make sure the pool is there. You may want to create
>>>> > a map in your adaptor class and keep track of pools that have been
>>>> > created, LibvirtStorageAdaptor doesn't have to do this because it asks
>>>> > libvirt about which storage pools exist. There are also calls to
>>>> > refresh the pool stats, and all of the other calls can be seen in the
>>>> > StorageAdaptor as well. There's a createPhysical disk, clone, etc, but
>>>> > it's probably a hold-over from 4.1, as I have the vague idea that
>>>> > volumes are created on the mgmt server via the plugin now, so whatever
>>>> > doesn't apply can just be stubbed out (or optionally
>>>> > extended/reimplemented here, if you don't mind the hosts talking to
>>>> > the san api).
>>>> >
>>>> > There is a difference between attaching new volumes and launching a VM
>>>> > with existing volumes.  In the latter case, the VM definition that was
>>>> > passed to the KVM agent includes the disks, (StartCommand).
>>>> >
>>>> > I'd be interested in how your pool is defined for Xen, I imagine it
>>>> > would need to be kept the same. Is it just a definition to the SAN
>>>> > (ip address or some such, port number) and perhaps a volume pool name?
>>>> >
>>>> >> If there is a way for me to update the ACL list on the SAN to have
>>>> >> only a
>>>> >> single KVM host have access to the volume, that would be ideal.
>>>> >
>>>> > That depends on your SAN API.  I was under the impression that the
>>>> > storage plugin framework allowed for acls, or for you to do whatever
>>>> > you want for create/attach/delete/snapshot, etc. You'd just call your
>>>> > SAN API with the host info for the ACLs prior to when the disk is
>>>> > attached (or the VM is started).  I'd have to look more at the
>>>> > framework to know the details, in 4.1 I would do this in
>>>> > getPhysicalDisk just prior to connecting up the LUN.
>>>> >
>>>> >
>>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>>> > <mi...@solidfire.com> wrote:
>>>> >> OK, yeah, the ACL part will be interesting. That is a bit different
>>>> >> from how
>>>> >> it works with XenServer and VMware.
>>>> >>
>>>> >> Just to give you an idea how it works in 4.2 with XenServer:
>>>> >>
>>>> >> * The user creates a CS volume (this is just recorded in the
>>>> >> cloud.volumes
>>>> >> table).
>>>> >>
>>>> >> * The user attaches the volume as a disk to a VM for the first time
>>>> >> (if the
>>>> >> storage allocator picks the SolidFire plug-in, the storage framework
>>>> >> invokes
>>>> >> a method on the plug-in that creates a volume on the SAN...info like
>>>> >> the IQN
>>>> >> of the SAN volume is recorded in the DB).
>>>> >>
>>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
>>>> >> determines based on a flag passed in that the storage in question is
>>>> >> "CloudStack-managed" storage (as opposed to "traditional"
>>>> >> preallocated
>>>> >> storage). This tells it to discover the iSCSI target. Once discovered
>>>> >> it
>>>> >> determines if the iSCSI target already contains a storage repository
>>>> >> (it
>>>> >> would if this were a re-attach situation). If it does contain an SR
>>>> >> already,
>>>> >> then there should already be one VDI, as well. If there is no SR, an
>>>> >> SR is
>>>> >> created and a single VDI is created within it (that takes up about as
>>>> >> much
>>>> >> space as was requested for the CloudStack volume).
>>>> >>
>>>> >> * The normal attach-volume logic continues (it depends on the
>>>> >> existence of
>>>> >> an SR and a VDI).
>>>> >>
>>>> >> The VMware case is essentially the same (mainly just substitute
>>>> >> datastore
>>>> >> for SR and VMDK for VDI).
>>>> >>
>>>> >> In both cases, all hosts in the cluster have discovered the iSCSI
>>>> >> target,
>>>> >> but only the host that is currently running the VM that is using the
>>>> >> VDI (or
>>>> >> VMKD) is actually using the disk.
>>>> >>
>>>> >> Live Migration should be OK because the hypervisors communicate with
>>>> >> whatever metadata they have on the SR (or datastore).
>>>> >>
>>>> >> I see what you're saying with KVM, though.
>>>> >>
>>>> >> In that case, the hosts are clustered only in CloudStack's eyes. CS
>>>> >> controls
>>>> >> Live Migration. You don't really need a clustered filesystem on the
>>>> >> LUN. The
>>>> >> LUN could be handed over raw to the VM using it.
>>>> >>
>>>> >> If there is a way for me to update the ACL list on the SAN to have
>>>> >> only a
>>>> >> single KVM host have access to the volume, that would be ideal.
>>>> >>
>>>> >> Also, I agree I'll need to use iscsiadm to discover and log in to the
>>>> >> iSCSI
>>>> >> target. I'll also need to take the resultant new device and pass it
>>>> >> into the
>>>> >> VM.
>>>> >>
>>>> >> Does this sound reasonable? Please call me out on anything I seem
>>>> >> incorrect
>>>> >> about. :)
>>>> >>
>>>> >> Thanks for all the thought on this, Marcus!
>>>> >>
>>>> >>
>>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen
>>>> >> <sh...@gmail.com>
>>>> >> wrote:
>>>> >>>
>>>> >>> Perfect. You'll have a domain def (the VM), a disk def, and the
>>>> >>> attach
>>>> >>> the disk def to the vm. You may need to do your own StorageAdaptor
>>>> >>> and run
>>>> >>> iscsiadm commands to accomplish that, depending on how the libvirt
>>>> >>> iscsi
>>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
>>>> >>> works on
>>>> >>> xen at the moment, nor is it ideal.
>>>> >>>
>>>> >>> Your plugin will handle acls as far as which host can see which luns
>>>> >>> as
>>>> >>> well, I remember discussing that months ago, so that a disk won't be
>>>> >>> connected until the hypervisor has exclusive access, so it will be
>>>> >>> safe and
>>>> >>> fence the disk from rogue nodes that cloudstack loses connectivity
>>>> >>> with. It
>>>> >>> should revoke access to everything but the target host... Except for
>>>> >>> during
>>>> >>> migration but we can discuss that later, there's a migration prep
>>>> >>> process
>>>> >>> where the new host can be added to the acls, and the old host can be
>>>> >>> removed
>>>> >>> post migration.
>>>> >>>
>>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski"
>>>> >>> <mi...@solidfire.com>
>>>> >>> wrote:
>>>> >>>>
>>>> >>>> Yeah, that would be ideal.
>>>> >>>>
>>>> >>>> So, I would still need to discover the iSCSI target, log in to it,
>>>> >>>> then
>>>> >>>> figure out what /dev/sdX was created as a result (and leave it as
>>>> >>>> is - do
>>>> >>>> not format it with any file system...clustered or not). I would
>>>> >>>> pass that
>>>> >>>> device into the VM.
>>>> >>>>
>>>> >>>> Kind of accurate?
>>>> >>>>
>>>> >>>>
>>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen
>>>> >>>> <sh...@gmail.com>
>>>> >>>> wrote:
>>>> >>>>>
>>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
>>>> >>>>> There are
>>>> >>>>> ones that work for block devices rather than files. You can piggy
>>>> >>>>> back off
>>>> >>>>> of the existing disk definitions and attach it to the vm as a
>>>> >>>>> block device.
>>>> >>>>> The definition is an XML string per libvirt XML format. You may
>>>> >>>>> want to use
>>>> >>>>> an alternate path to the disk rather than just /dev/sdx like I
>>>> >>>>> mentioned,
>>>> >>>>> there are by-id paths to the block devices, as well as other ones
>>>> >>>>> that will
>>>> >>>>> be consistent and easier for management, not sure how familiar you
>>>> >>>>> are with
>>>> >>>>> device naming on Linux.
>>>> >>>>>
>>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>> >>>>> wrote:
>>>> >>>>>>
>>>> >>>>>> No, as that would rely on virtualized network/iscsi initiator
>>>> >>>>>> inside
>>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
>>>> >>>>>> hypervisor) as
>>>> >>>>>> a disk to the VM, rather than attaching some image file that
>>>> >>>>>> resides on a
>>>> >>>>>> filesystem, mounted on the host, living on a target.
>>>> >>>>>>
>>>> >>>>>> Actually, if you plan on the storage supporting live migration I
>>>> >>>>>> think
>>>> >>>>>> this is the only way. You can't put a filesystem on it and mount
>>>> >>>>>> it in two
>>>> >>>>>> places to facilitate migration unless its a clustered filesystem,
>>>> >>>>>> in which
>>>> >>>>>> case you're back to shared mount point.
>>>> >>>>>>
>>>> >>>>>> As far as I'm aware, the xenserver SR style is basically LVM with
>>>> >>>>>> a xen
>>>> >>>>>> specific cluster management, a custom CLVM. They don't use a
>>>> >>>>>> filesystem
>>>> >>>>>> either.
>>>> >>>>>>
>>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>>> >>>>>> <mi...@solidfire.com> wrote:
>>>> >>>>>>>
>>>> >>>>>>> When you say, "wire up the lun directly to the vm," do you mean
>>>> >>>>>>> circumventing the hypervisor? I didn't think we could do that in
>>>> >>>>>>> CS.
>>>> >>>>>>> OpenStack, on the other hand, always circumvents the hypervisor,
>>>> >>>>>>> as far as I
>>>> >>>>>>> know.
>>>> >>>>>>>
>>>> >>>>>>>
>>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen
>>>> >>>>>>> <sh...@gmail.com>
>>>> >>>>>>> wrote:
>>>> >>>>>>>>
>>>> >>>>>>>> Better to wire up the lun directly to the vm unless there is a
>>>> >>>>>>>> good
>>>> >>>>>>>> reason not to.
>>>> >>>>>>>>
>>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen"
>>>> >>>>>>>> <sh...@gmail.com>
>>>> >>>>>>>> wrote:
>>>> >>>>>>>>>
>>>> >>>>>>>>> You could do that, but as mentioned I think its a mistake to
>>>> >>>>>>>>> go to
>>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
>>>> >>>>>>>>> and then putting
>>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
>>>> >>>>>>>>> even RAW disk
>>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops along the
>>>> >>>>>>>>> way, and have
>>>> >>>>>>>>> more overhead with the filesystem and its journaling, etc.
>>>> >>>>>>>>>
>>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>>> >>>>>>>>> <mi...@solidfire.com> wrote:
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with
>>>> >>>>>>>>>> CS.
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today is by
>>>> >>>>>>>>>> selecting SharedMountPoint and specifying the location of the
>>>> >>>>>>>>>> share.
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> They can set up their share using Open iSCSI by discovering
>>>> >>>>>>>>>> their
>>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on
>>>> >>>>>>>>>> their file
>>>> >>>>>>>>>> system.
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> Would it make sense for me to just do that discovery, logging
>>>> >>>>>>>>>> in,
>>>> >>>>>>>>>> and mounting behind the scenes for them and letting the
>>>> >>>>>>>>>> current code manage
>>>> >>>>>>>>>> the rest as it currently does?
>>>> >>>>>>>>>>
>>>> >>>>>>>>>>
>>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>>>> >>>>>>>>>> <sh...@gmail.com> wrote:
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
>>>> >>>>>>>>>>> catch up
>>>> >>>>>>>>>>> on the work done in KVM, but this is basically just disk
>>>> >>>>>>>>>>> snapshots + memory
>>>> >>>>>>>>>>> dump. I still think disk snapshots would preferably be
>>>> >>>>>>>>>>> handled by the SAN,
>>>> >>>>>>>>>>> and then memory dumps can go to secondary storage or
>>>> >>>>>>>>>>> something else. This is
>>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will want to
>>>> >>>>>>>>>>> see how others are
>>>> >>>>>>>>>>> planning theirs.
>>>> >>>>>>>>>>>
>>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen"
>>>> >>>>>>>>>>> <sh...@gmail.com>
>>>> >>>>>>>>>>> wrote:
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi style
>>>> >>>>>>>>>>>> on an
>>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format.
>>>> >>>>>>>>>>>> Otherwise you're
>>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
>>>> >>>>>>>>>>>> QCOW2 disk image,
>>>> >>>>>>>>>>>> and that seems unnecessary and a performance killer.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
>>>> >>>>>>>>>>>> VM, and
>>>> >>>>>>>>>>>> handling snapshots on the San side via the storage plugin
>>>> >>>>>>>>>>>> is best. My
>>>> >>>>>>>>>>>> impression from the storage plugin refactor was that there
>>>> >>>>>>>>>>>> was a snapshot
>>>> >>>>>>>>>>>> service that would allow the San to handle snapshots.
>>>> >>>>>>>>>>>>
>>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen"
>>>> >>>>>>>>>>>> <sh...@gmail.com>
>>>> >>>>>>>>>>>> wrote:
>>>> >>>>>>>>>>>>>
>>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
>>>> >>>>>>>>>>>>> end, if
>>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call
>>>> >>>>>>>>>>>>> your plugin for
>>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
>>>> >>>>>>>>>>>>> far as space, that
>>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
>>>> >>>>>>>>>>>>> carve out luns from a
>>>> >>>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
>>>> >>>>>>>>>>>>> independent of the
>>>> >>>>>>>>>>>>> LUN size the host sees.
>>>> >>>>>>>>>>>>>
>>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>>>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> Hey Marcus,
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't
>>>> >>>>>>>>>>>>>> work
>>>> >>>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
>>>> >>>>>>>>>>>>>> VDI for
>>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage repository as
>>>> >>>>>>>>>>>>>> the volume is on.
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for XenServer
>>>> >>>>>>>>>>>>>> and
>>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>>>> >>>>>>>>>>>>>> snapshots in 4.2) is I'd
>>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what the user
>>>> >>>>>>>>>>>>>> requested for the
>>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
>>>> >>>>>>>>>>>>>> provisions volumes,
>>>> >>>>>>>>>>>>>> so the space is not actually used unless it needs to be).
>>>> >>>>>>>>>>>>>> The CloudStack
>>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN volume until
>>>> >>>>>>>>>>>>>> a hypervisor
>>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the
>>>> >>>>>>>>>>>>>> SAN volume.
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no creation
>>>> >>>>>>>>>>>>>> of
>>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if
>>>> >>>>>>>>>>>>>> there were support
>>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN per iSCSI
>>>> >>>>>>>>>>>>>> target), then I
>>>> >>>>>>>>>>>>>> don't see how using this model will work.
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
>>>> >>>>>>>>>>>>>> works
>>>> >>>>>>>>>>>>>> with DIR?
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> What do you think?
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> Thanks
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>>>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>> >>>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
>>>> >>>>>>>>>>>>>>> today.
>>>> >>>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I might as
>>>> >>>>>>>>>>>>>>> well
>>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>> >>>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>>>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>> >>>>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
>>>> >>>>>>>>>>>>>>>> just
>>>> >>>>>>>>>>>>>>>> acts like a
>>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
>>>> >>>>>>>>>>>>>>>> end-user
>>>> >>>>>>>>>>>>>>>> is
>>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
>>>> >>>>>>>>>>>>>>>> hosts can
>>>> >>>>>>>>>>>>>>>> access,
>>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
>>>> >>>>>>>>>>>>>>>> storage.
>>>> >>>>>>>>>>>>>>>> It could
>>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
>>>> >>>>>>>>>>>>>>>> cloudstack just
>>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM images.
>>>> >>>>>>>>>>>>>>>>
>>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>>>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
>>>> >>>>>>>>>>>>>>>> > same
>>>> >>>>>>>>>>>>>>>> > time.
>>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
>>>> >>>>>>>>>>>>>>>> >
>>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>>> >>>>>>>>>>>>>>>> >>
>>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
>>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
>>>> >>>>>>>>>>>>>>>> >> default              active     yes
>>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
>>>> >>>>>>>>>>>>>>>> >>
>>>> >>>>>>>>>>>>>>>> >>
>>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>> >>>>>>>>>>>>>>>> >>>
>>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>> >>>>>>>>>>>>>>>> >>>
>>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
>>>> >>>>>>>>>>>>>>>> >>>
>>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
>>>> >>>>>>>>>>>>>>>> >>> based on
>>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
>>>> >>>>>>>>>>>>>>>> >>>
>>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
>>>> >>>>>>>>>>>>>>>> >>> LUN, so
>>>> >>>>>>>>>>>>>>>> >>> there would only
>>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>>>> >>>>>>>>>>>>>>>> >>> (libvirt)
>>>> >>>>>>>>>>>>>>>> >>> storage pool.
>>>> >>>>>>>>>>>>>>>> >>>
>>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
>>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt
>>>> >>>>>>>>>>>>>>>> >>> does
>>>> >>>>>>>>>>>>>>>> >>> not support
>>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>> >>>>>>>>>>>>>>>> >>>
>>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
>>>> >>>>>>>>>>>>>>>> >>> libvirt
>>>> >>>>>>>>>>>>>>>> >>> supports
>>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
>>>> >>>>>>>>>>>>>>>> >>> since
>>>> >>>>>>>>>>>>>>>> >>> each one of its
>>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>>>> >>>>>>>>>>>>>>>> >>> targets/LUNs).
>>>> >>>>>>>>>>>>>>>> >>>
>>>> >>>>>>>>>>>>>>>> >>>
>>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>         }
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>         @Override
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>         }
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>     }
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
>>>> >>>>>>>>>>>>>>>> >>>> being
>>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
>>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
>>>> >>>>>>>>>>>>>>>> >>>> selects the
>>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is
>>>> >>>>>>>>>>>>>>>> >>>> that
>>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
>>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>> Thanks!
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
>>>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>>>> >>>>>>>>>>>>>>>> >>>> wrote:
>>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
>>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
>>>> >>>>>>>>>>>>>>>> >>>>> server, and
>>>> >>>>>>>>>>>>>>>> >>>>> cannot be
>>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe
>>>> >>>>>>>>>>>>>>>> >>>>> your
>>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
>>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in
>>>> >>>>>>>>>>>>>>>> >>>>> and
>>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
>>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in
>>>> >>>>>>>>>>>>>>>> >>>>> the Xen
>>>> >>>>>>>>>>>>>>>> >>>>> stuff).
>>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides
>>>> >>>>>>>>>>>>>>>> >>>>> a 1:1
>>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
>>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as
>>>> >>>>>>>>>>>>>>>> >>>>> a
>>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
>>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
>>>> >>>>>>>>>>>>>>>> >>>>> about
>>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
>>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your
>>>> >>>>>>>>>>>>>>>> >>>>> own
>>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
>>>> >>>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.
>>>> >>>>>>>>>>>>>>>> >>>>> We
>>>> >>>>>>>>>>>>>>>> >>>>> can cross that
>>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
>>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
>>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/
>>>> >>>>>>>>>>>>>>>> >>>>> Normally,
>>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
>>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to
>>>> >>>>>>>>>>>>>>>> >>>>> that
>>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how
>>>> >>>>>>>>>>>>>>>> >>>>> that
>>>> >>>>>>>>>>>>>>>> >>>>> is done for
>>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java
>>>> >>>>>>>>>>>>>>>> >>>>> code
>>>> >>>>>>>>>>>>>>>> >>>>> to see if you
>>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
>>>> >>>>>>>>>>>>>>>> >>>>> storage
>>>> >>>>>>>>>>>>>>>> >>>>> pools before you
>>>> >>>>>>>>>>>>>>>> >>>>> get started.
>>>> >>>>>>>>>>>>>>>> >>>>>
>>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more,
>>>> >>>>>>>>>>>>>>>> >>>>> > but
>>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
>>>> >>>>>>>>>>>>>>>> >>>>> > supports
>>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets,
>>>> >>>>>>>>>>>>>>>> >>>>> > right?
>>>> >>>>>>>>>>>>>>>> >>>>> >
>>>> >>>>>>>>>>>>>>>> >>>>> >
>>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
>>>> >>>>>>>>>>>>>>>> >>>>> >> classes
>>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
>>>> >>>>>>>>>>>>>>>> >>>>> >> last
>>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
>>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >>>>>>>>>>>>>>>> >>>>> >>
>>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
>>>> >>>>>>>>>>>>>>>> >>>>> >> Sorensen
>>>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
>>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
>>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages
>>>> >>>>>>>>>>>>>>>> >>>>> >>> for
>>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
>>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator
>>>> >>>>>>>>>>>>>>>> >>>>> >>> login.
>>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
>>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
>>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java
>>>> >>>>>>>>>>>>>>>> >>>>> >>> and
>>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
>>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> I
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> framework
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> delete
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> mapping
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> admin
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> would
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> needed to
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> work on
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> will need
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> expect
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> this to
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
>>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>>> >>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
I see that LibvirtComputingResource.java currently has the following method,
which I might be able to leverage (it's probably not called at present and
would need to be implemented in my case to discover my iSCSI target and log
in to it):

    protected Answer execute(CreateStoragePoolCommand cmd) {
        return new Answer(cmd, true, "success");
    }

I would probably be able to call the KVMStorageManager to have it use my
StorageAdaptor to do what's necessary here.
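
Something along these lines, perhaps (just a sketch; the KVMStorageManager and
StorageFilerTO calls are from memory, so the exact names and signatures are
assumptions on my part rather than working code):

    protected Answer execute(CreateStoragePoolCommand cmd) {
        try {
            // Assumption: cmd.getPool() carries the pool info (host, port,
            // path/IQN) my StorageAdaptor needs to discover and log in to
            // the iSCSI target.
            StorageFilerTO pool = cmd.getPool();

            // Assumption: the storage manager routes this to my adaptor's
            // createStoragePool(), which would do the iscsiadm discovery/login.
            _storagePoolMgr.createStoragePool(pool.getUuid(), pool.getHost(),
                    pool.getPort(), pool.getPath(), pool.getUserInfo(),
                    pool.getType());

            return new Answer(cmd, true, "success");
        } catch (Exception e) {
            return new Answer(cmd, false, e.toString());
        }
    }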




On Sun, Sep 15, 2013 at 10:37 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Hey Marcus,
>
> When I implemented support in the XenServer and VMware plug-ins for
> "managed" storage, I started at the execute(AttachVolumeCommand) methods in
> both plug-ins.
>
> The code there was changed to check the AttachVolumeCommand instance for a
> "managed" property.
>
> If managed was false, the normal attach/detach logic would just run and
> the volume would be attached or detached.
>
> If managed was true, new 4.2 logic would run to create (let's talk
> XenServer here) a new SR and a new VDI inside of that SR (or to reattach an
> existing VDI inside an existing SR, if this wasn't the first time the
> volume was attached). If managed was true and we were detaching the volume,
> the SR would be detached from the XenServer hosts.
>
> I am currently walking through the execute(AttachVolumeCommand) in
> LibvirtComputingResource.java.
>
> I see how the XML is constructed to describe whether a disk should be
> attached or detached. I also see how we call in to get a StorageAdaptor
> (and how I will likely need to write a new one of these).
>
> So, talking in XenServer terminology again, I was wondering if you think
> the approach we took in 4.2 with creating and deleting SRs in the
> execute(AttachVolumeCommand) method would work here or if there is some
> other way I should be looking at this for KVM?
>
> As it is right now for KVM, storage has to be set up ahead of time.
> Assuming this is the case, there probably isn't currently a place I can
> easily inject my logic to discover and log in to iSCSI targets. This is why
> we did it as needed in the execute(AttachVolumeCommand) for XenServer and
> VMware, but I wanted to see if you have an alternative way that might be
> better for KVM.
>
> One possible way to do this would be to modify VolumeManagerImpl (or
> whatever its equivalent is in 4.3) before it issues an attach-volume
> command to KVM to check to see if the volume is to be attached to managed
> storage. If it is, then (before calling the attach-volume command in KVM)
> call the create-storage-pool command in KVM (or whatever it might be
> called).
>
> Just wanted to get some of your thoughts on this.
>
> Thanks!
>
>
> On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> Yeah, I remember that StorageProcessor stuff being put in the codebase
>> and having to merge my code into it in 4.2.
>>
>> Thanks for all the details, Marcus! :)
>>
>> I can start digging into what you were talking about now.
>>
>>
>> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> Looks like things might be slightly different now in 4.2, with
>>> KVMStorageProcessor.java in the mix. This looks more or less like some
>>> of the commands were ripped out verbatim from LibvirtComputingResource
>>> and placed here, so in general what I've said is probably still true,
>>> just that the location of things like AttachVolumeCommand might be
>>> different, in this file rather than LibvirtComputingResource.java.
>>>
>>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <sh...@gmail.com>
>>> wrote:
>>> > Ok, KVM will be close to that, of course, because only the hypervisor
>>> > classes differ, the rest is all mgmt server. Creating a volume is just
>>> > a db entry until it's deployed for the first time. AttachVolumeCommand
>>> > on the agent side (LibvirtStorageAdaptor.java is analogous to
>>> > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
>>> > StorageAdaptor) to log in the host to the target and then you have a
>>> > block device.  Maybe libvirt will do that for you, but my quick read
>>> > made it sound like the iscsi libvirt pool type is actually a pool, not
>>> > a lun or volume, so you'll need to figure out if that works or if
>>> > you'll have to use iscsiadm commands.
>>> >
>>> > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
>>> > doesn't really manage your pool the way you want), you're going to
>>> > have to create a version of KVMStoragePool class and a StorageAdaptor
>>> > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
>>> > implementing all of the methods, then in KVMStorageManager.java
>>> > there's a "_storageMapper" map. This is used to select the correct
>>> > adaptor, you can see in this file that every call first pulls the
>>> > correct adaptor out of this map via getStorageAdaptor. So you can see
>>> > a comment in this file that says "add other storage adaptors here",
>>> > where it puts to this map, this is where you'd register your adaptor.
>>> >
>>> > So, referencing StorageAdaptor.java, createStoragePool accepts all of
>>> > the pool data (host, port, name, path) which would be used to log the
>>> > host into the initiator. I *believe* the method getPhysicalDisk will
>>> > need to do the work of attaching the lun.  AttachVolumeCommand calls
>>> > this and then creates the XML diskdef and attaches it to the VM. Now,
>>> > one thing you need to know is that createStoragePool is called often,
>>> > sometimes just to make sure the pool is there. You may want to create
>>> > a map in your adaptor class and keep track of pools that have been
>>> > created, LibvirtStorageAdaptor doesn't have to do this because it asks
>>> > libvirt about which storage pools exist. There are also calls to
>>> > refresh the pool stats, and all of the other calls can be seen in the
>>> > StorageAdaptor as well. There's a createPhysical disk, clone, etc, but
>>> > it's probably a hold-over from 4.1, as I have the vague idea that
>>> > volumes are created on the mgmt server via the plugin now, so whatever
>>> > doesn't apply can just be stubbed out (or optionally
>>> > extended/reimplemented here, if you don't mind the hosts talking to
>>> > the san api).
>>> >
>>> > There is a difference between attaching new volumes and launching a VM
>>> > with existing volumes.  In the latter case, the VM definition that was
>>> > passed to the KVM agent includes the disks, (StartCommand).
>>> >
>>> > I'd be interested in how your pool is defined for Xen, I imagine it
>>> > would need to be kept the same. Is it just a definition to the SAN
>>> > (ip address or some such, port number) and perhaps a volume pool name?
>>> >
>>> >> If there is a way for me to update the ACL list on the SAN to have
>>> only a
>>> >> single KVM host have access to the volume, that would be ideal.
>>> >
>>> > That depends on your SAN API.  I was under the impression that the
>>> > storage plugin framework allowed for acls, or for you to do whatever
>>> > you want for create/attach/delete/snapshot, etc. You'd just call your
>>> > SAN API with the host info for the ACLs prior to when the disk is
>>> > attached (or the VM is started).  I'd have to look more at the
>>> > framework to know the details, in 4.1 I would do this in
>>> > getPhysicalDisk just prior to connecting up the LUN.
>>> >
>>> >
>>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>>> > <mi...@solidfire.com> wrote:
>>> >> OK, yeah, the ACL part will be interesting. That is a bit different
>>> from how
>>> >> it works with XenServer and VMware.
>>> >>
>>> >> Just to give you an idea how it works in 4.2 with XenServer:
>>> >>
>>> >> * The user creates a CS volume (this is just recorded in the
>>> cloud.volumes
>>> >> table).
>>> >>
>>> >> * The user attaches the volume as a disk to a VM for the first time
>>> (if the
>>> >> storage allocator picks the SolidFire plug-in, the storage framework
>>> invokes
>>> >> a method on the plug-in that creates a volume on the SAN...info like
>>> the IQN
>>> >> of the SAN volume is recorded in the DB).
>>> >>
>>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
>>> >> determines based on a flag passed in that the storage in question is
>>> >> "CloudStack-managed" storage (as opposed to "traditional" preallocated
>>> >> storage). This tells it to discover the iSCSI target. Once discovered
>>> it
>>> >> determines if the iSCSI target already contains a storage repository
>>> (it
>>> >> would if this were a re-attach situation). If it does contain an SR
>>> already,
>>> >> then there should already be one VDI, as well. If there is no SR, an
>>> SR is
>>> >> created and a single VDI is created within it (that takes up about as
>>> much
>>> >> space as was requested for the CloudStack volume).
>>> >>
>>> >> * The normal attach-volume logic continues (it depends on the
>>> existence of
>>> >> an SR and a VDI).
>>> >>
>>> >> The VMware case is essentially the same (mainly just substitute
>>> datastore
>>> >> for SR and VMDK for VDI).
>>> >>
>>> >> In both cases, all hosts in the cluster have discovered the iSCSI
>>> target,
>>> >> but only the host that is currently running the VM that is using the
>>> VDI (or
>>> >> VMDK) is actually using the disk.
>>> >>
>>> >> Live Migration should be OK because the hypervisors communicate with
>>> >> whatever metadata they have on the SR (or datastore).
>>> >>
>>> >> I see what you're saying with KVM, though.
>>> >>
>>> >> In that case, the hosts are clustered only in CloudStack's eyes. CS
>>> controls
>>> >> Live Migration. You don't really need a clustered filesystem on the
>>> LUN. The
>>> >> LUN could be handed over raw to the VM using it.
>>> >>
>>> >> If there is a way for me to update the ACL list on the SAN to have
>>> only a
>>> >> single KVM host have access to the volume, that would be ideal.
>>> >>
>>> >> Also, I agree I'll need to use iscsiadm to discover and log in to the
>>> iSCSI
>>> >> target. I'll also need to take the resultant new device and pass it
>>> into the
>>> >> VM.
>>> >>
>>> >> Does this sound reasonable? Please call me out on anything I seem
>>> incorrect
>>> >> about. :)
>>> >>
>>> >> Thanks for all the thought on this, Marcus!
>>> >>
>>> >>
>>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <shadowsor@gmail.com
>>> >
>>> >> wrote:
>>> >>>
>>> >>> Perfect. You'll have a domain def ( the VM), a disk def, and the
>>> attach
>>> >>> the disk def to the vm. You may need to do your own StorageAdaptor
>>> and run
>>> >>> iscsiadm commands to accomplish that, depending on how the libvirt
>>> iscsi
>>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
>>> works on
>>> >>> xen at the moment, nor is it ideal.
>>> >>>
>>> >>> Your plugin will handle acls as far as which host can see which luns
>>> as
>>> >>> well, I remember discussing that months ago, so that a disk won't be
>>> >>> connected until the hypervisor has exclusive access, so it will be
>>> safe and
>>> >>> fence the disk from rogue nodes that cloudstack loses connectivity
>>> with. It
>>> >>> should revoke access to everything but the target host... Except for
>>> during
>>> >>> migration but we can discuss that later, there's a migration prep
>>> process
>>> >>> where the new host can be added to the acls, and the old host can be
>>> removed
>>> >>> post migration.
>>> >>>
>>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>>> mike.tutkowski@solidfire.com>
>>> >>> wrote:
>>> >>>>
>>> >>>> Yeah, that would be ideal.
>>> >>>>
>>> >>>> So, I would still need to discover the iSCSI target, log in to it,
>>> then
>>> >>>> figure out what /dev/sdX was created as a result (and leave it as
>>> is - do
>>> >>>> not format it with any file system...clustered or not). I would
>>> pass that
>>> >>>> device into the VM.
>>> >>>>
>>> >>>> Kind of accurate?
>>> >>>>
>>> >>>>
>>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
>>> shadowsor@gmail.com>
>>> >>>> wrote:
>>> >>>>>
>>> >>>>> Look in LibvirtVMDef.java (I think) for the disk definitions.
>>> There are
>>> >>>>> ones that work for block devices rather than files. You can piggy
>>> back off
>>> >>>>> of the existing disk definitions and attach it to the vm as a
>>> block device.
>>> >>>>> The definition is an XML string per libvirt XML format. You may
>>> want to use
>>> >>>>> an alternate path to the disk rather than just /dev/sdx like I
>>> mentioned,
>>> >>>>> there are by-id paths to the block devices, as well as other ones
>>> that will
>>> >>>>> be consistent and easier for management, not sure how familiar you
>>> are with
>>> >>>>> device naming on Linux.
>>> >>>>>
>>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com>
>>> wrote:
>>> >>>>>>
>>> >>>>>> No, as that would rely on virtualized network/iscsi initiator
>>> inside
>>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
>>> hypervisor) as
>>> >>>>>> a disk to the VM, rather than attaching some image file that
>>> resides on a
>>> >>>>>> filesystem, mounted on the host, living on a target.
>>> >>>>>>
>>> >>>>>> Actually, if you plan on the storage supporting live migration I
>>> think
>>> >>>>>> this is the only way. You can't put a filesystem on it and mount
>>> it in two
>>> >>>>>> places to facilitate migration unless its a clustered filesystem,
>>> in which
>>> >>>>>> case you're back to shared mount point.
>>> >>>>>>
>>> >>>>>> As far as I'm aware, the xenserver SR style is basically LVM with
>>> a xen
>>> >>>>>> specific cluster management, a custom CLVM. They don't use a
>>> filesystem
>>> >>>>>> either.
>>> >>>>>>
>>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>> >>>>>> <mi...@solidfire.com> wrote:
>>> >>>>>>>
>>> >>>>>>> When you say, "wire up the lun directly to the vm," do you mean
>>> >>>>>>> circumventing the hypervisor? I didn't think we could do that in
>>> CS.
>>> >>>>>>> OpenStack, on the other hand, always circumvents the hypervisor,
>>> as far as I
>>> >>>>>>> know.
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
>>> shadowsor@gmail.com>
>>> >>>>>>> wrote:
>>> >>>>>>>>
>>> >>>>>>>> Better to wire up the lun directly to the vm unless there is a
>>> good
>>> >>>>>>>> reason not to.
>>> >>>>>>>>
>>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <shadowsor@gmail.com
>>> >
>>> >>>>>>>> wrote:
>>> >>>>>>>>>
>>> >>>>>>>>> You could do that, but as mentioned I think its a mistake to
>>> go to
>>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns
>>> and then putting
>>> >>>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
>>> even RAW disk
>>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops along the
>>> way, and have
>>> >>>>>>>>> more overhead with the filesystem and its journaling, etc.
>>> >>>>>>>>>
>>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>> >>>>>>>>> <mi...@solidfire.com> wrote:
>>> >>>>>>>>>>
>>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>>> >>>>>>>>>>
>>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today is by
>>> >>>>>>>>>> selecting SharedMountPoint and specifying the location of the
>>> share.
>>> >>>>>>>>>>
>>> >>>>>>>>>> They can set up their share using Open iSCSI by discovering
>>> their
>>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on
>>> their file
>>> >>>>>>>>>> system.
>>> >>>>>>>>>>
>>> >>>>>>>>>> Would it make sense for me to just do that discovery, logging
>>> in,
>>> >>>>>>>>>> and mounting behind the scenes for them and letting the
>>> current code manage
>>> >>>>>>>>>> the rest as it currently does?
>>> >>>>>>>>>>
>>> >>>>>>>>>>
>>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>>> >>>>>>>>>> <sh...@gmail.com> wrote:
>>> >>>>>>>>>>>
>>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to
>>> catch up
>>> >>>>>>>>>>> on the work done in KVM, but this is basically just disk
>>> snapshots + memory
>>> >>>>>>>>>>> dump. I still think disk snapshots would preferably be
>>> handled by the SAN,
>>> >>>>>>>>>>> and then memory dumps can go to secondary storage or
>>> something else. This is
>>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will want to
>>> see how others are
>>> >>>>>>>>>>> planning theirs.
>>> >>>>>>>>>>>
>>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
>>> shadowsor@gmail.com>
>>> >>>>>>>>>>> wrote:
>>> >>>>>>>>>>>>
>>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi style
>>> on an
>>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format.
>>> Otherwise you're
>>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
>>> QCOW2 disk image,
>>> >>>>>>>>>>>> and that seems unnecessary and a performance killer.
>>> >>>>>>>>>>>>
>>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the
>>> VM, and
>>> >>>>>>>>>>>> handling snapshots on the San side via the storage plugin
>>> is best. My
>>> >>>>>>>>>>>> impression from the storage plugin refactor was that there
>>> was a snapshot
>>> >>>>>>>>>>>> service that would allow the San to handle snapshots.
>>> >>>>>>>>>>>>
>>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
>>> shadowsor@gmail.com>
>>> >>>>>>>>>>>> wrote:
>>> >>>>>>>>>>>>>
>>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
>>> end, if
>>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call
>>> your plugin for
>>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As
>>> far as space, that
>>> >>>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
>>> carve out luns from a
>>> >>>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
>>> independent of the
>>> >>>>>>>>>>>>> LUN size the host sees.
>>> >>>>>>>>>>>>>
>>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> Hey Marcus,
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't
>>> work
>>> >>>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the
>>> VDI for
>>> >>>>>>>>>>>>>> the snapshot is placed on the same storage repository as
>>> the volume is on.
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for XenServer
>>> and
>>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>>> snapshots in 4.2) is I'd
>>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what the user
>>> requested for the
>>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
>>> provisions volumes,
>>> >>>>>>>>>>>>>> so the space is not actually used unless it needs to be).
>>> The CloudStack
>>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN volume until
>>> a hypervisor
>>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the
>>> SAN volume.
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no creation
>>> of
>>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if
>>> there were support
>>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN per iSCSI
>>> target), then I
>>> >>>>>>>>>>>>>> don't see how using this model will work.
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
>>> works
>>> >>>>>>>>>>>>>> with DIR?
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> What do you think?
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> Thanks
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
>>> today.
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I might as well
>>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
>>> just
>>> >>>>>>>>>>>>>>>> acts like a
>>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
>>> end-user
>>> >>>>>>>>>>>>>>>> is
>>> >>>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
>>> hosts can
>>> >>>>>>>>>>>>>>>> access,
>>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
>>> storage.
>>> >>>>>>>>>>>>>>>> It could
>>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
>>> >>>>>>>>>>>>>>>> cloudstack just
>>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM images.
>>> >>>>>>>>>>>>>>>>
>>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the
>>> same
>>> >>>>>>>>>>>>>>>> > time.
>>> >>>>>>>>>>>>>>>> > Multiples, in fact.
>>> >>>>>>>>>>>>>>>> >
>>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>> >>>>>>>>>>>>>>>> >>
>>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
>>> >>>>>>>>>>>>>>>> >> -----------------------------------------
>>> >>>>>>>>>>>>>>>> >> default              active     yes
>>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
>>> >>>>>>>>>>>>>>>> >>
>>> >>>>>>>>>>>>>>>> >>
>>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>> >>>>>>>>>>>>>>>> >>>
>>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>> >>>>>>>>>>>>>>>> >>>
>>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
>>> >>>>>>>>>>>>>>>> >>>
>>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool
>>> based on
>>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
>>> >>>>>>>>>>>>>>>> >>>
>>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
>>> LUN, so
>>> >>>>>>>>>>>>>>>> >>> there would only
>>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>>> (libvirt)
>>> >>>>>>>>>>>>>>>> >>> storage pool.
>>> >>>>>>>>>>>>>>>> >>>
>>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
>>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt
>>> does
>>> >>>>>>>>>>>>>>>> >>> not support
>>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>> >>>>>>>>>>>>>>>> >>>
>>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
>>> libvirt
>>> >>>>>>>>>>>>>>>> >>> supports
>>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned,
>>> since
>>> >>>>>>>>>>>>>>>> >>> each one of its
>>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>>> targets/LUNs).
>>> >>>>>>>>>>>>>>>> >>>
>>> >>>>>>>>>>>>>>>> >>>
>>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>         }
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>         @Override
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>         }
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>     }
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
>>> being
>>> >>>>>>>>>>>>>>>> >>>> used, but I'm
>>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
>>> >>>>>>>>>>>>>>>> >>>> selects the
>>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is
>>> that
>>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
>>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>> Thanks!
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
>>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>>> >>>>>>>>>>>>>>>> >>>> wrote:
>>> >>>>>>>>>>>>>>>> >>>>>
>>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
>>> >>>>>>>>>>>>>>>> >>>>>
>>> >>>>>>>>>>>>>>>> >>>>>
>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>> >>>>>>>>>>>>>>>> >>>>>
>>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
>>> server, and
>>> >>>>>>>>>>>>>>>> >>>>> cannot be
>>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe
>>> your
>>> >>>>>>>>>>>>>>>> >>>>> plugin will take
>>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in
>>> and
>>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
>>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in
>>> the Xen
>>> >>>>>>>>>>>>>>>> >>>>> stuff).
>>> >>>>>>>>>>>>>>>> >>>>>
>>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides
>>> a 1:1
>>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
>>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a
>>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
>>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more
>>> about
>>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
>>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own
>>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
>>> >>>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.
>>>  We
>>> >>>>>>>>>>>>>>>> >>>>> can cross that
>>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>> >>>>>>>>>>>>>>>> >>>>>
>>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
>>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
>>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/ Normally,
>>> >>>>>>>>>>>>>>>> >>>>> you'll see a
>>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to that
>>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how
>>> that
>>> >>>>>>>>>>>>>>>> >>>>> is done for
>>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java
>>> code
>>> >>>>>>>>>>>>>>>> >>>>> to see if you
>>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
>>> storage
>>> >>>>>>>>>>>>>>>> >>>>> pools before you
>>> >>>>>>>>>>>>>>>> >>>>> get started.
>>> >>>>>>>>>>>>>>>> >>>>>
>>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more,
>>> but
>>> >>>>>>>>>>>>>>>> >>>>> > you figure it
>>> >>>>>>>>>>>>>>>> >>>>> > supports
>>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets,
>>> >>>>>>>>>>>>>>>> >>>>> > right?
>>> >>>>>>>>>>>>>>>> >>>>> >
>>> >>>>>>>>>>>>>>>> >>>>> >
>>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
>>> classes
>>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
>>> >>>>>>>>>>>>>>>> >>>>> >> last
>>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >>>>>>>>>>>>>>>> >>>>> >>
>>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus
>>> Sorensen
>>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
>>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages
>>> for
>>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
>>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator
>>> login.
>>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
>>> >>>>>>>>>>>>>>>> >>>>> >>> sent
>>> >>>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java
>>> and
>>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
>>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>> >>>>>>>>>>>>>>>> >>>>> >>>
>>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
>>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
>>> framework
>>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>> >>>>>>>>>>>>>>>> >>>>> >>>> times
>>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete
>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
>>> mapping
>>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
>>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the
>>> admin
>>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
>>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would
>>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
>>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
>>> needed to
>>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
>>> work on
>>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>> >>>>>>>>>>>>>>>> >>>>> >>>> still
>>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I
>>> will need
>>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
>>> expect
>>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for
>>> this to
>>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
>>> >>>>>>>>>>>>>>>> >>>>> >>>>
>>> >>>>>>>>>>>>>>>> >>>>> >>>> --
>>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>> >>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>> >>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>> >>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hey Marcus,

When I implemented support in the XenServer and VMware plug-ins for
"managed" storage, I started at the execute(AttachVolumeCommand) methods in
both plug-ins.

The code there was changed to check the AttachVolumeCommand instance for a
"managed" property.

If managed was false, the normal attach/detach logic would just run and the
volume would be attached or detached.

If managed was true, new 4.2 logic would run to create (let's talk
XenServer here) a new SR and a new VDI inside of that SR (or to reattach an
existing VDI inside an existing SR, if this wasn't the first time the
volume was attached). If managed was true and we were detaching the volume,
the SR would be detached from the XenServer hosts.

I am currently walking through the execute(AttachVolumeCommand) in
LibvirtComputingResource.java.

I see how the XML is constructed to describe whether a disk should be
attached or detached. I also see how we call in to get a StorageAdaptor
(and how I will likely need to write a new one of these).
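
For what it's worth, the login step in that new adaptor might look roughly
like this (a sketch only; the method name is made up, and I'm assuming the
agent's Script helper plus standard Open iSCSI iscsiadm usage, so the real
StorageAdaptor plumbing may end up looking different):

    // Hypothetical helper for the custom StorageAdaptor: discover the target
    // and log in so the LUN shows up as a block device on the KVM host.
    private void connectIscsiTarget(String host, int port, String iqn) {
        String portal = host + ":" + port;

        // discover the target on the SAN
        Script.runSimpleBashScript("iscsiadm -m discovery -t sendtargets -p " + portal);

        // log in; afterwards the LUN appears under /dev/disk/by-path/, which
        // is a more stable path to hand to the disk definition than /dev/sdX
        Script.runSimpleBashScript("iscsiadm -m node -T " + iqn + " -p " + portal + " --login");
    }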

So, talking in XenServer terminology again, I was wondering if you think
the approach we took in 4.2 with creating and deleting SRs in the
execute(AttachVolumeCommand) method would work here or if there is some
other way I should be looking at this for KVM?

As it is right now for KVM, storage has to be set up ahead of time.
Assuming this is the case, there probably isn't currently a place I can
easily inject my logic to discover and log in to iSCSI targets. This is why
we did it as needed in the execute(AttachVolumeCommand) for XenServer and
VMware, but I wanted to see if you have an alternative way that might be
better for KVM.

One possible way to do this would be to modify VolumeManagerImpl (or
whatever its equivalent is in 4.3) before it issues an attach-volume
command to KVM to check to see if the volume is to be attached to managed
storage. If it is, then (before calling the attach-volume command in KVM)
call the create-storage-pool command in KVM (or whatever it might be
called).
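
In rough pseudo-Java, that flow might look like this (every name here is a
placeholder for illustration; I still need to find the real hooks in
VolumeManagerImpl or its 4.3 equivalent, so please read this as intent, not
actual APIs):

    // Hypothetical management-server flow; volumeIsOnManagedStorage() and the
    // two build*Command() helpers are made-up names, not existing methods.
    if (volumeIsOnManagedStorage(volume)) {
        // have the KVM agent discover/log in to the iSCSI target first
        Command createPool = buildCreateStoragePoolCommand(volume, storagePool);
        _agentMgr.send(hostId, createPool);
    }

    // then the normal attach-volume command goes out as it does today
    Command attach = buildAttachVolumeCommand(volume, vm);
    _agentMgr.send(hostId, attach);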

Just wanted to get some of your thoughts on this.

Thanks!


On Sat, Sep 14, 2013 at 12:07 AM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> Yeah, I remember that StorageProcessor stuff being put in the codebase and
> having to merge my code into it in 4.2.
>
> Thanks for all the details, Marcus! :)
>
> I can start digging into what you were talking about now.
>
>
> On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Looks like things might be slightly different now in 4.2, with
>> KVMStorageProcessor.java in the mix. This looks more or less like some
>> of the commands were ripped out verbatim from LibvirtComputingResource
>> and placed here, so in general what I've said is probably still true,
>> just that the location of things like AttachVolumeCommand might be
>> different, in this file rather than LibvirtComputingResource.java.
>>
>> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <sh...@gmail.com>
>> wrote:
>> > Ok, KVM will be close to that, of course, because only the hypervisor
>> > classes differ, the rest is all mgmt server. Creating a volume is just
>> > a db entry until it's deployed for the first time. AttachVolumeCommand
>> > on the agent side (LibvirtStorageAdaptor.java is analogous to
>> > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
>> > StorageAdaptor) to log in the host to the target and then you have a
>> > block device.  Maybe libvirt will do that for you, but my quick read
>> > made it sound like the iscsi libvirt pool type is actually a pool, not
>> > a lun or volume, so you'll need to figure out if that works or if
>> > you'll have to use iscsiadm commands.
>> >
>> > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
>> > doesn't really manage your pool the way you want), you're going to
>> > have to create a version of KVMStoragePool class and a StorageAdaptor
>> > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
>> > implementing all of the methods, then in KVMStorageManager.java
>> > there's a "_storageMapper" map. This is used to select the correct
>> > adaptor, you can see in this file that every call first pulls the
>> > correct adaptor out of this map via getStorageAdaptor. So you can see
>> > a comment in this file that says "add other storage adaptors here",
>> > where it puts to this map, this is where you'd register your adaptor.
>> >
>> > So, referencing StorageAdaptor.java, createStoragePool accepts all of
>> > the pool data (host, port, name, path) which would be used to log the
>> > host into the initiator. I *believe* the method getPhysicalDisk will
>> > need to do the work of attaching the lun.  AttachVolumeCommand calls
>> > this and then creates the XML diskdef and attaches it to the VM. Now,
>> > one thing you need to know is that createStoragePool is called often,
>> > sometimes just to make sure the pool is there. You may want to create
>> > a map in your adaptor class and keep track of pools that have been
>> > created, LibvirtStorageAdaptor doesn't have to do this because it asks
>> > libvirt about which storage pools exist. There are also calls to
>> > refresh the pool stats, and all of the other calls can be seen in the
>> > StorageAdaptor as well. There's a createPhysical disk, clone, etc, but
>> > it's probably a hold-over from 4.1, as I have the vague idea that
>> > volumes are created on the mgmt server via the plugin now, so whatever
>> > doesn't apply can just be stubbed out (or optionally
>> > extended/reimplemented here, if you don't mind the hosts talking to
>> > the san api).
>> >
>> > There is a difference between attaching new volumes and launching a VM
>> > with existing volumes.  In the latter case, the VM definition that was
>> > passed to the KVM agent includes the disks, (StartCommand).
>> >
>> > I'd be interested in how your pool is defined for Xen, I imagine it
>> > would need to be kept the same. Is it just a definition to the SAN
>> > (ip address or some such, port number) and perhaps a volume pool name?
>> >
>> >> If there is a way for me to update the ACL list on the SAN to have
>> only a
>> >> single KVM host have access to the volume, that would be ideal.
>> >
>> > That depends on your SAN API.  I was under the impression that the
>> > storage plugin framework allowed for acls, or for you to do whatever
>> > you want for create/attach/delete/snapshot, etc. You'd just call your
>> > SAN API with the host info for the ACLs prior to when the disk is
>> > attached (or the VM is started).  I'd have to look more at the
>> > framework to know the details, in 4.1 I would do this in
>> > getPhysicalDisk just prior to connecting up the LUN.
>> >
>> >
>> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> > <mi...@solidfire.com> wrote:
>> >> OK, yeah, the ACL part will be interesting. That is a bit different
>> from how
>> >> it works with XenServer and VMware.
>> >>
>> >> Just to give you an idea how it works in 4.2 with XenServer:
>> >>
>> >> * The user creates a CS volume (this is just recorded in the
>> cloud.volumes
>> >> table).
>> >>
>> >> * The user attaches the volume as a disk to a VM for the first time
>> (if the
>> >> storage allocator picks the SolidFire plug-in, the storage framework
>> invokes
>> >> a method on the plug-in that creates a volume on the SAN...info like
>> the IQN
>> >> of the SAN volume is recorded in the DB).
>> >>
>> >> * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
>> >> determines based on a flag passed in that the storage in question is
>> >> "CloudStack-managed" storage (as opposed to "traditional" preallocated
>> >> storage). This tells it to discover the iSCSI target. Once discovered
>> it
>> >> determines if the iSCSI target already contains a storage repository
>> (it
>> >> would if this were a re-attach situation). If it does contain an SR
>> already,
>> >> then there should already be one VDI, as well. If there is no SR, an
>> SR is
>> >> created and a single VDI is created within it (that takes up about as
>> much
>> >> space as was requested for the CloudStack volume).
>> >>
>> >> * The normal attach-volume logic continues (it depends on the
>> existence of
>> >> an SR and a VDI).
>> >>
>> >> The VMware case is essentially the same (mainly just substitute
>> datastore
>> >> for SR and VMDK for VDI).
>> >>
>> >> In both cases, all hosts in the cluster have discovered the iSCSI
>> target,
>> >> but only the host that is currently running the VM that is using the
>> VDI (or
>> >> VMDK) is actually using the disk.
>> >>
>> >> Live Migration should be OK because the hypervisors communicate with
>> >> whatever metadata they have on the SR (or datastore).
>> >>
>> >> I see what you're saying with KVM, though.
>> >>
>> >> In that case, the hosts are clustered only in CloudStack's eyes. CS
>> controls
>> >> Live Migration. You don't really need a clustered filesystem on the
>> LUN. The
>> >> LUN could be handed over raw to the VM using it.
>> >>
>> >> If there is a way for me to update the ACL list on the SAN to have
>> only a
>> >> single KVM host have access to the volume, that would be ideal.
>> >>
>> >> Also, I agree I'll need to use iscsiadm to discover and log in to the
>> iSCSI
>> >> target. I'll also need to take the resultant new device and pass it
>> into the
>> >> VM.
>> >>
>> >> Does this sound reasonable? Please call me out on anything I seem
>> incorrect
>> >> about. :)
>> >>
>> >> Thanks for all the thought on this, Marcus!
>> >>
>> >>
>> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <sh...@gmail.com>
>> >> wrote:
>> >>>
>> >>> Perfect. You'll have a domain def ( the VM), a disk def, and the
>> attach
>> >>> the disk def to the vm. You may need to do your own StorageAdaptor
>> and run
>> >>> iscsiadm commands to accomplish that, depending on how the libvirt
>> iscsi
>> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
>> works on
>> >>> xen at the moment, nor is it ideal.
>> >>>
>> >>> Your plugin will handle acls as far as which host can see which luns
>> as
>> >>> well, I remember discussing that months ago, so that a disk won't be
>> >>> connected until the hypervisor has exclusive access, so it will be
>> safe and
>> >>> fence the disk from rogue nodes that cloudstack loses connectivity
>> with. It
>> >>> should revoke access to everything but the target host... Except for
>> during
>> >>> migration but we can discuss that later, there's a migration prep
>> process
>> >>> where the new host can be added to the acls, and the old host can be
>> removed
>> >>> post migration.
>> >>>
>> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
>> mike.tutkowski@solidfire.com>
>> >>> wrote:
>> >>>>
>> >>>> Yeah, that would be ideal.
>> >>>>
>> >>>> So, I would still need to discover the iSCSI target, log in to it,
>> then
>> >>>> figure out what /dev/sdX was created as a result (and leave it as is
>> - do
>> >>>> not format it with any file system...clustered or not). I would pass
>> that
>> >>>> device into the VM.
>> >>>>
>> >>>> Kind of accurate?
>> >>>>
>> >>>>
>> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <
>> shadowsor@gmail.com>
>> >>>> wrote:
>> >>>>>
>> >>>>> Look in LibvirtVMDef.java (I think) for the disk definitions. There
>> are
>> >>>>> ones that work for block devices rather than files. You can piggy
>> back off
>> >>>>> of the existing disk definitions and attach it to the vm as a block
>> device.
>> >>>>> The definition is an XML string per libvirt XML format. You may
>> want to use
>> >>>>> an alternate path to the disk rather than just /dev/sdx like I
>> mentioned,
>> >>>>> there are by-id paths to the block devices, as well as other ones
>> that will
>> >>>>> be consistent and easier for management, not sure how familiar you
>> are with
>> >>>>> device naming on Linux.
>> >>>>>
>> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com>
>> wrote:
>> >>>>>>
>> >>>>>> No, as that would rely on virtualized network/iscsi initiator
>> inside
>> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
>> hypervisor) as
>> >>>>>> a disk to the VM, rather than attaching some image file that
>> resides on a
>> >>>>>> filesystem, mounted on the host, living on a target.
>> >>>>>>
>> >>>>>> Actually, if you plan on the storage supporting live migration I
>> think
>> >>>>>> this is the only way. You can't put a filesystem on it and mount
>> it in two
>> >>>>>> places to facilitate migration unless its a clustered filesystem,
>> in which
>> >>>>>> case you're back to shared mount point.
>> >>>>>>
>> >>>>>> As far as I'm aware, the xenserver SR style is basically LVM with
>> a xen
>> >>>>>> specific cluster management, a custom CLVM. They don't use a
>> filesystem
>> >>>>>> either.
>> >>>>>>
>> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>> >>>>>> <mi...@solidfire.com> wrote:
>> >>>>>>>
>> >>>>>>> When you say, "wire up the lun directly to the vm," do you mean
>> >>>>>>> circumventing the hypervisor? I didn't think we could do that in
>> CS.
>> >>>>>>> OpenStack, on the other hand, always circumvents the hypervisor,
>> as far as I
>> >>>>>>> know.
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
>> shadowsor@gmail.com>
>> >>>>>>> wrote:
>> >>>>>>>>
>> >>>>>>>> Better to wire up the lun directly to the vm unless there is a
>> good
>> >>>>>>>> reason not to.
>> >>>>>>>>
>> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com>
>> >>>>>>>> wrote:
>> >>>>>>>>>
>> >>>>>>>>> You could do that, but as mentioned I think its a mistake to go
>> to
>> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns and
>> then putting
>> >>>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
>> even RAW disk
>> >>>>>>>>> image on that filesystem. You'll lose a lot of iops along the
>> way, and have
>> >>>>>>>>> more overhead with the filesystem and its journaling, etc.
>> >>>>>>>>>
>> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>> >>>>>>>>> <mi...@solidfire.com> wrote:
>> >>>>>>>>>>
>> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>> >>>>>>>>>>
>> >>>>>>>>>> So, the way people use our SAN with KVM and CS today is by
>> >>>>>>>>>> selecting SharedMountPoint and specifying the location of the
>> share.
>> >>>>>>>>>>
>> >>>>>>>>>> They can set up their share using Open iSCSI by discovering
>> their
>> >>>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on
>> their file
>> >>>>>>>>>> system.
>> >>>>>>>>>>
>> >>>>>>>>>> Would it make sense for me to just do that discovery, logging
>> in,
>> >>>>>>>>>> and mounting behind the scenes for them and letting the
>> current code manage
>> >>>>>>>>>> the rest as it currently does?
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>> >>>>>>>>>> <sh...@gmail.com> wrote:
>> >>>>>>>>>>>
>> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch
>> up
>> >>>>>>>>>>> on the work done in KVM, but this is basically just disk
>> snapshots + memory
>> >>>>>>>>>>> dump. I still think disk snapshots would preferably be
>> handled by the SAN,
>> >>>>>>>>>>> and then memory dumps can go to secondary storage or
>> something else. This is
>> >>>>>>>>>>> relatively new ground with CS and KVM, so we will want to see
>> how others are
>> >>>>>>>>>>> planning theirs.
>> >>>>>>>>>>>
>> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
>> shadowsor@gmail.com>
>> >>>>>>>>>>> wrote:
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi style
>> on an
>> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format.
>> Otherwise you're
>> >>>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
>> QCOW2 disk image,
>> >>>>>>>>>>>> and that seems unnecessary and a performance killer.
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM,
>> and
>> >>>>>>>>>>>> handling snapshots on the San side via the storage plugin is
>> best. My
>> >>>>>>>>>>>> impression from the storage plugin refactor was that there
>> was a snapshot
>> >>>>>>>>>>>> service that would allow the San to handle snapshots.
>> >>>>>>>>>>>>
>> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
>> shadowsor@gmail.com>
>> >>>>>>>>>>>> wrote:
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back
>> end, if
>> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call
>> your plugin for
>> >>>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far
>> as space, that
>> >>>>>>>>>>>>> would depend on how your SAN handles it. With ours, we
>> carve out luns from a
>> >>>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
>> independent of the
>> >>>>>>>>>>>>> LUN size the host sees.
>> >>>>>>>>>>>>>
>> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Hey Marcus,
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't
>> work
>> >>>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI
>> for
>> >>>>>>>>>>>>>> the snapshot is placed on the same storage repository as
>> the volume is on.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Same idea for VMware, I believe.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> So, what would happen in my case (let's say for XenServer
>> and
>> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor
>> snapshots in 4.2) is I'd
>> >>>>>>>>>>>>>> make an iSCSI target that is larger than what the user
>> requested for the
>> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
>> provisions volumes,
>> >>>>>>>>>>>>>> so the space is not actually used unless it needs to be).
>> The CloudStack
>> >>>>>>>>>>>>>> volume would be the only "object" on the SAN volume until
>> a hypervisor
>> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the
>> SAN volume.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no creation of
>> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if
>> there were support
>> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN per iSCSI
>> target), then I
>> >>>>>>>>>>>>>> don't see how using this model will work.
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current way this
>> works
>> >>>>>>>>>>>>>> with DIR?
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> What do you think?
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> Thanks
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
>> today.
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I might as well
>> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it
>> just
>> >>>>>>>>>>>>>>>> acts like a
>> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
>> end-user
>> >>>>>>>>>>>>>>>> is
>> >>>>>>>>>>>>>>>> responsible for mounting a file system that all KVM
>> hosts can
>> >>>>>>>>>>>>>>>> access,
>> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
>> storage.
>> >>>>>>>>>>>>>>>> It could
>> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
>> >>>>>>>>>>>>>>>> cloudstack just
>> >>>>>>>>>>>>>>>> knows that the provided directory path has VM images.
>> >>>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same
>> >>>>>>>>>>>>>>>> > time.
>> >>>>>>>>>>>>>>>> > Multiples, in fact.
>> >>>>>>>>>>>>>>>> >
>> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>> >>>>>>>>>>>>>>>> >>
>> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
>> >>>>>>>>>>>>>>>> >> -----------------------------------------
>> >>>>>>>>>>>>>>>> >> default              active     yes
>> >>>>>>>>>>>>>>>> >> iSCSI                active     no
>> >>>>>>>>>>>>>>>> >>
>> >>>>>>>>>>>>>>>> >>
>> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based
>> on
>> >>>>>>>>>>>>>>>> >>> an iSCSI target.
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one
>> LUN, so
>> >>>>>>>>>>>>>>>> >>> there would only
>> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the
>> (libvirt)
>> >>>>>>>>>>>>>>>> >>> storage pool.
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
>> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt
>> does
>> >>>>>>>>>>>>>>>> >>> not support
>> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
>> libvirt
>> >>>>>>>>>>>>>>>> >>> supports
>> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since
>> >>>>>>>>>>>>>>>> >>> each one of its
>> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
>> targets/LUNs).
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>         String _poolType;
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>         }
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>         @Override
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>         public String toString() {
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>             return _poolType;
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>         }
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>     }
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
>> being
>> >>>>>>>>>>>>>>>> >>>> used, but I'm
>> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
>> >>>>>>>>>>>>>>>> >>>> selects the
>> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is
>> that
>> >>>>>>>>>>>>>>>> >>>> the "netfs" option
>> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>> Thanks!
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
>> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>> >>>>>>>>>>>>>>>> >>>> wrote:
>> >>>>>>>>>>>>>>>> >>>>>
>> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
>> >>>>>>>>>>>>>>>> >>>>>
>> >>>>>>>>>>>>>>>> >>>>>
>> http://libvirt.org/storage.html#StorageBackendISCSI
>> >>>>>>>>>>>>>>>> >>>>>
>> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI
>> server, and
>> >>>>>>>>>>>>>>>> >>>>> cannot be
>> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe
>> your
>> >>>>>>>>>>>>>>>> >>>>> plugin will take
>> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in
>> and
>> >>>>>>>>>>>>>>>> >>>>> hooking it up to
>> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in
>> the Xen
>> >>>>>>>>>>>>>>>> >>>>> stuff).
>> >>>>>>>>>>>>>>>> >>>>>
>> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a
>> 1:1
>> >>>>>>>>>>>>>>>> >>>>> mapping, or if
>> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a
>> >>>>>>>>>>>>>>>> >>>>> pool. You may need
>> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more about
>> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
>> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own
>> >>>>>>>>>>>>>>>> >>>>> storage adaptor
>> >>>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.
>>  We
>> >>>>>>>>>>>>>>>> >>>>> can cross that
>> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>> >>>>>>>>>>>>>>>> >>>>>
>> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
>> >>>>>>>>>>>>>>>> >>>>> bindings doc.
>> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/ Normally,
>> >>>>>>>>>>>>>>>> >>>>> you'll see a
>> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to that
>> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how
>> that
>> >>>>>>>>>>>>>>>> >>>>> is done for
>> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java
>> code
>> >>>>>>>>>>>>>>>> >>>>> to see if you
>> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
>> storage
>> >>>>>>>>>>>>>>>> >>>>> pools before you
>> >>>>>>>>>>>>>>>> >>>>> get started.
>> >>>>>>>>>>>>>>>> >>>>>
>> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more,
>> but
>> >>>>>>>>>>>>>>>> >>>>> > you figure it
>> >>>>>>>>>>>>>>>> >>>>> > supports
>> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets,
>> >>>>>>>>>>>>>>>> >>>>> > right?
>> >>>>>>>>>>>>>>>> >>>>> >
>> >>>>>>>>>>>>>>>> >>>>> >
>> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
>> classes
>> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
>> >>>>>>>>>>>>>>>> >>>>> >> last
>> >>>>>>>>>>>>>>>> >>>>> >> week or so.
>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>> >>>>>>>>>>>>>>>> >>>>> >> wrote:
>> >>>>>>>>>>>>>>>> >>>>> >>>
>> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
>> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages
>> for
>> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
>> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator
>> login.
>> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
>> >>>>>>>>>>>>>>>> >>>>> >>> sent
>> >>>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
>> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>> >>>>>>>>>>>>>>>> >>>>> >>> storage type
>> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>> >>>>>>>>>>>>>>>> >>>>> >>>
>> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
>> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
>> framework
>> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>> >>>>>>>>>>>>>>>> >>>>> >>>> times
>> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete
>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
>> mapping
>> >>>>>>>>>>>>>>>> >>>>> >>>> between a
>> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the
>> admin
>> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
>> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would
>> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>> >>>>>>>>>>>>>>>> >>>>> >>>> root and
>> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I
>> needed to
>> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
>> work on
>> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>> >>>>>>>>>>>>>>>> >>>>> >>>> still
>> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will
>> need
>> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>> >>>>>>>>>>>>>>>> >>>>> >>>> the
>> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
>> expect
>> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this
>> to
>> >>>>>>>>>>>>>>>> >>>>> >>>> work?
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
>> >>>>>>>>>>>>>>>> >>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>> >>>> --
>> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> >>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>> >>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>> >>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>>>>>>>>>>>>>> >>>>> >>
>> >>>>>>>>>>>>>>>> >>>>> >> --
>> >>>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>> >>>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>> >>>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>> >>>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>> >>>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>> >>>>>>>>>>>>>>>> >>>>> >
>> >>>>>>>>>>>>>>>> >>>>> >
>> >>>>>>>>>>>>>>>> >>>>> >
>> >>>>>>>>>>>>>>>> >>>>> >
>> >>>>>>>>>>>>>>>> >>>>> > --
>> >>>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>> >>>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>> >>>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>> >>>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>> >>>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>>
>> >>>>>>>>>>>>>>>> >>>> --
>> >>>>>>>>>>>>>>>> >>>> Mike Tutkowski
>> >>>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> >>>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>> >>>>>>>>>>>>>>>> >>>> o: 303.746.7302
>> >>>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>>
>> >>>>>>>>>>>>>>>> >>> --
>> >>>>>>>>>>>>>>>> >>> Mike Tutkowski
>> >>>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>> >>>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>> >>>>>>>>>>>>>>>> >>> o: 303.746.7302
>> >>>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>> >>>>>>>>>>>>>>>> >>
>> >>>>>>>>>>>>>>>> >>
>> >>>>>>>>>>>>>>>> >>
>> >>>>>>>>>>>>>>>> >>
>> >>>>>>>>>>>>>>>> >> --
>> >>>>>>>>>>>>>>>> >> Mike Tutkowski
>> >>>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>> >>>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>> >>>>>>>>>>>>>>>> >> o: 303.746.7302
>> >>>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>> --
>> >>>>>>>>>>>>>>> Mike Tutkowski
>> >>>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> >>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> >>>>>>>>>>>>>>> o: 303.746.7302
>> >>>>>>>>>>>>>>> Advancing the way the world uses the cloud™
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>>
>> >>>>>>>>>>>>>> --
>> >>>>>>>>>>>>>> Mike Tutkowski
>> >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>> >>>>>>>>>>>>>> o: 303.746.7302
>> >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>>
>> >>>>>>>>>> --
>> >>>>>>>>>> Mike Tutkowski
>> >>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> >>>>>>>>>> e: mike.tutkowski@solidfire.com
>> >>>>>>>>>> o: 303.746.7302
>> >>>>>>>>>> Advancing the way the world uses the cloud™
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> --
>> >>>>>>> Mike Tutkowski
>> >>>>>>> Senior CloudStack Developer, SolidFire Inc.
>> >>>>>>> e: mike.tutkowski@solidfire.com
>> >>>>>>> o: 303.746.7302
>> >>>>>>> Advancing the way the world uses the cloud™
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> --
>> >>>> Mike Tutkowski
>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> >>>> e: mike.tutkowski@solidfire.com
>> >>>> o: 303.746.7302
>> >>>> Advancing the way the world uses the cloud™
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Mike Tutkowski
>> >> Senior CloudStack Developer, SolidFire Inc.
>> >> e: mike.tutkowski@solidfire.com
>> >> o: 303.746.7302
>> >> Advancing the way the world uses the cloud™
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Yeah, I remember that StorageProcessor stuff being put in the codebase and
having to merge my code into it in 4.2.

Thanks for all the details, Marcus! :)

I can start digging into what you were talking about now.


On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen <sh...@gmail.com>wrote:

> Looks like things might be slightly different now in 4.2, with
> KVMStorageProcessor.java in the mix. This looks more or less like some
> of the commands were ripped out verbatim from LibvirtComputingResource
> and placed here, so in general what I've said is probably still true,
> just that the location of things like AttachVolumeCommand might be
> different, in this file rather than LibvirtComputingResource.java.
>
> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <sh...@gmail.com>
> wrote:
> > Ok, KVM will be close to that, of course, because only the hypervisor
> > classes differ, the rest is all mgmt server. Creating a volume is just
> > a db entry until it's deployed for the first time. AttachVolumeCommand
> > on the agent side (LibvirtStorageAdaptor.java is analogous to
> > CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> > StorageAdaptor) to log in the host to the target and then you have a
> > block device.  Maybe libvirt will do that for you, but my quick read
> > made it sound like the iscsi libvirt pool type is actually a pool, not
> > a lun or volume, so you'll need to figure out if that works or if
> > you'll have to use iscsiadm commands.
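
Something like this minimal sketch is what that iscsiadm piece could look
like on the agent side (the class name and the by-path handling are made up
for illustration; it just assumes Open iSCSI is installed on the host):

import java.util.Arrays;

// Minimal sketch (not CloudStack code): log the KVM host into an iSCSI
// target with iscsiadm and return the resulting block device path.
public class IscsiLoginSketch {
    public String loginToTarget(String portal, String iqn) throws Exception {
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");
        // After login the LUN shows up as a block device; the by-path link
        // below assumes LUN 0 and a portal of the form "ip:port".
        return "/dev/disk/by-path/ip-" + portal + "-iscsi-" + iqn + "-lun-0";
    }

    private void run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new RuntimeException("command failed: " + Arrays.toString(cmd));
        }
    }
}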
> >
> > If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> > doesn't really manage your pool the way you want), you're going to
> > have to create a version of KVMStoragePool class and a StorageAdaptor
> > class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> > implementing all of the methods, then in KVMStorageManager.java
> > there's a "_storageMapper" map. This is used to select the correct
> > adaptor, you can see in this file that every call first pulls the
> > correct adaptor out of this map via getStorageAdaptor. So you can see
> > a comment in this file that says "add other storage adaptors here",
> > where it puts to this map, this is where you'd register your adaptor.
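
So the registration step would be roughly along these lines (the map key and
types here are simplified guesses, not the real KVMStorageManager code; the
adaptor itself is sketched a bit further down):

import java.util.HashMap;
import java.util.Map;

// Illustrative only: roughly what the "add other storage adaptors here"
// spot could look like; key names and types are not the real ones.
public class StorageMapperSketch {
    private final Map<String, Object> _storageMapper = new HashMap<String, Object>();

    public StorageMapperSketch(Object libvirtAdaptor, Object solidFireAdaptor) {
        _storageMapper.put("libvirt", libvirtAdaptor);
        // add other storage adaptors here:
        _storageMapper.put("solidfire", solidFireAdaptor);
    }

    // Every call would first pull the right adaptor out of the map.
    public Object getStorageAdaptor(String poolType) {
        Object adaptor = _storageMapper.get(poolType);
        return adaptor != null ? adaptor : _storageMapper.get("libvirt");
    }
}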
> >
> > So, referencing StorageAdaptor.java, createStoragePool accepts all of
> > the pool data (host, port, name, path) which would be used to log the
> > host into the initiator. I *believe* the method getPhysicalDisk will
> > need to do the work of attaching the lun.  AttachVolumeCommand calls
> > this and then creates the XML diskdef and attaches it to the VM. Now,
> > one thing you need to know is that createStoragePool is called often,
> > sometimes just to make sure the pool is there. You may want to create
> > a map in your adaptor class and keep track of pools that have been
> > created, LibvirtStorageAdaptor doesn't have to do this because it asks
> > libvirt about which storage pools exist. There are also calls to
> > refresh the pool stats, and all of the other calls can be seen in the
> > StorageAdaptor as well. There's a createPhysicalDisk, clone, etc., but
> > it's probably a hold-over from 4.1, as I have the vague idea that
> > volumes are created on the mgmt server via the plugin now, so whatever
> > doesn't apply can just be stubbed out (or optionally
> > extended/reimplemented here, if you don't mind the hosts talking to
> > the san api).
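
A bare-bones sketch of that adaptor, following the description above:
createStoragePool just records the pool and is safe to call repeatedly, while
getPhysicalDisk does the actual LUN attach. The class names and signatures
are simplified stand-ins, not the real StorageAdaptor interface:

import java.util.HashMap;
import java.util.Map;

// Illustrative skeleton only; the real class would implement CloudStack's
// StorageAdaptor interface and use its KVMStoragePool/KVMPhysicalDisk types.
public class SolidFireAdaptorSketch {

    // Simplified stand-in for the pool object the agent passes around.
    public static class PoolInfo {
        final String uuid;
        final String host;   // SAN management/portal address
        final int port;
        PoolInfo(String uuid, String host, int port) {
            this.uuid = uuid;
            this.host = host;
            this.port = port;
        }
    }

    // createStoragePool is called often (sometimes just to verify the pool
    // exists), so keep track of pools that were already registered.
    private final Map<String, PoolInfo> pools = new HashMap<String, PoolInfo>();

    public PoolInfo createStoragePool(String uuid, String host, int port) {
        PoolInfo pool = pools.get(uuid);
        if (pool == null) {
            pool = new PoolInfo(uuid, host, port);
            pools.put(uuid, pool);
        }
        return pool;
    }

    // getPhysicalDisk does the real work of attaching the LUN: iscsiadm
    // discovery/login (see the earlier sketch), returning the block device
    // path that then goes into the disk XML definition for the VM.
    public String getPhysicalDisk(PoolInfo pool, String volumeIqn) throws Exception {
        String portal = pool.host + ":" + pool.port;
        return new IscsiLoginSketch().loginToTarget(portal, volumeIqn);
    }

    // refreshPool, createPhysicalDisk, clone/copy, delete, etc. can be
    // stubbed out if the mgmt-server plugin handles them via the SAN API.
}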
> >
> > There is a difference between attaching new volumes and launching a VM
> > with existing volumes.  In the latter case, the VM definition that was
> > passed to the KVM agent includes the disks, (StartCommand).
> >
> > I'd be interested in how your pool is defined for Xen, I imagine it
> > would need to be kept the same. Is it just a definition to the SAN
> > (ip address or some such, port number) and perhaps a volume pool name?
> >
> >> If there is a way for me to update the ACL list on the SAN to have only
> a
> >> single KVM host have access to the volume, that would be ideal.
> >
> > That depends on your SAN API.  I was under the impression that the
> > storage plugin framework allowed for acls, or for you to do whatever
> > you want for create/attach/delete/snapshot, etc. You'd just call your
> > SAN API with the host info for the ACLs prior to when the disk is
> > attached (or the VM is started).  I'd have to look more at the
> > framework to know the details, in 4.1 I would do this in
> > getPhysicalDisk just prior to connecting up the LUN.
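
So the ACL step could be folded into the same attach path, roughly like this
(the SAN API call is vendor-specific and only hinted at in comments; this
would live in the adaptor sketched above):

// Illustrative variant of getPhysicalDisk with the ACL handling described
// above; it belongs inside the adaptor class sketched earlier.
public String getPhysicalDisk(SolidFireAdaptorSketch.PoolInfo pool,
                              String volumeIqn, String hostIqn) throws Exception {
    // 1. Call the SAN API (vendor-specific, not shown) to add hostIqn to the
    //    volume's access group so only this KVM host can see the LUN.
    // 2. Then do the initiator login and hand the device to the VM; on detach
    //    or post-migration the same API would revoke access again.
    return new IscsiLoginSketch().loginToTarget(pool.host + ":" + pool.port, volumeIqn);
}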
> >
> >
> > On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> > <mi...@solidfire.com> wrote:
> >> OK, yeah, the ACL part will be interesting. That is a bit different
> from how
> >> it works with XenServer and VMware.
> >>
> >> Just to give you an idea how it works in 4.2 with XenServer:
> >>
> >> * The user creates a CS volume (this is just recorded in the
> cloud.volumes
> >> table).
> >>
> >> * The user attaches the volume as a disk to a VM for the first time (if
> the
> >> storage allocator picks the SolidFire plug-in, the storage framework
> invokes
> >> a method on the plug-in that creates a volume on the SAN...info like
> the IQN
> >> of the SAN volume is recorded in the DB).
> >>
> >> * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
> >> determines based on a flag passed in that the storage in question is
> >> "CloudStack-managed" storage (as opposed to "traditional" preallocated
> >> storage). This tells it to discover the iSCSI target. Once discovered it
> >> determines if the iSCSI target already contains a storage repository (it
> >> would if this were a re-attach situation). If it does contain an SR
> already,
> >> then there should already be one VDI, as well. If there is no SR, an SR
> is
> >> created and a single VDI is created within it (that takes up about as
> much
> >> space as was requested for the CloudStack volume).
> >>
> >> * The normal attach-volume logic continues (it depends on the existence
> of
> >> an SR and a VDI).
> >>
> >> The VMware case is essentially the same (mainly just substitute
> datastore
> >> for SR and VMDK for VDI).
> >>
> >> In both cases, all hosts in the cluster have discovered the iSCSI
> target,
> >> but only the host that is currently running the VM that is using the
> VDI (or
> >> VMDK) is actually using the disk.
> >>
> >> Live Migration should be OK because the hypervisors communicate with
> >> whatever metadata they have on the SR (or datastore).
> >>
> >> I see what you're saying with KVM, though.
> >>
> >> In that case, the hosts are clustered only in CloudStack's eyes. CS
> controls
> >> Live Migration. You don't really need a clustered filesystem on the
> LUN. The
> >> LUN could be handed over raw to the VM using it.
> >>
> >> If there is a way for me to update the ACL list on the SAN to have only
> a
> >> single KVM host have access to the volume, that would be ideal.
> >>
> >> Also, I agree I'll need to use iscsiadm to discover and log in to the
> iSCSI
> >> target. I'll also need to take the resultant new device and pass it
> into the
> >> VM.
> >>
> >> Does this sound reasonable? Please call me out on anything I seem
> incorrect
> >> about. :)
> >>
> >> Thanks for all the thought on this, Marcus!
> >>
> >>
> >> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <sh...@gmail.com>
> >> wrote:
> >>>
> >>> Perfect. You'll have a domain def ( the VM), a disk def, and the attach
> >>> the disk def to the vm. You may need to do your own StorageAdaptor and
> run
> >>> iscsiadm commands to accomplish that, depending on how the libvirt
> iscsi
> >>> works. My impression is that a 1:1:1 pool/lun/volume isn't how it
> works on
> >>> xen at the moment, nor is it ideal.
> >>>
> >>> Your plugin will handle acls as far as which host can see which luns as
> >>> well, I remember discussing that months ago, so that a disk won't be
> >>> connected until the hypervisor has exclusive access, so it will be
> safe and
> >>> fence the disk from rogue nodes that cloudstack loses connectivity
> with. It
> >>> should revoke access to everything but the target host... Except for
> during
> >>> migration but we can discuss that later, there's a migration prep
> process
> >>> where the new host can be added to the acls, and the old host can be
> removed
> >>> post migration.
> >>>
> >>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <
> mike.tutkowski@solidfire.com>
> >>> wrote:
> >>>>
> >>>> Yeah, that would be ideal.
> >>>>
> >>>> So, I would still need to discover the iSCSI target, log in to it,
> then
> >>>> figure out what /dev/sdX was created as a result (and leave it as is
> - do
> >>>> not format it with any file system...clustered or not). I would pass
> that
> >>>> device into the VM.
> >>>>
> >>>> Kind of accurate?
> >>>>
> >>>>
> >>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <shadowsor@gmail.com
> >
> >>>> wrote:
> >>>>>
> >>>>> Look in LibvirtVMDef.java (I think) for the disk definitions. There
> are
> >>>>> ones that work for block devices rather than files. You can piggy
> back off
> >>>>> of the existing disk definitions and attach it to the vm as a block
> device.
> >>>>> The definition is an XML string per libvirt XML format. You may want
> to use
> >>>>> an alternate path to the disk rather than just /dev/sdx like I
> mentioned,
> >>>>> there are by-id paths to the block devices, as well as other ones
> that will
> >>>>> be consistent and easier for management, not sure how familiar you
> are with
> >>>>> device naming on Linux.
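
For reference, the kind of disk definition that ends up being generated for a
block device looks roughly like this (the by-id path and target device are
placeholders):

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-id/scsi-EXAMPLE-LUN'/>
  <target dev='vdb' bus='virtio'/>
</disk>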
> >>>>>
> >>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com>
> wrote:
> >>>>>>
> >>>>>> No, as that would rely on virtualized network/iscsi initiator inside
> >>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on
> hypervisor) as
> >>>>>> a disk to the VM, rather than attaching some image file that
> resides on a
> >>>>>> filesystem, mounted on the host, living on a target.
> >>>>>>
> >>>>>> Actually, if you plan on the storage supporting live migration I
> think
> >>>>>> this is the only way. You can't put a filesystem on it and mount it
> in two
> >>>>>> places to facilitate migration unless its a clustered filesystem,
> in which
> >>>>>> case you're back to shared mount point.
> >>>>>>
> >>>>>> As far as I'm aware, the xenserver SR style is basically LVM with a
> xen
> >>>>>> specific cluster management, a custom CLVM. They don't use a
> filesystem
> >>>>>> either.
> >>>>>>
> >>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
> >>>>>> <mi...@solidfire.com> wrote:
> >>>>>>>
> >>>>>>> When you say, "wire up the lun directly to the vm," do you mean
> >>>>>>> circumventing the hypervisor? I didn't think we could do that in
> CS.
> >>>>>>> OpenStack, on the other hand, always circumvents the hypervisor,
> as far as I
> >>>>>>> know.
> >>>>>>>
> >>>>>>>
> >>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <
> shadowsor@gmail.com>
> >>>>>>> wrote:
> >>>>>>>>
> >>>>>>>> Better to wire up the lun directly to the vm unless there is a
> good
> >>>>>>>> reason not to.
> >>>>>>>>
> >>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com>
> >>>>>>>> wrote:
> >>>>>>>>>
> >>>>>>>>> You could do that, but as mentioned I think its a mistake to go
> to
> >>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns and
> then putting
> >>>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or
> even RAW disk
> >>>>>>>>> image on that filesystem. You'll lose a lot of iops along the
> way, and have
> >>>>>>>>> more overhead with the filesystem and its journaling, etc.
> >>>>>>>>>
> >>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
> >>>>>>>>> <mi...@solidfire.com> wrote:
> >>>>>>>>>>
> >>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
> >>>>>>>>>>
> >>>>>>>>>> So, the way people use our SAN with KVM and CS today is by
> >>>>>>>>>> selecting SharedMountPoint and specifying the location of the
> share.
> >>>>>>>>>>
> >>>>>>>>>> They can set up their share using Open iSCSI by discovering
> their
> >>>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on
> their file
> >>>>>>>>>> system.
> >>>>>>>>>>
> >>>>>>>>>> Would it make sense for me to just do that discovery, logging
> in,
> >>>>>>>>>> and mounting behind the scenes for them and letting the current
> code manage
> >>>>>>>>>> the rest as it currently does?
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
> >>>>>>>>>> <sh...@gmail.com> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch
> up
> >>>>>>>>>>> on the work done in KVM, but this is basically just disk
> snapshots + memory
> >>>>>>>>>>> dump. I still think disk snapshots would preferably be handled
> by the SAN,
> >>>>>>>>>>> and then memory dumps can go to secondary storage or something
> else. This is
> >>>>>>>>>>> relatively new ground with CS and KVM, so we will want to see
> how others are
> >>>>>>>>>>> planning theirs.
> >>>>>>>>>>>
> >>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <
> shadowsor@gmail.com>
> >>>>>>>>>>> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi style on
> an
> >>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format.
> Otherwise you're
> >>>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a
> QCOW2 disk image,
> >>>>>>>>>>>> and that seems unnecessary and a performance killer.
> >>>>>>>>>>>>
> >>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM,
> and
> >>>>>>>>>>>> handling snapshots on the San side via the storage plugin is
> best. My
> >>>>>>>>>>>> impression from the storage plugin refactor was that there
> was a snapshot
> >>>>>>>>>>>> service that would allow the San to handle snapshots.
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <
> shadowsor@gmail.com>
> >>>>>>>>>>>> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back end,
> if
> >>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call
> your plugin for
> >>>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far
> as space, that
> >>>>>>>>>>>>> would depend on how your SAN handles it. With ours, we carve
> out luns from a
> >>>>>>>>>>>>> pool, and the snapshot space comes from the pool and is
> independent of the
> >>>>>>>>>>>>> LUN size the host sees.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
> >>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Hey Marcus,
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't
> work
> >>>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI
> for
> >>>>>>>>>>>>>> the snapshot is placed on the same storage repository as
> the volume is on.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Same idea for VMware, I believe.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> So, what would happen in my case (let's say for XenServer
> and
> >>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots
> in 4.2) is I'd
> >>>>>>>>>>>>>> make an iSCSI target that is larger than what the user
> requested for the
> >>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly
> provisions volumes,
> >>>>>>>>>>>>>> so the space is not actually used unless it needs to be).
> The CloudStack
> >>>>>>>>>>>>>> volume would be the only "object" on the SAN volume until a
> hypervisor
> >>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the
> SAN volume.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> If this is also how KVM behaves and there is no creation of
> >>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if
> there were support
> >>>>>>>>>>>>>> for this, our SAN currently only allows one LUN per iSCSI
> target), then I
> >>>>>>>>>>>>>> don't see how using this model will work.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Perhaps I will have to go enhance the current way this works
> >>>>>>>>>>>>>> with DIR?
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> What do you think?
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Thanks
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
> >>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access
> today.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> I suppose I could go that route, too, but I might as well
> >>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
> >>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it just
> >>>>>>>>>>>>>>>> acts like a
> >>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The
> end-user
> >>>>>>>>>>>>>>>> is
> >>>>>>>>>>>>>>>> responsible for mounting a file system that all KVM hosts
> can
> >>>>>>>>>>>>>>>> access,
> >>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the
> storage.
> >>>>>>>>>>>>>>>> It could
> >>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
> >>>>>>>>>>>>>>>> cloudstack just
> >>>>>>>>>>>>>>>> knows that the provided directory path has VM images.
> >>>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
> >>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
> >>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same
> >>>>>>>>>>>>>>>> > time.
> >>>>>>>>>>>>>>>> > Multiples, in fact.
> >>>>>>>>>>>>>>>> >
> >>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
> >>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
> >>>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
> >>>>>>>>>>>>>>>> >> Name                 State      Autostart
> >>>>>>>>>>>>>>>> >> -----------------------------------------
> >>>>>>>>>>>>>>>> >> default              active     yes
> >>>>>>>>>>>>>>>> >> iSCSI                active     no
> >>>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
> >>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>> I see what you're saying now.
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based
> on
> >>>>>>>>>>>>>>>> >>> an iSCSI target.
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN,
> so
> >>>>>>>>>>>>>>>> >>> there would only
> >>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt)
> >>>>>>>>>>>>>>>> >>> storage pool.
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
> >>>>>>>>>>>>>>>> >>> targets/LUNs on the
> >>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt
> does
> >>>>>>>>>>>>>>>> >>> not support
> >>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if
> libvirt
> >>>>>>>>>>>>>>>> >>> supports
> >>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since
> >>>>>>>>>>>>>>>> >>> each one of its
> >>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI
> targets/LUNs).
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
> >>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>     public enum poolType {
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
> >>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
> >>>>>>>>>>>>>>>> >>>> RBD("rbd");
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>         String _poolType;
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>         }
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>         @Override
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>         public String toString() {
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>             return _poolType;
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>         }
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>     }
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently
> being
> >>>>>>>>>>>>>>>> >>>> used, but I'm
> >>>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
> >>>>>>>>>>>>>>>> >>>> selects the
> >>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is
> that
> >>>>>>>>>>>>>>>> >>>> the "netfs" option
> >>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>> Thanks!
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
> >>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
> >>>>>>>>>>>>>>>> >>>> wrote:
> >>>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>>> >>>>> Take a look at this:
> >>>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
> >>>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server,
> and
> >>>>>>>>>>>>>>>> >>>>> cannot be
> >>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe your
> >>>>>>>>>>>>>>>> >>>>> plugin will take
> >>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in
> and
> >>>>>>>>>>>>>>>> >>>>> hooking it up to
> >>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the
> Xen
> >>>>>>>>>>>>>>>> >>>>> stuff).
> >>>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a
> 1:1
> >>>>>>>>>>>>>>>> >>>>> mapping, or if
> >>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a
> >>>>>>>>>>>>>>>> >>>>> pool. You may need
> >>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more about
> >>>>>>>>>>>>>>>> >>>>> this. Let us know.
> >>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own
> >>>>>>>>>>>>>>>> >>>>> storage adaptor
> >>>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We
> >>>>>>>>>>>>>>>> >>>>> can cross that
> >>>>>>>>>>>>>>>> >>>>> bridge when we get there.
> >>>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
> >>>>>>>>>>>>>>>> >>>>> bindings doc.
> >>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally,
> >>>>>>>>>>>>>>>> >>>>> you'll see a
> >>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to that
> >>>>>>>>>>>>>>>> >>>>> 'conn' object. You
> >>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how
> that
> >>>>>>>>>>>>>>>> >>>>> is done for
> >>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java
> code
> >>>>>>>>>>>>>>>> >>>>> to see if you
> >>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi
> storage
> >>>>>>>>>>>>>>>> >>>>> pools before you
> >>>>>>>>>>>>>>>> >>>>> get started.
> >>>>>>>>>>>>>>>> >>>>>
> >>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
> >>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more,
> but
> >>>>>>>>>>>>>>>> >>>>> > you figure it
> >>>>>>>>>>>>>>>> >>>>> > supports
> >>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets,
> >>>>>>>>>>>>>>>> >>>>> > right?
> >>>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
> >>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
> >>>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
> >>>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the
> classes
> >>>>>>>>>>>>>>>> >>>>> >> you pointed out
> >>>>>>>>>>>>>>>> >>>>> >> last
> >>>>>>>>>>>>>>>> >>>>> >> week or so.
> >>>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
> >>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
> >>>>>>>>>>>>>>>> >>>>> >> wrote:
> >>>>>>>>>>>>>>>> >>>>> >>>
> >>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
> >>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
> >>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages for
> >>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
> >>>>>>>>>>>>>>>> >>>>> >>> you'd call
> >>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator
> login.
> >>>>>>>>>>>>>>>> >>>>> >>> See the info I
> >>>>>>>>>>>>>>>> >>>>> >>> sent
> >>>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
> >>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
> >>>>>>>>>>>>>>>> >>>>> >>> storage type
> >>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
> >>>>>>>>>>>>>>>> >>>>> >>>
> >>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
> >>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
> >>>>>>>>>>>>>>>> >>>>> >>> wrote:
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> Hi,
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
> >>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
> >>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage
> framework
> >>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
> >>>>>>>>>>>>>>>> >>>>> >>>> times
> >>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete
> >>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
> >>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
> >>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1
> mapping
> >>>>>>>>>>>>>>>> >>>>> >>>> between a
> >>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
> >>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the
> admin
> >>>>>>>>>>>>>>>> >>>>> >>>> to create large
> >>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would
> >>>>>>>>>>>>>>>> >>>>> >>>> likely house many
> >>>>>>>>>>>>>>>> >>>>> >>>> root and
> >>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed
> to
> >>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
> >>>>>>>>>>>>>>>> >>>>> >>>> the
> >>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
> >>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
> >>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might
> work on
> >>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
> >>>>>>>>>>>>>>>> >>>>> >>>> still
> >>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will
> need
> >>>>>>>>>>>>>>>> >>>>> >>>> to interact with
> >>>>>>>>>>>>>>>> >>>>> >>>> the
> >>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to
> expect
> >>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
> >>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this
> to
> >>>>>>>>>>>>>>>> >>>>> >>>> work?
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
> >>>>>>>>>>>>>>>> >>>>> >>>> Mike
> >>>>>>>>>>>>>>>> >>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>> >>>> --
> >>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
> >>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
> >>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>>> >>>>> >>
> >>>>>>>>>>>>>>>> >>>>> >> --
> >>>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
> >>>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
> >>>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>>> >>>>> >
> >>>>>>>>>>>>>>>> >>>>> > --
> >>>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
> >>>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
> >>>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>>
> >>>>>>>>>>>>>>>> >>>> --
> >>>>>>>>>>>>>>>> >>>> Mike Tutkowski
> >>>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>> >>>> o: 303.746.7302
> >>>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>>
> >>>>>>>>>>>>>>>> >>> --
> >>>>>>>>>>>>>>>> >>> Mike Tutkowski
> >>>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>> >>> o: 303.746.7302
> >>>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>>> >>
> >>>>>>>>>>>>>>>> >> --
> >>>>>>>>>>>>>>>> >> Mike Tutkowski
> >>>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>>> >> o: 303.746.7302
> >>>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> --
> >>>>>>>>>>>>>>> Mike Tutkowski
> >>>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> --
> >>>>>>>>>>>>>> Mike Tutkowski
> >>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>>>>>> o: 303.746.7302
> >>>>>>>>>>>>>> Advancing the way the world uses the cloud™
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> --
> >>>>>>>>>> Mike Tutkowski
> >>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>>>>> o: 303.746.7302
> >>>>>>>>>> Advancing the way the world uses the cloud™
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> --
> >>>>>>> Mike Tutkowski
> >>>>>>> Senior CloudStack Developer, SolidFire Inc.
> >>>>>>> e: mike.tutkowski@solidfire.com
> >>>>>>> o: 303.746.7302
> >>>>>>> Advancing the way the world uses the cloud™
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Mike Tutkowski
> >>>> Senior CloudStack Developer, SolidFire Inc.
> >>>> e: mike.tutkowski@solidfire.com
> >>>> o: 303.746.7302
> >>>> Advancing the way the world uses the cloud™
> >>
> >>
> >>
> >>
> >> --
> >> Mike Tutkowski
> >> Senior CloudStack Developer, SolidFire Inc.
> >> e: mike.tutkowski@solidfire.com
> >> o: 303.746.7302
> >> Advancing the way the world uses the cloud™
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
It looks like this KVMStorageProcessor is meant to handle
StorageSubSystemCommand commands, probably to support the new storage
framework for things that are now triggered via the mgmt server's
storage code.
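
If that's right, the attach path for managed storage would roughly tie the
earlier pieces together like this (names below are made up, just to show
where the adaptor and the block-device disk XML would slot in):

// Made-up glue code, only to show where the pieces discussed in this thread
// would fit: register/verify the pool, let the adaptor log in to the LUN,
// then build the block-device disk XML to attach to the domain.
public String prepareDiskForAttach(SolidFireAdaptorSketch adaptor,
                                   SolidFireAdaptorSketch.PoolInfo pool,
                                   String volumeIqn) throws Exception {
    String device = adaptor.getPhysicalDisk(pool, volumeIqn);
    return "<disk type='block' device='disk'>"
         + "<source dev='" + device + "'/>"
         + "<target dev='vdb' bus='virtio'/>"
         + "</disk>";
}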

On Sat, Sep 14, 2013 at 12:02 AM, Marcus Sorensen <sh...@gmail.com> wrote:
> Looks like things might be slightly different now in 4.2, with
> KVMStorageProcessor.java in the mix. This looks more or less like some
> of the commands were ripped out verbatim from LibvirtComputingResource
> and placed here, so in general what I've said is probably still true,
> just that the location of things like AttachVolumeCommand might be
> different, in this file rather than LibvirtComputingResource.java.
>
> On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <sh...@gmail.com> wrote:
>> Ok, KVM will be close to that, of course, because only the hypervisor
>> classes differ, the rest is all mgmt server. Creating a volume is just
>> a db entry until it's deployed for the first time. AttachVolumeCommand
>> on the agent side (LibvirtStorageAdaptor.java is analogous to
>> CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
>> StorageAdaptor) to log in the host to the target and then you have a
>> block device.  Maybe libvirt will do that for you, but my quick read
>> made it sound like the iscsi libvirt pool type is actually a pool, not
>> a lun or volume, so you'll need to figure out if that works or if
>> you'll have to use iscsiadm commands.
>>
>> If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
>> doesn't really manage your pool the way you want), you're going to
>> have to create a version of KVMStoragePool class and a StorageAdaptor
>> class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
>> implementing all of the methods, then in KVMStorageManager.java
>> there's a "_storageMapper" map. This is used to select the correct
>> adaptor, you can see in this file that every call first pulls the
>> correct adaptor out of this map via getStorageAdaptor. So you can see
>> a comment in this file that says "add other storage adaptors here",
>> where it puts to this map, this is where you'd register your adaptor.
>>
>> So, referencing StorageAdaptor.java, createStoragePool accepts all of
>> the pool data (host, port, name, path) which would be used to log the
>> host into the initiator. I *believe* the method getPhysicalDisk will
>> need to do the work of attaching the lun.  AttachVolumeCommand calls
>> this and then creates the XML diskdef and attaches it to the VM. Now,
>> one thing you need to know is that createStoragePool is called often,
>> sometimes just to make sure the pool is there. You may want to create
>> a map in your adaptor class and keep track of pools that have been
>> created, LibvirtStorageAdaptor doesn't have to do this because it asks
>> libvirt about which storage pools exist. There are also calls to
>> refresh the pool stats, and all of the other calls can be seen in the
>> StorageAdaptor as well. There's a createPhysicalDisk, clone, etc., but
>> it's probably a hold-over from 4.1, as I have the vague idea that
>> volumes are created on the mgmt server via the plugin now, so whatever
>> doesn't apply can just be stubbed out (or optionally
>> extended/reimplemented here, if you don't mind the hosts talking to
>> the san api).
>>
>> There is a difference between attaching new volumes and launching a VM
>> with existing volumes.  In the latter case, the VM definition that was
>> passed to the KVM agent includes the disks, (StartCommand).
>>
>> I'd be interested in how your pool is defined for Xen, I imagine it
>> would need to be kept the same. Is it just a definition to the SAN
>> (ip address or some such, port number) and perhaps a volume pool name?
>>
>>> If there is a way for me to update the ACL list on the SAN to have only a
>>> single KVM host have access to the volume, that would be ideal.
>>
>> That depends on your SAN API.  I was under the impression that the
>> storage plugin framework allowed for acls, or for you to do whatever
>> you want for create/attach/delete/snapshot, etc. You'd just call your
>> SAN API with the host info for the ACLs prior to when the disk is
>> attached (or the VM is started).  I'd have to look more at the
>> framework to know the details, in 4.1 I would do this in
>> getPhysicalDisk just prior to connecting up the LUN.
>>
>>
>> On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
>> <mi...@solidfire.com> wrote:
>>> OK, yeah, the ACL part will be interesting. That is a bit different from how
>>> it works with XenServer and VMware.
>>>
>>> Just to give you an idea how it works in 4.2 with XenServer:
>>>
>>> * The user creates a CS volume (this is just recorded in the cloud.volumes
>>> table).
>>>
>>> * The user attaches the volume as a disk to a VM for the first time (if the
>>> storage allocator picks the SolidFire plug-in, the storage framework invokes
>>> a method on the plug-in that creates a volume on the SAN...info like the IQN
>>> of the SAN volume is recorded in the DB).
>>>
>>> * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
>>> determines based on a flag passed in that the storage in question is
>>> "CloudStack-managed" storage (as opposed to "traditional" preallocated
>>> storage). This tells it to discover the iSCSI target. Once discovered it
>>> determines if the iSCSI target already contains a storage repository (it
>>> would if this were a re-attach situation). If it does contain an SR already,
>>> then there should already be one VDI, as well. If there is no SR, an SR is
>>> created and a single VDI is created within it (that takes up about as much
>>> space as was requested for the CloudStack volume).
>>>
>>> * The normal attach-volume logic continues (it depends on the existence of
>>> an SR and a VDI).
>>>
>>> The VMware case is essentially the same (mainly just substitute datastore
>>> for SR and VMDK for VDI).
>>>
>>> In both cases, all hosts in the cluster have discovered the iSCSI target,
>>> but only the host that is currently running the VM that is using the VDI (or
>>> VMDK) is actually using the disk.
>>>
>>> Live Migration should be OK because the hypervisors communicate with
>>> whatever metadata they have on the SR (or datastore).
>>>
>>> I see what you're saying with KVM, though.
>>>
>>> In that case, the hosts are clustered only in CloudStack's eyes. CS controls
>>> Live Migration. You don't really need a clustered filesystem on the LUN. The
>>> LUN could be handed over raw to the VM using it.
>>>
>>> If there is a way for me to update the ACL list on the SAN to have only a
>>> single KVM host have access to the volume, that would be ideal.
>>>
>>> Also, I agree I'll need to use iscsiadm to discover and log in to the iSCSI
>>> target. I'll also need to take the resultant new device and pass it into the
>>> VM.
>>>
>>> Does this sound reasonable? Please call me out on anything I seem incorrect
>>> about. :)
>>>
>>> Thanks for all the thought on this, Marcus!
>>>
>>>
>>> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <sh...@gmail.com>
>>> wrote:
>>>>
>>>> Perfect. You'll have a domain def (the VM), a disk def, and then attach
>>>> the disk def to the vm. You may need to do your own StorageAdaptor and run
>>>> iscsiadm commands to accomplish that, depending on how the libvirt iscsi
>>>> works. My impression is that a 1:1:1 pool/lun/volume isn't how it works on
>>>> xen at the moment, nor is it ideal.
>>>>
>>>> Your plugin will handle acls as far as which host can see which luns as
>>>> well, I remember discussing that months ago, so that a disk won't be
>>>> connected until the hypervisor has exclusive access, so it will be safe and
>>>> fence the disk from rogue nodes that cloudstack loses connectivity with. It
>>>> should revoke access to everything but the target host... Except for during
>>>> migration but we can discuss that later, there's a migration prep process
>>>> where the new host can be added to the acls, and the old host can be removed
>>>> post migration.
>>>>
>>>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <mi...@solidfire.com>
>>>> wrote:
>>>>>
>>>>> Yeah, that would be ideal.
>>>>>
>>>>> So, I would still need to discover the iSCSI target, log in to it, then
>>>>> figure out what /dev/sdX was created as a result (and leave it as is - do
>>>>> not format it with any file system...clustered or not). I would pass that
>>>>> device into the VM.
>>>>>
>>>>> Kind of accurate?
>>>>>
>>>>>
>>>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <sh...@gmail.com>
>>>>> wrote:
>>>>>>
>>>>>> Look in LibvirtVMDef.java (I think) for the disk definitions. There are
>>>>>> ones that work for block devices rather than files. You can piggy back off
>>>>>> of the existing disk definitions and attach it to the vm as a block device.
>>>>>> The definition is an XML string per libvirt XML format. You may want to use
>>>>>> an alternate path to the disk rather than just /dev/sdx like I mentioned,
>>>>>> there are by-id paths to the block devices, as well as other ones that will
>>>>>> be consistent and easier for management, not sure how familiar you are with
>>>>>> device naming on Linux.
>>>>>>
>>>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>>>>>>>
>>>>>>> No, as that would rely on virtualized network/iscsi initiator inside
>>>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on hypervisor) as
>>>>>>> a disk to the VM, rather than attaching some image file that resides on a
>>>>>>> filesystem, mounted on the host, living on a target.
>>>>>>>
>>>>>>> Actually, if you plan on the storage supporting live migration I think
>>>>>>> this is the only way. You can't put a filesystem on it and mount it in two
>>>>>>> places to facilitate migration unless its a clustered filesystem, in which
>>>>>>> case you're back to shared mount point.
>>>>>>>
>>>>>>> As far as I'm aware, the xenserver SR style is basically LVM with a xen
>>>>>>> specific cluster management, a custom CLVM. They don't use a filesystem
>>>>>>> either.
>>>>>>>
>>>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>>>>>> <mi...@solidfire.com> wrote:
>>>>>>>>
>>>>>>>> When you say, "wire up the lun directly to the vm," do you mean
>>>>>>>> circumventing the hypervisor? I didn't think we could do that in CS.
>>>>>>>> OpenStack, on the other hand, always circumvents the hypervisor, as far as I
>>>>>>>> know.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <sh...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Better to wire up the lun directly to the vm unless there is a good
>>>>>>>>> reason not to.
>>>>>>>>>
>>>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> You could do that, but as mentioned I think its a mistake to go to
>>>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns and then putting
>>>>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
>>>>>>>>>> image on that filesystem. You'll lose a lot of iops along the way, and have
>>>>>>>>>> more overhead with the filesystem and its journaling, etc.
>>>>>>>>>>
>>>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>>>>>>>>>>>
>>>>>>>>>>> So, the way people use our SAN with KVM and CS today is by
>>>>>>>>>>> selecting SharedMountPoint and specifying the location of the share.
>>>>>>>>>>>
>>>>>>>>>>> They can set up their share using Open iSCSI by discovering their
>>>>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on their file
>>>>>>>>>>> system.
>>>>>>>>>>>
>>>>>>>>>>> Would it make sense for me to just do that discovery, logging in,
>>>>>>>>>>> and mounting behind the scenes for them and letting the current code manage
>>>>>>>>>>> the rest as it currently does?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch up
>>>>>>>>>>>> on the work done in KVM, but this is basically just disk snapshots + memory
>>>>>>>>>>>> dump. I still think disk snapshots would preferably be handled by the SAN,
>>>>>>>>>>>> and then memory dumps can go to secondary storage or something else. This is
>>>>>>>>>>>> relatively new ground with CS and KVM, so we will want to see how others are
>>>>>>>>>>>> planning theirs.
>>>>>>>>>>>>
>>>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi style on an
>>>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format. Otherwise you're
>>>>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a QCOW2 disk image,
>>>>>>>>>>>>> and that seems unnecessary and a performance killer.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM, and
>>>>>>>>>>>>> handling snapshots on the San side via the storage plugin is best. My
>>>>>>>>>>>>> impression from the storage plugin refactor was that there was a snapshot
>>>>>>>>>>>>> service that would allow the San to handle snapshots.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back end, if
>>>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call your plugin for
>>>>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far as space, that
>>>>>>>>>>>>>> would depend on how your SAN handles it. With ours, we carve out luns from a
>>>>>>>>>>>>>> pool, and the snapshot space comes from the pool and is independent of the
>>>>>>>>>>>>>> LUN size the host sees.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hey Marcus,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't work
>>>>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI for
>>>>>>>>>>>>>>> the snapshot is placed on the same storage repository as the volume is on.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Same idea for VMware, I believe.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> So, what would happen in my case (let's say for XenServer and
>>>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots in 4.2) is I'd
>>>>>>>>>>>>>>> make an iSCSI target that is larger than what the user requested for the
>>>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly provisions volumes,
>>>>>>>>>>>>>>> so the space is not actually used unless it needs to be). The CloudStack
>>>>>>>>>>>>>>> volume would be the only "object" on the SAN volume until a hypervisor
>>>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the SAN volume.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If this is also how KVM behaves and there is no creation of
>>>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if there were support
>>>>>>>>>>>>>>> for this, our SAN currently only allows one LUN per iSCSI target), then I
>>>>>>>>>>>>>>> don't see how using this model will work.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Perhaps I will have to go enhance the current way this works
>>>>>>>>>>>>>>> with DIR?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> What do you think?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>>>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access today.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I suppose I could go that route, too, but I might as well
>>>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it just
>>>>>>>>>>>>>>>>> acts like a
>>>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The end-user
>>>>>>>>>>>>>>>>> is
>>>>>>>>>>>>>>>>> responsible for mounting a file system that all KVM hosts can
>>>>>>>>>>>>>>>>> access,
>>>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the storage.
>>>>>>>>>>>>>>>>> It could
>>>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
>>>>>>>>>>>>>>>>> cloudstack just
>>>>>>>>>>>>>>>>> knows that the provided directory path has VM images.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>>>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same
>>>>>>>>>>>>>>>>> > time.
>>>>>>>>>>>>>>>>> > Multiples, in fact.
>>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>>>>>>>>>>>>> >> Name                 State      Autostart
>>>>>>>>>>>>>>>>> >> -----------------------------------------
>>>>>>>>>>>>>>>>> >> default              active     yes
>>>>>>>>>>>>>>>>> >> iSCSI                active     no
>>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>> I see what you're saying now.
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on
>>>>>>>>>>>>>>>>> >>> an iSCSI target.
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN, so
>>>>>>>>>>>>>>>>> >>> there would only
>>>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt)
>>>>>>>>>>>>>>>>> >>> storage pool.
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>>>>>>>>>>>>>>>>> >>> targets/LUNs on the
>>>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt does
>>>>>>>>>>>>>>>>> >>> not support
>>>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if libvirt
>>>>>>>>>>>>>>>>> >>> supports
>>>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since
>>>>>>>>>>>>>>>>> >>> each one of its
>>>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>     public enum poolType {
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>>>>>>>>>>>>>>>>> >>>> RBD("rbd");
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>         String _poolType;
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>         @Override
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>         public String toString() {
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>             return _poolType;
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>     }
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently being
>>>>>>>>>>>>>>>>> >>>> used, but I'm
>>>>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
>>>>>>>>>>>>>>>>> >>>> selects the
>>>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is that
>>>>>>>>>>>>>>>>> >>>> the "netfs" option
>>>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>> Thanks!
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
>>>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>>>>>>>>>>>>>>>>> >>>> wrote:
>>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>>> >>>>> Take a look at this:
>>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and
>>>>>>>>>>>>>>>>> >>>>> cannot be
>>>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe your
>>>>>>>>>>>>>>>>> >>>>> plugin will take
>>>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in and
>>>>>>>>>>>>>>>>> >>>>> hooking it up to
>>>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the Xen
>>>>>>>>>>>>>>>>> >>>>> stuff).
>>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a 1:1
>>>>>>>>>>>>>>>>> >>>>> mapping, or if
>>>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a
>>>>>>>>>>>>>>>>> >>>>> pool. You may need
>>>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more about
>>>>>>>>>>>>>>>>> >>>>> this. Let us know.
>>>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own
>>>>>>>>>>>>>>>>> >>>>> storage adaptor
>>>>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We
>>>>>>>>>>>>>>>>> >>>>> can cross that
>>>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
>>>>>>>>>>>>>>>>> >>>>> bindings doc.
>>>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally,
>>>>>>>>>>>>>>>>> >>>>> you'll see a
>>>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to that
>>>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how that
>>>>>>>>>>>>>>>>> >>>>> is done for
>>>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java code
>>>>>>>>>>>>>>>>> >>>>> to see if you
>>>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi storage
>>>>>>>>>>>>>>>>> >>>>> pools before you
>>>>>>>>>>>>>>>>> >>>>> get started.
>>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but
>>>>>>>>>>>>>>>>> >>>>> > you figure it
>>>>>>>>>>>>>>>>> >>>>> > supports
>>>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets,
>>>>>>>>>>>>>>>>> >>>>> > right?
>>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the classes
>>>>>>>>>>>>>>>>> >>>>> >> you pointed out
>>>>>>>>>>>>>>>>> >>>>> >> last
>>>>>>>>>>>>>>>>> >>>>> >> week or so.
>>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>>>>>>>>>>>>>> >>>>> >> wrote:
>>>>>>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
>>>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages for
>>>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>>>>>>>>>>>>>>>> >>>>> >>> you'd call
>>>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator login.
>>>>>>>>>>>>>>>>> >>>>> >>> See the info I
>>>>>>>>>>>>>>>>> >>>>> >>> sent
>>>>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
>>>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>>>>>>>>>>>>>>>> >>>>> >>> storage type
>>>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>>>>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>>>>>>>>>>>>>> >>>>> >>> wrote:
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
>>>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage framework
>>>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>>>>>>>>>>>>>>>> >>>>> >>>> times
>>>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete
>>>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping
>>>>>>>>>>>>>>>>> >>>>> >>>> between a
>>>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the admin
>>>>>>>>>>>>>>>>> >>>>> >>>> to create large
>>>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would
>>>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>>>>>>>>>>>>>>>> >>>>> >>>> root and
>>>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to
>>>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>>>>>>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>>>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might work on
>>>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>>>>>>>>>>>>>>>> >>>>> >>>> still
>>>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will need
>>>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>>>>>>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to expect
>>>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this to
>>>>>>>>>>>>>>>>> >>>>> >>>> work?
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>>>>>>>>>>>>>> >>>>> >>>> Mike
>>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>> >>>> --
>>>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>>> >>>>> >> --
>>>>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>>>>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>>>>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>>> >>>>> > --
>>>>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>>>>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>>>>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>>> >>>> --
>>>>>>>>>>>>>>>>> >>>> Mike Tutkowski
>>>>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>> >>>> o: 303.746.7302
>>>>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>>> >>> --
>>>>>>>>>>>>>>>>> >>> Mike Tutkowski
>>>>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>> >>> o: 303.746.7302
>>>>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>>> >> --
>>>>>>>>>>>>>>>>> >> Mike Tutkowski
>>>>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>>> >> o: 303.746.7302
>>>>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>> Mike Tutkowski
>>>>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Mike Tutkowski
>>>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Mike Tutkowski
>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>> Advancing the way the world uses the cloud™
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Mike Tutkowski
>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>> o: 303.746.7302
>>>>>>>> Advancing the way the world uses the cloud™
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Mike Tutkowski
>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>>> e: mike.tutkowski@solidfire.com
>>>>> o: 303.746.7302
>>>>> Advancing the way the world uses the cloud™
>>>
>>>
>>>
>>>
>>> --
>>> Mike Tutkowski
>>> Senior CloudStack Developer, SolidFire Inc.
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud™

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Looks like things might be slightly different now in 4.2, with
KVMStorageProcessor.java in the mix. This looks more or less like some
of the commands were ripped out verbatim from LibvirtComputingResource
and placed here, so in general what I've said is probably still true,
just that the location of things like AttachVolumeCommand might be
different, in this file rather than LibvirtComputingResource.java.

On Fri, Sep 13, 2013 at 11:42 PM, Marcus Sorensen <sh...@gmail.com> wrote:
> Ok, KVM will be close to that, of course, because only the hypervisor
> classes differ, the rest is all mgmt server. Creating a volume is just
> a db entry until it's deployed for the first time. AttachVolumeCommand
> on the agent side (LibvirtStorageAdaptor.java is analogous to
> CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
> StorageAdaptor) to log in the host to the target and then you have a
> block device.  Maybe libvirt will do that for you, but my quick read
> made it sound like the iscsi libvirt pool type is actually a pool, not
> a lun or volume, so you'll need to figure out if that works or if
> you'll have to use iscsiadm commands.
>
> If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
> doesn't really manage your pool the way you want), you're going to
> have to create a version of KVMStoragePool class and a StorageAdaptor
> class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
> implementing all of the methods, then in KVMStorageManager.java
> there's a "_storageMapper" map. This is used to select the correct
> adaptor, you can see in this file that every call first pulls the
> correct adaptor out of this map via getStorageAdaptor. So you can see
> a comment in this file that says "add other storage adaptors here",
> where it puts to this map, this is where you'd register your adaptor.
>
> So, referencing StorageAdaptor.java, createStoragePool accepts all of
> the pool data (host, port, name, path) which would be used to log the
> host into the target. I *believe* the method getPhysicalDisk will
> need to do the work of attaching the lun.  AttachVolumeCommand calls
> this and then creates the XML diskdef and attaches it to the VM. Now,
> one thing you need to know is that createStoragePool is called often,
> sometimes just to make sure the pool is there. You may want to create
> a map in your adaptor class and keep track of pools that have been
> created, LibvirtStorageAdaptor doesn't have to do this because it asks
> libvirt about which storage pools exist. There are also calls to
> refresh the pool stats, and all of the other calls can be seen in the
> StorageAdaptor as well. There's a createPhysicalDisk, clone, etc, but
> it's probably a hold-over from 4.1, as I have the vague idea that
> volumes are created on the mgmt server via the plugin now, so whatever
> doesn't apply can just be stubbed out (or optionally
> extended/reimplemented here, if you don't mind the hosts talking to
> the san api).
>
> There is a difference between attaching new volumes and launching a VM
> with existing volumes.  In the latter case, the VM definition that was
> passed to the KVM agent includes the disks, (StartCommand).
>
> I'd be interested in how your pool is defined for Xen, I imagine it
> would need to be kept the same. Is it just a definition to the SAN
> (ip address or some such, port number) and perhaps a volume pool name?
>
>> If there is a way for me to update the ACL list on the SAN to have only a
>> single KVM host have access to the volume, that would be ideal.
>
> That depends on your SAN API.  I was under the impression that the
> storage plugin framework allowed for acls, or for you to do whatever
> you want for create/attach/delete/snapshot, etc. You'd just call your
> SAN API with the host info for the ACLs prior to when the disk is
> attached (or the VM is started).  I'd have to look more at the
> framework to know the details, in 4.1 I would do this in
> getPhysicalDisk just prior to connecting up the LUN.
>
>
> On Fri, Sep 13, 2013 at 10:27 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
>> OK, yeah, the ACL part will be interesting. That is a bit different from how
>> it works with XenServer and VMware.
>>
>> Just to give you an idea how it works in 4.2 with XenServer:
>>
>> * The user creates a CS volume (this is just recorded in the cloud.volumes
>> table).
>>
>> * The user attaches the volume as a disk to a VM for the first time (if the
>> storage allocator picks the SolidFire plug-in, the storage framework invokes
>> a method on the plug-in that creates a volume on the SAN...info like the IQN
>> of the SAN volume is recorded in the DB).
>>
>> * CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
>> determines based on a flag passed in that the storage in question is
>> "CloudStack-managed" storage (as opposed to "traditional" preallocated
>> storage). This tells it to discover the iSCSI target. Once discovered it
>> determines if the iSCSI target already contains a storage repository (it
>> would if this were a re-attach situation). If it does contain an SR already,
>> then there should already be one VDI, as well. If there is no SR, an SR is
>> created and a single VDI is created within it (that takes up about as much
>> space as was requested for the CloudStack volume).
>>
>> * The normal attach-volume logic continues (it depends on the existence of
>> an SR and a VDI).
>>
>> The VMware case is essentially the same (mainly just substitute datastore
>> for SR and VMDK for VDI).
>>
>> In both cases, all hosts in the cluster have discovered the iSCSI target,
>> but only the host that is currently running the VM that is using the VDI (or
>> VMDK) is actually using the disk.
>>
>> Live Migration should be OK because the hypervisors communicate with
>> whatever metadata they have on the SR (or datastore).
>>
>> I see what you're saying with KVM, though.
>>
>> In that case, the hosts are clustered only in CloudStack's eyes. CS controls
>> Live Migration. You don't really need a clustered filesystem on the LUN. The
>> LUN could be handed over raw to the VM using it.
>>
>> If there is a way for me to update the ACL list on the SAN to have only a
>> single KVM host have access to the volume, that would be ideal.
>>
>> Also, I agree I'll need to use iscsiadm to discover and log in to the iSCSI
>> target. I'll also need to take the resultant new device and pass it into the
>> VM.
>>
>> Does this sound reasonable? Please call me out on anything I seem incorrect
>> about. :)
>>
>> Thanks for all the thought on this, Marcus!
>>
>>
>> On Fri, Sep 13, 2013 at 8:25 PM, Marcus Sorensen <sh...@gmail.com>
>> wrote:
>>>
>>> Perfect. You'll have a domain def (the VM), a disk def, and then attach
>>> the disk def to the vm. You may need to do your own StorageAdaptor and run
>>> iscsiadm commands to accomplish that, depending on how the libvirt iscsi
>>> works. My impression is that a 1:1:1 pool/lun/volume isn't how it works on
>>> xen at the moment, nor is it ideal.
>>>
>>> Your plugin will handle acls as far as which host can see which luns as
>>> well, I remember discussing that months ago, so that a disk won't be
>>> connected until the hypervisor has exclusive access, so it will be safe and
>>> fence the disk from rogue nodes that cloudstack loses connectivity with. It
>>> should revoke access to everything but the target host... Except for during
>>> migration but we can discuss that later, there's a migration prep process
>>> where the new host can be added to the acls, and the old host can be removed
>>> post migration.
>>>
>>> On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <mi...@solidfire.com>
>>> wrote:
>>>>
>>>> Yeah, that would be ideal.
>>>>
>>>> So, I would still need to discover the iSCSI target, log in to it, then
>>>> figure out what /dev/sdX was created as a result (and leave it as is - do
>>>> not format it with any file system...clustered or not). I would pass that
>>>> device into the VM.
>>>>
>>>> Kind of accurate?
>>>>
>>>>
>>>> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <sh...@gmail.com>
>>>> wrote:
>>>>>
>>>>> Look in LibvirtVMDef.java (I think) for the disk definitions. There are
>>>>> ones that work for block devices rather than files. You can piggy back off
>>>>> of the existing disk definitions and attach it to the vm as a block device.
>>>>> The definition is an XML string per libvirt XML format. You may want to use
>>>>> an alternate path to the disk rather than just /dev/sdx like I mentioned,
>>>>> there are by-id paths to the block devices, as well as other ones that will
>>>>> be consistent and easier for management, not sure how familiar you are with
>>>>> device naming on Linux.
>>>>>
>>>>> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>>>>>>
>>>>>> No, as that would rely on virtualized network/iscsi initiator inside
>>>>>> the vm, which also sucks. I mean attach /dev/sdx (your lun on hypervisor) as
>>>>>> a disk to the VM, rather than attaching some image file that resides on a
>>>>>> filesystem, mounted on the host, living on a target.
>>>>>>
>>>>>> Actually, if you plan on the storage supporting live migration I think
>>>>>> this is the only way. You can't put a filesystem on it and mount it in two
>>>>>> places to facilitate migration unless its a clustered filesystem, in which
>>>>>> case you're back to shared mount point.
>>>>>>
>>>>>> As far as I'm aware, the xenserver SR style is basically LVM with a xen
>>>>>> specific cluster management, a custom CLVM. They don't use a filesystem
>>>>>> either.
>>>>>>
>>>>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski"
>>>>>> <mi...@solidfire.com> wrote:
>>>>>>>
>>>>>>> When you say, "wire up the lun directly to the vm," do you mean
>>>>>>> circumventing the hypervisor? I didn't think we could do that in CS.
>>>>>>> OpenStack, on the other hand, always circumvents the hypervisor, as far as I
>>>>>>> know.
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <sh...@gmail.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Better to wire up the lun directly to the vm unless there is a good
>>>>>>>> reason not to.
>>>>>>>>
>>>>>>>> On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> You could do that, but as mentioned I think its a mistake to go to
>>>>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns and then putting
>>>>>>>>> a filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
>>>>>>>>> image on that filesystem. You'll lose a lot of iops along the way, and have
>>>>>>>>> more overhead with the filesystem and its journaling, etc.
>>>>>>>>>
>>>>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski"
>>>>>>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>>>>>>>>>>
>>>>>>>>>> So, the way people use our SAN with KVM and CS today is by
>>>>>>>>>> selecting SharedMountPoint and specifying the location of the share.
>>>>>>>>>>
>>>>>>>>>> They can set up their share using Open iSCSI by discovering their
>>>>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on their file
>>>>>>>>>> system.
>>>>>>>>>>
>>>>>>>>>> Would it make sense for me to just do that discovery, logging in,
>>>>>>>>>> and mounting behind the scenes for them and letting the current code manage
>>>>>>>>>> the rest as it currently does?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen
>>>>>>>>>> <sh...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch up
>>>>>>>>>>> on the work done in KVM, but this is basically just disk snapshots + memory
>>>>>>>>>>> dump. I still think disk snapshots would preferably be handled by the SAN,
>>>>>>>>>>> and then memory dumps can go to secondary storage or something else. This is
>>>>>>>>>>> relatively new ground with CS and KVM, so we will want to see how others are
>>>>>>>>>>> planning theirs.
>>>>>>>>>>>
>>>>>>>>>>> On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Let me back up and say I don't think you'd use a vdi style on an
>>>>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format. Otherwise you're
>>>>>>>>>>>> putting a filesystem on your lun, mounting it, creating a QCOW2 disk image,
>>>>>>>>>>>> and that seems unnecessary and a performance killer.
>>>>>>>>>>>>
>>>>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM, and
>>>>>>>>>>>> handling snapshots on the San side via the storage plugin is best. My
>>>>>>>>>>>> impression from the storage plugin refactor was that there was a snapshot
>>>>>>>>>>>> service that would allow the San to handle snapshots.
>>>>>>>>>>>>
>>>>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back end, if
>>>>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call your plugin for
>>>>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far as space, that
>>>>>>>>>>>>> would depend on how your SAN handles it. With ours, we carve out luns from a
>>>>>>>>>>>>> pool, and the snapshot space comes from the pool and is independent of the
>>>>>>>>>>>>> LUN size the host sees.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski"
>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hey Marcus,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't work
>>>>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI for
>>>>>>>>>>>>>> the snapshot is placed on the same storage repository as the volume is on.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Same idea for VMware, I believe.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So, what would happen in my case (let's say for XenServer and
>>>>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots in 4.2) is I'd
>>>>>>>>>>>>>> make an iSCSI target that is larger than what the user requested for the
>>>>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly provisions volumes,
>>>>>>>>>>>>>> so the space is not actually used unless it needs to be). The CloudStack
>>>>>>>>>>>>>> volume would be the only "object" on the SAN volume until a hypervisor
>>>>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the SAN volume.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If this is also how KVM behaves and there is no creation of
>>>>>>>>>>>>>> LUNs within an iSCSI target from libvirt (which, even if there were support
>>>>>>>>>>>>>> for this, our SAN currently only allows one LUN per iSCSI target), then I
>>>>>>>>>>>>>> don't see how using this model will work.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Perhaps I will have to go enhance the current way this works
>>>>>>>>>>>>>> with DIR?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> What do you think?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski
>>>>>>>>>>>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> That appears to be the way it's used for iSCSI access today.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I suppose I could go that route, too, but I might as well
>>>>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen
>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it just
>>>>>>>>>>>>>>>> acts like a
>>>>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The end-user
>>>>>>>>>>>>>>>> is
>>>>>>>>>>>>>>>> responsible for mounting a file system that all KVM hosts can
>>>>>>>>>>>>>>>> access,
>>>>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the storage.
>>>>>>>>>>>>>>>> It could
>>>>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
>>>>>>>>>>>>>>>> cloudstack just
>>>>>>>>>>>>>>>> knows that the provided directory path has VM images.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen
>>>>>>>>>>>>>>>> <sh...@gmail.com> wrote:
>>>>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same
>>>>>>>>>>>>>>>> > time.
>>>>>>>>>>>>>>>> > Multiples, in fact.
>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>>>>>>>>>>>> >> Name                 State      Autostart
>>>>>>>>>>>>>>>> >> -----------------------------------------
>>>>>>>>>>>>>>>> >> default              active     yes
>>>>>>>>>>>>>>>> >> iSCSI                active     no
>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>> I see what you're saying now.
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on
>>>>>>>>>>>>>>>> >>> an iSCSI target.
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN, so
>>>>>>>>>>>>>>>> >>> there would only
>>>>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt)
>>>>>>>>>>>>>>>> >>> storage pool.
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>>>>>>>>>>>>>>>> >>> targets/LUNs on the
>>>>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt does
>>>>>>>>>>>>>>>> >>> not support
>>>>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if libvirt
>>>>>>>>>>>>>>>> >>> supports
>>>>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since
>>>>>>>>>>>>>>>> >>> each one of its
>>>>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>     public enum poolType {
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>>>>>>>>>>>>>>> >>>> LOGICAL("logical"), DIR("dir"),
>>>>>>>>>>>>>>>> >>>> RBD("rbd");
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>         String _poolType;
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>         @Override
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>         public String toString() {
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>             return _poolType;
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>     }
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently being
>>>>>>>>>>>>>>>> >>>> used, but I'm
>>>>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
>>>>>>>>>>>>>>>> >>>> selects the
>>>>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is that
>>>>>>>>>>>>>>>> >>>> the "netfs" option
>>>>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>> Thanks!
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen
>>>>>>>>>>>>>>>> >>>> <sh...@gmail.com>
>>>>>>>>>>>>>>>> >>>> wrote:
>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>> >>>>> Take a look at this:
>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and
>>>>>>>>>>>>>>>> >>>>> cannot be
>>>>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe your
>>>>>>>>>>>>>>>> >>>>> plugin will take
>>>>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in and
>>>>>>>>>>>>>>>> >>>>> hooking it up to
>>>>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the Xen
>>>>>>>>>>>>>>>> >>>>> stuff).
>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a 1:1
>>>>>>>>>>>>>>>> >>>>> mapping, or if
>>>>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a
>>>>>>>>>>>>>>>> >>>>> pool. You may need
>>>>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more about
>>>>>>>>>>>>>>>> >>>>> this. Let us know.
>>>>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own
>>>>>>>>>>>>>>>> >>>>> storage adaptor
>>>>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We
>>>>>>>>>>>>>>>> >>>>> can cross that
>>>>>>>>>>>>>>>> >>>>> bridge when we get there.
>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
>>>>>>>>>>>>>>>> >>>>> bindings doc.
>>>>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally,
>>>>>>>>>>>>>>>> >>>>> you'll see a
>>>>>>>>>>>>>>>> >>>>> connection object be made, then calls made to that
>>>>>>>>>>>>>>>> >>>>> 'conn' object. You
>>>>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how that
>>>>>>>>>>>>>>>> >>>>> is done for
>>>>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java code
>>>>>>>>>>>>>>>> >>>>> to see if you
>>>>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi storage
>>>>>>>>>>>>>>>> >>>>> pools before you
>>>>>>>>>>>>>>>> >>>>> get started.
>>>>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but
>>>>>>>>>>>>>>>> >>>>> > you figure it
>>>>>>>>>>>>>>>> >>>>> > supports
>>>>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets,
>>>>>>>>>>>>>>>> >>>>> > right?
>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the classes
>>>>>>>>>>>>>>>> >>>>> >> you pointed out
>>>>>>>>>>>>>>>> >>>>> >> last
>>>>>>>>>>>>>>>> >>>>> >> week or so.
>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>>>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>>>>>>>>>>>>> >>>>> >> wrote:
>>>>>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
>>>>>>>>>>>>>>>> >>>>> >>> initiator utilities
>>>>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages for
>>>>>>>>>>>>>>>> >>>>> >>> any distro. Then
>>>>>>>>>>>>>>>> >>>>> >>> you'd call
>>>>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator login.
>>>>>>>>>>>>>>>> >>>>> >>> See the info I
>>>>>>>>>>>>>>>> >>>>> >>> sent
>>>>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
>>>>>>>>>>>>>>>> >>>>> >>> libvirt iscsi
>>>>>>>>>>>>>>>> >>>>> >>> storage type
>>>>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>>>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>>>>>>>>>>>>> >>>>> >>> wrote:
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> Hi,
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
>>>>>>>>>>>>>>>> >>>>> >>>> developed a SolidFire
>>>>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage framework
>>>>>>>>>>>>>>>> >>>>> >>>> at the necessary
>>>>>>>>>>>>>>>> >>>>> >>>> times
>>>>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete
>>>>>>>>>>>>>>>> >>>>> >>>> volumes on the
>>>>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping
>>>>>>>>>>>>>>>> >>>>> >>>> between a
>>>>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the admin
>>>>>>>>>>>>>>>> >>>>> >>>> to create large
>>>>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would
>>>>>>>>>>>>>>>> >>>>> >>>> likely house many
>>>>>>>>>>>>>>>> >>>>> >>>> root and
>>>>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to
>>>>>>>>>>>>>>>> >>>>> >>>> modify logic in
>>>>>>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>>>>>>>>>>>>>>>> >>>>> >>>> create/delete storage
>>>>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might work on
>>>>>>>>>>>>>>>> >>>>> >>>> KVM, but I'm
>>>>>>>>>>>>>>>> >>>>> >>>> still
>>>>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will need
>>>>>>>>>>>>>>>> >>>>> >>>> to interact with
>>>>>>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to expect
>>>>>>>>>>>>>>>> >>>>> >>>> Open iSCSI will be
>>>>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this to
>>>>>>>>>>>>>>>> >>>>> >>>> work?
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>>>>>>>>>>>>> >>>>> >>>> Mike
>>>>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>>>>> >>>>> >>>> --
>>>>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>>>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>>>>> >>>>> >> --
>>>>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>>>>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>>>>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>>>>> >>>>> > --
>>>>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>>>>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>>>>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>>>>> >>>> --
>>>>>>>>>>>>>>>> >>>> Mike Tutkowski
>>>>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>> >>>> o: 303.746.7302
>>>>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>>
>>>>>>>>>>>>>>>> >>> --
>>>>>>>>>>>>>>>> >>> Mike Tutkowski
>>>>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>> >>> o: 303.746.7302
>>>>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>> >>
>>>>>>>>>>>>>>>> >> --
>>>>>>>>>>>>>>>> >> Mike Tutkowski
>>>>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>>> >> o: 303.746.7302
>>>>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Mike Tutkowski
>>>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> Mike Tutkowski
>>>>>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>>>> Advancing the way the world uses the cloud™
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Mike Tutkowski
>>>>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>> o: 303.746.7302
>>>>>>>>>> Advancing the way the world uses the cloud™
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Mike Tutkowski
>>>>>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>> o: 303.746.7302
>>>>>>> Advancing the way the world uses the cloud™
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Mike Tutkowski
>>>> Senior CloudStack Developer, SolidFire Inc.
>>>> e: mike.tutkowski@solidfire.com
>>>> o: 303.746.7302
>>>> Advancing the way the world uses the cloud™
>>
>>
>>
>>
>> --
>> Mike Tutkowski
>> Senior CloudStack Developer, SolidFire Inc.
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud™

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Ok, KVM will be close to that, of course, because only the hypervisor
classes differ, the rest is all mgmt server. Creating a volume is just
a db entry until it's deployed for the first time. AttachVolumeCommand
on the agent side (LibvirtStorageAdaptor.java is analogous to
CitrixResourceBase.java) will do the iscsiadm commands (via a KVM
StorageAdaptor) to log in the host to the target and then you have a
block device.  Maybe libvirt will do that for you, but my quick read
made it sound like the iscsi libvirt pool type is actually a pool, not
a lun or volume, so you'll need to figure out if that works or if
you'll have to use iscsiadm commands.
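
For reference, a rough sketch of the iscsiadm calls being discussed here, just
shelled out from Java via ProcessBuilder. The class and method names are made
up for illustration (this is not CloudStack code), and it assumes the
open-iscsi utilities are installed on the KVM host:

    import java.io.IOException;

    public class IscsiAdmSketch {

        // e.g. runIscsiadm("-m", "discovery", "-t", "sendtargets", "-p", "10.0.0.5:3260")
        static void runIscsiadm(String... args) throws IOException, InterruptedException {
            String[] cmd = new String[args.length + 1];
            cmd[0] = "iscsiadm";
            System.arraycopy(args, 0, cmd, 1, args.length);
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("iscsiadm failed");
            }
        }

        // Discover the portal, then log in to one specific target IQN.
        static void loginTarget(String portal, String iqn) throws IOException, InterruptedException {
            runIscsiadm("-m", "discovery", "-t", "sendtargets", "-p", portal);
            runIscsiadm("-m", "node", "-T", iqn, "-p", portal, "--login");
            // After a successful login the LUN appears as a block device, e.g.
            // /dev/disk/by-path/ip-<portal>-iscsi-<iqn>-lun-0
        }

        public static void main(String[] args) throws Exception {
            loginTarget("10.0.0.5:3260", "iqn.2010-01.com.solidfire:example-volume");
        }
    }

Logging out again would be the same "-m node" call with --logout instead of --login.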

If you're NOT going to use LibvirtStorageAdaptor (because Libvirt
doesn't really manage your pool the way you want), you're going to
have to create a version of KVMStoragePool class and a StorageAdaptor
class (see LibvirtStoragePool.java and LibvirtStorageAdaptor.java),
implementing all of the methods, then in KVMStorageManager.java
there's a "_storageMapper" map. This is used to select the correct
adaptor, you can see in this file that every call first pulls the
correct adaptor out of this map via getStorageAdaptor. So you can see
a comment in this file that says "add other storage adaptors here",
where it puts to this map, this is where you'd register your adaptor.
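
As a standalone toy model of that adaptor-selection pattern (the real classes
are KVMStorageManager and StorageAdaptor; the simplified interfaces and names
below are stand-ins for illustration, not the actual CloudStack signatures):

    import java.util.HashMap;
    import java.util.Map;

    interface StorageAdaptor {
        // Return the path a VM should be given for this volume.
        String getPhysicalDiskPath(String volumeUuid);
    }

    class LibvirtAdaptor implements StorageAdaptor {
        public String getPhysicalDiskPath(String volumeUuid) {
            return "/mnt/primary/" + volumeUuid;           // file-backed volume
        }
    }

    class IscsiLunAdaptor implements StorageAdaptor {      // hypothetical managed-iSCSI adaptor
        public String getPhysicalDiskPath(String volumeUuid) {
            // a real adaptor would run the iscsiadm login here first
            return "/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-" + volumeUuid + "-lun-0";
        }
    }

    public class StorageManagerModel {
        private final Map<String, StorageAdaptor> storageMapper = new HashMap<String, StorageAdaptor>();

        public StorageManagerModel() {
            storageMapper.put("libvirt", new LibvirtAdaptor());
            storageMapper.put("iscsi", new IscsiLunAdaptor()); // "add other storage adaptors here"
        }

        public StorageAdaptor getStorageAdaptor(String poolType) {
            return storageMapper.get(poolType);
        }

        public static void main(String[] args) {
            StorageManagerModel mgr = new StorageManagerModel();
            System.out.println(mgr.getStorageAdaptor("iscsi").getPhysicalDiskPath("vol-1234"));
        }
    }

Every agent call would first pull the right adaptor out of that map, which is
what getStorageAdaptor does in the real code.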

So, referencing StorageAdaptor.java, createStoragePool accepts all of
the pool data (host, port, name, path), which would be used to log the
host's initiator into the target. I *believe* the method
getPhysicalDisk will need to do the work of attaching the lun.
AttachVolumeCommand calls this and then creates the XML diskdef and
attaches it to the VM. Now, one thing you need to know is that
createStoragePool is called often, sometimes just to make sure the
pool is there. You may want to create a map in your adaptor class and
keep track of pools that have been created; LibvirtStorageAdaptor
doesn't have to do this because it asks libvirt which storage pools
exist. There are also calls to refresh the pool stats, and all of the
other calls can be seen in the StorageAdaptor as well. There's a
createPhysicalDisk, clone, etc., but they're probably hold-overs from
4.1, as I have the vague idea that volumes are created on the mgmt
server via the plugin now, so whatever doesn't apply can just be
stubbed out (or optionally extended/reimplemented here, if you don't
mind the hosts talking to the san api).
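
So a bare-bones adaptor, under the assumptions above, might start out
something like this (class and method shapes are illustrative; check
the real StorageAdaptor interface for the actual signatures, and
IscsiLoginHelper is the sketch from earlier in this mail):

    import java.util.HashMap;
    import java.util.Map;

    public class SolidFireStorageAdaptor {

        // simple holder for pool data; the real class would implement
        // KVMStoragePool
        static class PoolInfo {
            final String uuid;
            final String host;
            final int port;

            PoolInfo(String uuid, String host, int port) {
                this.uuid = uuid;
                this.host = host;
                this.port = port;
            }
        }

        // createStoragePool gets called often, so remember what has already
        // been set up instead of re-doing the work every time
        private final Map<String, PoolInfo> _pools = new HashMap<String, PoolInfo>();

        public PoolInfo createStoragePool(String uuid, String host, int port,
                String path) {
            PoolInfo pool = _pools.get(uuid);
            if (pool == null) {
                pool = new PoolInfo(uuid, host, port);
                _pools.put(uuid, pool);
            }
            return pool;
        }

        // "attach the lun": log the initiator in to the target and hand back
        // the block device path that AttachVolumeCommand turns into a disk
        // XML def
        public String getPhysicalDisk(PoolInfo pool, String iqn) throws Exception {
            return IscsiLoginHelper.loginAndGetDevice(pool.host, pool.port, iqn, 0);
        }
    }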

There is a difference between attaching new volumes and launching a VM
with existing volumes. In the latter case, the VM definition that is
passed to the KVM agent (StartCommand) already includes the disks.

I'd be interested in how your pool is defined for Xen; I imagine it
would need to be kept the same. Is it just a definition pointing to the
SAN (ip address or some such, port number) and perhaps a volume pool
name?

> If there is a way for me to update the ACL list on the SAN to have only a
> single KVM host have access to the volume, that would be ideal.

That depends on your SAN API. I was under the impression that the
storage plugin framework allowed for acls, or for you to do whatever
you want for create/attach/delete/snapshot, etc. You'd just call your
SAN API with the host info for the ACLs prior to when the disk is
attached (or the VM is started). I'd have to look more at the
framework to know the details; in 4.1 I would do this in
getPhysicalDisk just prior to connecting up the LUN.
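
Just to sketch the ordering (SanClient and its methods are placeholders
for whatever the SolidFire API actually exposes, and IscsiLoginHelper is
the sketch from earlier in this mail):

    interface SanClient {
        void allowAccess(String volumeIqn, String hostInitiatorIqn);
        void revokeAccess(String volumeIqn, String hostInitiatorIqn);
    }

    // ACL the lun to just this host, then log in and hand the device to the
    // attach logic
    static String attachLun(SanClient san, String volumeIqn,
            String hostInitiatorIqn, String targetIp, int port) throws Exception {
        san.allowAccess(volumeIqn, hostInitiatorIqn);
        return IscsiLoginHelper.loginAndGetDevice(targetIp, port, volumeIqn, 0);
    }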



Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
OK, yeah, the ACL part will be interesting. That is a bit different from
how it works with XenServer and VMware.

Just to give you an idea how it works in 4.2 with XenServer:

* The user creates a CS volume (this is just recorded in the cloud.volumes
table).

* The user attaches the volume as a disk to a VM for the first time (if the
storage allocator picks the SolidFire plug-in, the storage framework
invokes a method on the plug-in that creates a volume on the SAN...info
like the IQN of the SAN volume is recorded in the DB).

* CitrixResourceBase's execute(AttachVolumeCommand) is executed. It
determines, based on a flag passed in, that the storage in question is
"CloudStack-managed" storage (as opposed to "traditional" preallocated
storage). This tells it to discover the iSCSI target. Once discovered, it
determines if the iSCSI target already contains a storage repository (it
would if this were a re-attach situation). If it does contain an SR
already, then there should already be one VDI, as well. If there is no SR,
an SR is created and a single VDI is created within it (that takes up about
as much space as was requested for the CloudStack volume).

* The normal attach-volume logic continues (it depends on the existence of
an SR and a VDI).

The VMware case is essentially the same (mainly just substitute datastore
for SR and VMDK for VDI).

In both cases, all hosts in the cluster have discovered the iSCSI target,
but only the host that is currently running the VM that is using the VDI
(or VMDK) is actually using the disk.

Live Migration should be OK because the hypervisors communicate with
whatever metadata they have on the SR (or datastore).

I see what you're saying with KVM, though.

In that case, the hosts are clustered only in CloudStack's eyes. CS
controls Live Migration. You don't really need a clustered filesystem on
the LUN. The LUN could be handed over raw to the VM using it.

If there is a way for me to update the ACL list on the SAN to have only a
single KVM host have access to the volume, that would be ideal.

Also, I agree I'll need to use iscsiadm to discover and log in to the iSCSI
target. I'll also need to take the resultant new device and pass it into
the VM.

Does this sound reasonable? Please call me out on anything I seem incorrect
about. :)

Thanks for all the thought on this, Marcus!


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Perfect. You'll have a domain def (the VM), a disk def, and then attach
the disk def to the vm. You may need to do your own StorageAdaptor and
run iscsiadm commands to accomplish that, depending on how the libvirt
iscsi support works. My impression is that a 1:1:1 pool/lun/volume isn't
how it works on xen at the moment, nor is it ideal.
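
For reference, the disk def for a raw block device ends up looking
roughly like this (the device path and target name are only examples;
LibvirtVMDef builds the equivalent string):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-path/ip-192.168.0.100:3260-iscsi-iqn.2010-01.com.solidfire:volume1-lun-0'/>
      <target dev='vdb' bus='virtio'/>
    </disk>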

Your plugin will handle acls as far as which host can see which luns as
well; I remember discussing that months ago. The idea is that a disk
won't be connected until the hypervisor has exclusive access, so it will
be safe and fence the disk from rogue nodes that cloudstack loses
connectivity with. It should revoke access to everything but the target
host... except during migration, but we can discuss that later; there's
a migration prep process where the new host can be added to the acls,
and the old host can be removed post migration.
On Sep 13, 2013 8:16 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Yeah, that would be ideal.
>
> So, I would still need to discover the iSCSI target, log in to it, then
> figure out what /dev/sdX was created as a result (and leave it as is - do
> not format it with any file system...clustered or not). I would pass that
> device into the VM.
>
> Kind of accurate?
>
>
> On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Look in LibvirtVMDef.java (I think) for the disk definitions. There are
>> ones that work for block devices rather than files. You can piggy back off
>> of the existing disk definitions and attach it to the vm as a block device.
>> The definition is an XML string per libvirt XML format. You may want to use
>> an alternate path to the disk rather than just /dev/sdx like I mentioned,
>> there are by-id paths to the block devices, as well as other ones that will
>> be consistent and easier for management, not sure how familiar you are with
>> device naming on Linux.
>>  On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>>
>>> No, as that would rely on virtualized network/iscsi initiator inside the
>>> vm, which also sucks. I mean attach /dev/sdx (your lun on hypervisor) as a
>>> disk to the VM, rather than attaching some image file that resides on a
>>> filesystem, mounted on the host, living on a target.
>>>
>>> Actually, if you plan on the storage supporting live migration I think
>>> this is the only way. You can't put a filesystem on it and mount it in two
>>> places to facilitate migration unless its a clustered filesystem, in which
>>> case you're back to shared mount point.
>>>
>>> As far as I'm aware, the xenserver SR style is basically LVM with a xen
>>> specific cluster management, a custom CLVM. They don't use a filesystem
>>> either.
>>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski" <mi...@solidfire.com>
>>> wrote:
>>>
>>>> When you say, "wire up the lun directly to the vm," do you mean
>>>> circumventing the hypervisor? I didn't think we could do that in CS.
>>>> OpenStack, on the other hand, always circumvents the hypervisor, as far as
>>>> I know.
>>>>
>>>>
>>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>>
>>>>> Better to wire up the lun directly to the vm unless there is a good
>>>>> reason not to.
>>>>>  On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> You could do that, but as mentioned I think its a mistake to go to
>>>>>> the trouble of creating a 1:1 mapping of CS volumes to luns and then
>>>>>> putting a filesystem on it, mounting it, and then putting a QCOW2 or even
>>>>>> RAW disk image on that filesystem. You'll lose a lot of iops along the way,
>>>>>> and have more overhead with the filesystem and its journaling, etc.
>>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski" <
>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>
>>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>>>>>>>
>>>>>>> So, the way people use our SAN with KVM and CS today is by selecting
>>>>>>> SharedMountPoint and specifying the location of the share.
>>>>>>>
>>>>>>> They can set up their share using Open iSCSI by discovering their
>>>>>>> iSCSI target, logging in to it, then mounting it somewhere on their file
>>>>>>> system.
>>>>>>>
>>>>>>> Would it make sense for me to just do that discovery, logging in,
>>>>>>> and mounting behind the scenes for them and letting the current code manage
>>>>>>> the rest as it currently does?
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen <
>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>
>>>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch up on
>>>>>>>> the work done in KVM, but this is basically just disk snapshots + memory
>>>>>>>> dump. I still think disk snapshots would preferably be handled by the SAN,
>>>>>>>> and then memory dumps can go to secondary storage or something else. This
>>>>>>>> is relatively new ground with CS and KVM, so we will want to see how others
>>>>>>>> are planning theirs.
>>>>>>>>  On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Let me back up and say I don't think you'd use a vdi style on an
>>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format. Otherwise you're
>>>>>>>>> putting a filesystem on your lun, mounting it, creating a QCOW2 disk image,
>>>>>>>>> and that seems unnecessary and a performance killer.
>>>>>>>>>
>>>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM, and
>>>>>>>>> handling snapshots on the San side via the storage plugin is best. My
>>>>>>>>> impression from the storage plugin refactor was that there was a snapshot
>>>>>>>>> service that would allow the San to handle snapshots.
>>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Ideally volume snapshots can be handled by the SAN back end, if
>>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call your plugin for
>>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far as space, that
>>>>>>>>>> would depend on how your SAN handles it. With ours, we carve out luns from
>>>>>>>>>> a pool, and the snapshot spave comes from the pool and is independent of
>>>>>>>>>> the LUN size the host sees.
>>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <
>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hey Marcus,
>>>>>>>>>>>
>>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't work
>>>>>>>>>>> when you take into consideration hypervisor snapshots?
>>>>>>>>>>>
>>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI for
>>>>>>>>>>> the snapshot is placed on the same storage repository as the volume is on.
>>>>>>>>>>>
>>>>>>>>>>> Same idea for VMware, I believe.
>>>>>>>>>>>
>>>>>>>>>>> So, what would happen in my case (let's say for XenServer and
>>>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots in 4.2) is I'd
>>>>>>>>>>> make an iSCSI target that is larger than what the user requested for the
>>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly provisions volumes,
>>>>>>>>>>> so the space is not actually used unless it needs to be). The CloudStack
>>>>>>>>>>> volume would be the only "object" on the SAN volume until a hypervisor
>>>>>>>>>>> snapshot is taken. This snapshot would also reside on the SAN volume.
>>>>>>>>>>>
>>>>>>>>>>> If this is also how KVM behaves and there is no creation of LUNs
>>>>>>>>>>> within an iSCSI target from libvirt (which, even if there were support for
>>>>>>>>>>> this, our SAN currently only allows one LUN per iSCSI target), then I don't
>>>>>>>>>>> see how using this model will work.
>>>>>>>>>>>
>>>>>>>>>>> Perhaps I will have to go enhance the current way this works
>>>>>>>>>>> with DIR?
>>>>>>>>>>>
>>>>>>>>>>> What do you think?
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski <
>>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> That appears to be the way it's used for iSCSI access today.
>>>>>>>>>>>>
>>>>>>>>>>>> I suppose I could go that route, too, but I might as well
>>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <
>>>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> To your question about SharedMountPoint, I believe it just
>>>>>>>>>>>>> acts like a
>>>>>>>>>>>>> 'DIR' storage type or something similar to that. The end-user
>>>>>>>>>>>>> is
>>>>>>>>>>>>> responsible for mounting a file system that all KVM hosts can
>>>>>>>>>>>>> access,
>>>>>>>>>>>>> and CloudStack is oblivious to what is providing the storage.
>>>>>>>>>>>>> It could
>>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
>>>>>>>>>>>>> cloudstack just
>>>>>>>>>>>>> knows that the provided directory path has VM images.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <
>>>>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
>>>>>>>>>>>>> > Multiples, in fact.
>>>>>>>>>>>>> >
>>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>>>>>>>>>>>> >>
>>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>>>>>>>>> >> Name                 State      Autostart
>>>>>>>>>>>>> >> -----------------------------------------
>>>>>>>>>>>>> >> default              active     yes
>>>>>>>>>>>>> >> iSCSI                active     no
>>>>>>>>>>>>> >>
>>>>>>>>>>>>> >>
>>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>> I see what you're saying now.
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on an
>>>>>>>>>>>>> iSCSI target.
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN, so
>>>>>>>>>>>>> there would only
>>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt)
>>>>>>>>>>>>> storage pool.
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>>>>>>>>>>>>> targets/LUNs on the
>>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt does
>>>>>>>>>>>>> not support
>>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if libvirt
>>>>>>>>>>>>> supports
>>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since each
>>>>>>>>>>>>> one of its
>>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>     public enum poolType {
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>>>>>>>>>>>> LOGICAL("logical"), DIR("dir"),
>>>>>>>>>>>>> >>>> RBD("rbd");
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>         String _poolType;
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>         @Override
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>         public String toString() {
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>             return _poolType;
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>     }
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently being
>>>>>>>>>>>>> used, but I'm
>>>>>>>>>>>>> >>>> understanding more what you were getting at.
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone
>>>>>>>>>>>>> selects the
>>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is that
>>>>>>>>>>>>> the "netfs" option
>>>>>>>>>>>>> >>>> above or is that just for NFS?
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>> Thanks!
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <
>>>>>>>>>>>>> shadowsor@gmail.com>
>>>>>>>>>>>>> >>>> wrote:
>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>> >>>>> Take a look at this:
>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and
>>>>>>>>>>>>> cannot be
>>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe your
>>>>>>>>>>>>> plugin will take
>>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in and
>>>>>>>>>>>>> hooking it up to
>>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the Xen
>>>>>>>>>>>>> stuff).
>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a 1:1
>>>>>>>>>>>>> mapping, or if
>>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a pool.
>>>>>>>>>>>>> You may need
>>>>>>>>>>>>> >>>>> to write some test code or read up a bit more about
>>>>>>>>>>>>> this. Let us know.
>>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own
>>>>>>>>>>>>> storage adaptor
>>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We can
>>>>>>>>>>>>> cross that
>>>>>>>>>>>>> >>>>> bridge when we get there.
>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java
>>>>>>>>>>>>> bindings doc.
>>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally,
>>>>>>>>>>>>> you'll see a
>>>>>>>>>>>>> >>>>> connection object be made, then calls made to that
>>>>>>>>>>>>> 'conn' object. You
>>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how that is
>>>>>>>>>>>>> done for
>>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java code to
>>>>>>>>>>>>> see if you
>>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi storage
>>>>>>>>>>>>> pools before you
>>>>>>>>>>>>> >>>>> get started.
>>>>>>>>>>>>> >>>>>
>>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but
>>>>>>>>>>>>> you figure it
>>>>>>>>>>>>> >>>>> > supports
>>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets, right?
>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>> >>>>> >> I am currently looking through some of the classes
>>>>>>>>>>>>> you pointed out
>>>>>>>>>>>>> >>>>> >> last
>>>>>>>>>>>>> >>>>> >> week or so.
>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>>>>>>>>>> >>>>> >> wrote:
>>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
>>>>>>>>>>>>> initiator utilities
>>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages for any
>>>>>>>>>>>>> distro. Then
>>>>>>>>>>>>> >>>>> >>> you'd call
>>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator login.
>>>>>>>>>>>>> See the info I
>>>>>>>>>>>>> >>>>> >>> sent
>>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
>>>>>>>>>>>>> libvirt iscsi
>>>>>>>>>>>>> >>>>> >>> storage type
>>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>>>>>>>>>> >>>>> >>> wrote:
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> Hi,
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
>>>>>>>>>>>>> developed a SolidFire
>>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage framework
>>>>>>>>>>>>> at the necessary
>>>>>>>>>>>>> >>>>> >>>> times
>>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete
>>>>>>>>>>>>> volumes on the
>>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping
>>>>>>>>>>>>> between a
>>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the admin
>>>>>>>>>>>>> to create large
>>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would
>>>>>>>>>>>>> likely house many
>>>>>>>>>>>>> >>>>> >>>> root and
>>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to
>>>>>>>>>>>>> modify logic in
>>>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>>>>>>>>>>>>> create/delete storage
>>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might work on
>>>>>>>>>>>>> KVM, but I'm
>>>>>>>>>>>>> >>>>> >>>> still
>>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will need
>>>>>>>>>>>>> to interact with
>>>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to expect
>>>>>>>>>>>>> Open iSCSI will be
>>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this to
>>>>>>>>>>>>> work?
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>>>>>>>>>> >>>>> >>>> Mike
>>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>>> >>>>> >>>> --
>>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>>> >>>>> >> --
>>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>>> >>>>> > --
>>>>>>>>>>>>> >>>>> > Mike Tutkowski
>>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>> >>>>> > o: 303.746.7302
>>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>>
>>>>>>>>>>>>> >>>> --
>>>>>>>>>>>>> >>>> Mike Tutkowski
>>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>> >>>> o: 303.746.7302
>>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>>
>>>>>>>>>>>>> >>> --
>>>>>>>>>>>>> >>> Mike Tutkowski
>>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>> >>> o: 303.746.7302
>>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>>>>>>>>>>>>> >>
>>>>>>>>>>>>> >>
>>>>>>>>>>>>> >>
>>>>>>>>>>>>> >>
>>>>>>>>>>>>> >> --
>>>>>>>>>>>>> >> Mike Tutkowski
>>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>>> >> o: 303.746.7302
>>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>>> *™*
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>> *™*
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *Mike Tutkowski*
>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>> o: 303.746.7302
>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>> *™*
>>>>>>>
>>>>>>
>>>>
>>>>
>>>> --
>>>> *Mike Tutkowski*
>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>> e: mike.tutkowski@solidfire.com
>>>> o: 303.746.7302
>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>> *™*
>>>>
>>>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Yeah, that would be ideal.

So, I would still need to discover the iSCSI target, log in to it, then
figure out what /dev/sdX was created as a result (and leave it as is - do
not format it with any file system...clustered or not). I would pass that
device into the VM.

Kind of accurate?
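
(If so, one thought: rather than guessing which /dev/sdX appeared, I could
look it up under /dev/disk/by-path, assuming the usual open-iscsi/udev
naming; the portal and IQN below are placeholders.)

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class FindIscsiDevice {
        public static void main(String[] args) throws Exception {
            String portal = "10.0.0.5:3260";                 // placeholder portal
            String iqn = "iqn.2013-09.com.example:target0";  // placeholder IQN
            Path byPath = Paths.get("/dev/disk/by-path",
                    "ip-" + portal + "-iscsi-" + iqn + "-lun-0");
            // resolve the udev symlink to see the underlying /dev/sdX node
            System.out.println(byPath + " -> "
                    + (Files.exists(byPath) ? byPath.toRealPath() : "(not logged in)"));
        }
    }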


On Fri, Sep 13, 2013 at 8:07 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Look in LibvirtVMDef.java (I think) for the disk definitions. There are
> ones that work for block devices rather than files. You can piggy back off
> of the existing disk definitions and attach it to the vm as a block device.
> The definition is an XML string per libvirt XML format. You may want to use
> an alternate path to the disk rather than just /dev/sdx like I mentioned,
> there are by-id paths to the block devices, as well as other ones that will
> be consistent and easier for management, not sure how familiar you are with
> device naming on Linux.
> On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>
>> No, as that would rely on virtualized network/iscsi initiator inside the
>> vm, which also sucks. I mean attach /dev/sdx (your lun on hypervisor) as a
>> disk to the VM, rather than attaching some image file that resides on a
>> filesystem, mounted on the host, living on a target.
>>
>> Actually, if you plan on the storage supporting live migration I think
>> this is the only way. You can't put a filesystem on it and mount it in two
>> places to facilitate migration unless its a clustered filesystem, in which
>> case you're back to shared mount point.
>>
>> As far as I'm aware, the xenserver SR style is basically LVM with a xen
>> specific cluster management, a custom CLVM. They don't use a filesystem
>> either.
>> On Sep 13, 2013 7:44 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>
>>> When you say, "wire up the lun directly to the vm," do you mean
>>> circumventing the hypervisor? I didn't think we could do that in CS.
>>> OpenStack, on the other hand, always circumvents the hypervisor, as far as
>>> I know.
>>>
>>>
>>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>
>>>> Better to wire up the lun directly to the vm unless there is a good
>>>> reason not to.
>>>>  On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>> wrote:
>>>>
>>>>> You could do that, but as mentioned I think its a mistake to go to the
>>>>> trouble of creating a 1:1 mapping of CS volumes to luns and then putting a
>>>>> filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
>>>>> image on that filesystem. You'll lose a lot of iops along the way, and have
>>>>> more overhead with the filesystem and its journaling, etc.
>>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski" <
>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>
>>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>>>>>>
>>>>>> So, the way people use our SAN with KVM and CS today is by selecting
>>>>>> SharedMountPoint and specifying the location of the share.
>>>>>>
>>>>>> They can set up their share using Open iSCSI by discovering their
>>>>>> iSCSI target, logging in to it, then mounting it somewhere on their file
>>>>>> system.
>>>>>>
>>>>>> Would it make sense for me to just do that discovery, logging in, and
>>>>>> mounting behind the scenes for them and letting the current code manage the
>>>>>> rest as it currently does?
>>>>>>
>>>>>>
>>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen <shadowsor@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch up on
>>>>>>> the work done in KVM, but this is basically just disk snapshots + memory
>>>>>>> dump. I still think disk snapshots would preferably be handled by the SAN,
>>>>>>> and then memory dumps can go to secondary storage or something else. This
>>>>>>> is relatively new ground with CS and KVM, so we will want to see how others
>>>>>>> are planning theirs.
>>>>>>>  On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Let me back up and say I don't think you'd use a vdi style on an
>>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format. Otherwise you're
>>>>>>>> putting a filesystem on your lun, mounting it, creating a QCOW2 disk image,
>>>>>>>> and that seems unnecessary and a performance killer.
>>>>>>>>
>>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM, and
>>>>>>>> handling snapshots on the San side via the storage plugin is best. My
>>>>>>>> impression from the storage plugin refactor was that there was a snapshot
>>>>>>>> service that would allow the San to handle snapshots.
>>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Ideally volume snapshots can be handled by the SAN back end, if
>>>>>>>>> the SAN supports it. The cloudstack mgmt server could call your plugin for
>>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far as space, that
>>>>>>>>> would depend on how your SAN handles it. With ours, we carve out luns from
>>>>>>>>> a pool, and the snapshot spave comes from the pool and is independent of
>>>>>>>>> the LUN size the host sees.
>>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <
>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hey Marcus,
>>>>>>>>>>
>>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't work
>>>>>>>>>> when you take into consideration hypervisor snapshots?
>>>>>>>>>>
>>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI for
>>>>>>>>>> the snapshot is placed on the same storage repository as the volume is on.
>>>>>>>>>>
>>>>>>>>>> Same idea for VMware, I believe.
>>>>>>>>>>
>>>>>>>>>> So, what would happen in my case (let's say for XenServer and
>>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots in 4.2) is I'd
>>>>>>>>>> make an iSCSI target that is larger than what the user requested for the
>>>>>>>>>> CloudStack volume (which is fine because our SAN thinly provisions volumes,
>>>>>>>>>> so the space is not actually used unless it needs to be). The CloudStack
>>>>>>>>>> volume would be the only "object" on the SAN volume until a hypervisor
>>>>>>>>>> snapshot is taken. This snapshot would also reside on the SAN volume.
>>>>>>>>>>
>>>>>>>>>> If this is also how KVM behaves and there is no creation of LUNs
>>>>>>>>>> within an iSCSI target from libvirt (which, even if there were support for
>>>>>>>>>> this, our SAN currently only allows one LUN per iSCSI target), then I don't
>>>>>>>>>> see how using this model will work.
>>>>>>>>>>
>>>>>>>>>> Perhaps I will have to go enhance the current way this works with
>>>>>>>>>> DIR?
>>>>>>>>>>
>>>>>>>>>> What do you think?
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski <
>>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> That appears to be the way it's used for iSCSI access today.
>>>>>>>>>>>
>>>>>>>>>>> I suppose I could go that route, too, but I might as well
>>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <
>>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> To your question about SharedMountPoint, I believe it just acts
>>>>>>>>>>>> like a
>>>>>>>>>>>> 'DIR' storage type or something similar to that. The end-user is
>>>>>>>>>>>> responsible for mounting a file system that all KVM hosts can
>>>>>>>>>>>> access,
>>>>>>>>>>>> and CloudStack is oblivious to what is providing the storage.
>>>>>>>>>>>> It could
>>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem,
>>>>>>>>>>>> cloudstack just
>>>>>>>>>>>> knows that the provided directory path has VM images.
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <
>>>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
>>>>>>>>>>>> > Multiples, in fact.
>>>>>>>>>>>> >
>>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>>>>>>>> >> Name                 State      Autostart
>>>>>>>>>>>> >> -----------------------------------------
>>>>>>>>>>>> >> default              active     yes
>>>>>>>>>>>> >> iSCSI                active     no
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> I see what you're saying now.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on an
>>>>>>>>>>>> iSCSI target.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN, so
>>>>>>>>>>>> there would only
>>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt)
>>>>>>>>>>>> storage pool.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>>>>>>>>>>>> targets/LUNs on the
>>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt does not
>>>>>>>>>>>> support
>>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> It looks like I need to test this a bit to see if libvirt
>>>>>>>>>>>> supports
>>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since each
>>>>>>>>>>>> one of its
>>>>>>>>>>>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>     public enum poolType {
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"),
>>>>>>>>>>>> LOGICAL("logical"), DIR("dir"),
>>>>>>>>>>>> >>>> RBD("rbd");
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         String _poolType;
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>             _poolType = poolType;
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         @Override
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         public String toString() {
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>             return _poolType;
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>         }
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>     }
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently being
>>>>>>>>>>>> used, but I'm
>>>>>>>>>>>> >>>> understanding more what you were getting at.
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone selects
>>>>>>>>>>>> the
>>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is that
>>>>>>>>>>>> the "netfs" option
>>>>>>>>>>>> >>>> above or is that just for NFS?
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> Thanks!
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <
>>>>>>>>>>>> shadowsor@gmail.com>
>>>>>>>>>>>> >>>> wrote:
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> Take a look at this:
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and
>>>>>>>>>>>> cannot be
>>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe your
>>>>>>>>>>>> plugin will take
>>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in and
>>>>>>>>>>>> hooking it up to
>>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the Xen
>>>>>>>>>>>> stuff).
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a 1:1
>>>>>>>>>>>> mapping, or if
>>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a pool.
>>>>>>>>>>>> You may need
>>>>>>>>>>>> >>>>> to write some test code or read up a bit more about this.
>>>>>>>>>>>> Let us know.
>>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own
>>>>>>>>>>>> storage adaptor
>>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We can
>>>>>>>>>>>> cross that
>>>>>>>>>>>> >>>>> bridge when we get there.
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java bindings
>>>>>>>>>>>> doc.
>>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally,
>>>>>>>>>>>> you'll see a
>>>>>>>>>>>> >>>>> connection object be made, then calls made to that 'conn'
>>>>>>>>>>>> object. You
>>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how that is
>>>>>>>>>>>> done for
>>>>>>>>>>>> >>>>> other pool types, and maybe write some test java code to
>>>>>>>>>>>> see if you
>>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi storage
>>>>>>>>>>>> pools before you
>>>>>>>>>>>> >>>>> get started.
>>>>>>>>>>>> >>>>>
>>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but you
>>>>>>>>>>>> figure it
>>>>>>>>>>>> >>>>> > supports
>>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets, right?
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> I am currently looking through some of the classes you
>>>>>>>>>>>> pointed out
>>>>>>>>>>>> >>>>> >> last
>>>>>>>>>>>> >>>>> >> week or so.
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>>>>>>>>> >>>>> >> wrote:
>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
>>>>>>>>>>>> initiator utilities
>>>>>>>>>>>> >>>>> >>> installed. There should be standard packages for any
>>>>>>>>>>>> distro. Then
>>>>>>>>>>>> >>>>> >>> you'd call
>>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator login.
>>>>>>>>>>>> See the info I
>>>>>>>>>>>> >>>>> >>> sent
>>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
>>>>>>>>>>>> libvirt iscsi
>>>>>>>>>>>> >>>>> >>> storage type
>>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>>>>>>>>> >>>>> >>> wrote:
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> Hi,
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
>>>>>>>>>>>> developed a SolidFire
>>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage framework at
>>>>>>>>>>>> the necessary
>>>>>>>>>>>> >>>>> >>>> times
>>>>>>>>>>>> >>>>> >>>> so that I could dynamically create and delete
>>>>>>>>>>>> volumes on the
>>>>>>>>>>>> >>>>> >>>> SolidFire SAN
>>>>>>>>>>>> >>>>> >>>> (among other activities).
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping
>>>>>>>>>>>> between a
>>>>>>>>>>>> >>>>> >>>> CloudStack
>>>>>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the admin to
>>>>>>>>>>>> create large
>>>>>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would likely
>>>>>>>>>>>> house many
>>>>>>>>>>>> >>>>> >>>> root and
>>>>>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to
>>>>>>>>>>>> modify logic in
>>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>>>>>>>>>>>> create/delete storage
>>>>>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might work on
>>>>>>>>>>>> KVM, but I'm
>>>>>>>>>>>> >>>>> >>>> still
>>>>>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will need
>>>>>>>>>>>> to interact with
>>>>>>>>>>>> >>>>> >>>> the
>>>>>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to expect
>>>>>>>>>>>> Open iSCSI will be
>>>>>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this to
>>>>>>>>>>>> work?
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>>>>>>>>> >>>>> >>>> Mike
>>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>>> >>>>> >>>> --
>>>>>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>>> >>>> o: 303.746.7302
>>>>>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >>
>>>>>>>>>>>> >>>>> >> --
>>>>>>>>>>>> >>>>> >> Mike Tutkowski
>>>>>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>>> >> o: 303.746.7302
>>>>>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> >
>>>>>>>>>>>> >>>>> > --
>>>>>>>>>>>> >>>>> > Mike Tutkowski
>>>>>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>>> > o: 303.746.7302
>>>>>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>>
>>>>>>>>>>>> >>>> --
>>>>>>>>>>>> >>>> Mike Tutkowski
>>>>>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>>> o: 303.746.7302
>>>>>>>>>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>>
>>>>>>>>>>>> >>> --
>>>>>>>>>>>> >>> Mike Tutkowski
>>>>>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >>> o: 303.746.7302
>>>>>>>>>>>> >>> Advancing the way the world uses the cloud™
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >>
>>>>>>>>>>>> >> --
>>>>>>>>>>>> >> Mike Tutkowski
>>>>>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>>>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>>>>>> >> o: 303.746.7302
>>>>>>>>>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>>> o: 303.746.7302
>>>>>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>>> *™*
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> *Mike Tutkowski*
>>>>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>>>>> o: 303.746.7302
>>>>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>>>>> *™*
>>>>>>>>>>
>>>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Mike Tutkowski*
>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> e: mike.tutkowski@solidfire.com
>>>>>> o: 303.746.7302
>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> *™*
>>>>>>
>>>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Look in LibvirtVMDef.java (I think) for the disk definitions. There are
ones that work for block devices rather than files. You can piggyback off
of the existing disk definitions and attach the LUN to the vm as a block
device. The definition is an XML string per libvirt XML format. You may
want to use an alternate path to the disk rather than just /dev/sdx like I
mentioned; there are by-id paths to the block devices, as well as other
ones that will be consistent and easier to manage (not sure how familiar
you are with device naming on Linux).
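
For a rough idea of the shape of that XML, something like the snippet below
(hand-rolled string just for illustration; the by-id path is a placeholder,
and real code should go through the disk definition classes rather than
concatenating strings):

    public class BlockDiskXmlSketch {
        public static void main(String[] args) {
            // placeholder stable device path; by-id/by-path names persist, /dev/sdX may not
            String byIdPath = "/dev/disk/by-id/scsi-EXAMPLE0123456789";
            String diskXml =
                "<disk type='block' device='disk'>\n" +
                "  <driver name='qemu' type='raw' cache='none'/>\n" +
                "  <source dev='" + byIdPath + "'/>\n" +
                "  <target dev='vdb' bus='virtio'/>\n" +
                "</disk>";
            System.out.println(diskXml);
        }
    }

The by-id or by-path name matters because /dev/sdX can change between
logins or reboots, while those symlinks stay stable.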
On Sep 13, 2013 8:00 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:

> No, as that would rely on virtualized network/iscsi initiator inside the
> vm, which also sucks. I mean attach /dev/sdx (your lun on hypervisor) as a
> disk to the VM, rather than attaching some image file that resides on a
> filesystem, mounted on the host, living on a target.
>
> Actually, if you plan on the storage supporting live migration I think
> this is the only way. You can't put a filesystem on it and mount it in two
> places to facilitate migration unless its a clustered filesystem, in which
> case you're back to shared mount point.
>
> As far as I'm aware, the xenserver SR style is basically LVM with a xen
> specific cluster management, a custom CLVM. They don't use a filesystem
> either.
> On Sep 13, 2013 7:44 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
>> When you say, "wire up the lun directly to the vm," do you mean
>> circumventing the hypervisor? I didn't think we could do that in CS.
>> OpenStack, on the other hand, always circumvents the hypervisor, as far as
>> I know.
>>
>>
>> On Fri, Sep 13, 2013 at 7:40 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> Better to wire up the lun directly to the vm unless there is a good
>>> reason not to.
>>>  On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>>>
>>>> You could do that, but as mentioned I think its a mistake to go to the
>>>> trouble of creating a 1:1 mapping of CS volumes to luns and then putting a
>>>> filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
>>>> image on that filesystem. You'll lose a lot of iops along the way, and have
>>>> more overhead with the filesystem and its journaling, etc.
>>>> On Sep 13, 2013 7:33 PM, "Mike Tutkowski" <mi...@solidfire.com>
>>>> wrote:
>>>>
>>>>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>>>>>
>>>>> So, the way people use our SAN with KVM and CS today is by selecting
>>>>> SharedMountPoint and specifying the location of the share.
>>>>>
>>>>> They can set up their share using Open iSCSI by discovering their
>>>>> iSCSI target, logging in to it, then mounting it somewhere on their file
>>>>> system.
>>>>>
>>>>> Would it make sense for me to just do that discovery, logging in, and
>>>>> mounting behind the scenes for them and letting the current code manage the
>>>>> rest as it currently does?
>>>>>
>>>>>
>>>>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>>>>
>>>>>> Oh, hypervisor snapshots are a bit different. I need to catch up on
>>>>>> the work done in KVM, but this is basically just disk snapshots + memory
>>>>>> dump. I still think disk snapshots would preferably be handled by the SAN,
>>>>>> and then memory dumps can go to secondary storage or something else. This
>>>>>> is relatively new ground with CS and KVM, so we will want to see how others
>>>>>> are planning theirs.
>>>>>>  On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Let me back up and say I don't think you'd use a vdi style on an
>>>>>>> iscsi lun. I think you'd want to treat it as a RAW format. Otherwise you're
>>>>>>> putting a filesystem on your lun, mounting it, creating a QCOW2 disk image,
>>>>>>> and that seems unnecessary and a performance killer.
>>>>>>>
>>>>>>> So probably attaching the raw iscsi lun as a disk to the VM, and
>>>>>>> handling snapshots on the San side via the storage plugin is best. My
>>>>>>> impression from the storage plugin refactor was that there was a snapshot
>>>>>>> service that would allow the San to handle snapshots.
>>>>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <sh...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Ideally volume snapshots can be handled by the SAN back end, if the
>>>>>>>> SAN supports it. The cloudstack mgmt server could call your plugin for
>>>>>>>> volume snapshot and it would be hypervisor agnostic. As far as space, that
>>>>>>>> would depend on how your SAN handles it. With ours, we carve out luns from
>>>>>>>> a pool, and the snapshot spave comes from the pool and is independent of
>>>>>>>> the LUN size the host sees.
>>>>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <
>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>
>>>>>>>>> Hey Marcus,
>>>>>>>>>
>>>>>>>>> I wonder if the iSCSI storage pool type for libvirt won't work
>>>>>>>>> when you take into consideration hypervisor snapshots?
>>>>>>>>>
>>>>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI for the
>>>>>>>>> snapshot is placed on the same storage repository as the volume is on.
>>>>>>>>>
>>>>>>>>> Same idea for VMware, I believe.
>>>>>>>>>
>>>>>>>>> So, what would happen in my case (let's say for XenServer and
>>>>>>>>> VMware for 4.3 because I don't support hypervisor snapshots in 4.2) is I'd
>>>>>>>>> make an iSCSI target that is larger than what the user requested for the
>>>>>>>>> CloudStack volume (which is fine because our SAN thinly provisions volumes,
>>>>>>>>> so the space is not actually used unless it needs to be). The CloudStack
>>>>>>>>> volume would be the only "object" on the SAN volume until a hypervisor
>>>>>>>>> snapshot is taken. This snapshot would also reside on the SAN volume.
>>>>>>>>>
>>>>>>>>> If this is also how KVM behaves and there is no creation of LUNs
>>>>>>>>> within an iSCSI target from libvirt (which, even if there were support for
>>>>>>>>> this, our SAN currently only allows one LUN per iSCSI target), then I don't
>>>>>>>>> see how using this model will work.
>>>>>>>>>
>>>>>>>>> Perhaps I will have to go enhance the current way this works with
>>>>>>>>> DIR?
>>>>>>>>>
>>>>>>>>> What do you think?
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski <
>>>>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>>>>
>>>>>>>>>> That appears to be the way it's used for iSCSI access today.
>>>>>>>>>>
>>>>>>>>>> I suppose I could go that route, too, but I might as well
>>>>>>>>>> leverage what libvirt has for iSCSI instead.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <
>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> To your question about SharedMountPoint, I believe it just acts
>>>>>>>>>>> like a
>>>>>>>>>>> 'DIR' storage type or something similar to that. The end-user is
>>>>>>>>>>> responsible for mounting a file system that all KVM hosts can
>>>>>>>>>>> access,
>>>>>>>>>>> and CloudStack is oblivious to what is providing the storage. It
>>>>>>>>>>> could
>>>>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem, cloudstack
>>>>>>>>>>> just
>>>>>>>>>>> knows that the provided directory path has VM images.
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <
>>>>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
>>>>>>>>>>> > Multiples, in fact.
>>>>>>>>>>> >
>>>>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>>>>>>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>> >> Looks like you can have multiple storage pools:
>>>>>>>>>>> >>
>>>>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>>>>>>> >> Name                 State      Autostart
>>>>>>>>>>> >> -----------------------------------------
>>>>>>>>>>> >> default              active     yes
>>>>>>>>>>> >> iSCSI                active     no
>>>>>>>>>>> >>
>>>>>>>>>>> >>
>>>>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> Reading through the docs you pointed out.
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> I see what you're saying now.
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on an
>>>>>>>>>>> iSCSI target.
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> In my case, the iSCSI target would only have one LUN, so
>>>>>>>>>>> there would only
>>>>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt)
>>>>>>>>>>> storage pool.
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI
>>>>>>>>>>> targets/LUNs on the
>>>>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt does not
>>>>>>>>>>> support
>>>>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> It looks like I need to test this a bit to see if libvirt
>>>>>>>>>>> supports
>>>>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since each
>>>>>>>>>>> one of its
>>>>>>>>>>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>>>>>>>>>>> >>>
>>>>>>>>>>> >>>
>>>>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>     public enum poolType {
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"),
>>>>>>>>>>> DIR("dir"),
>>>>>>>>>>> >>>> RBD("rbd");
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         String _poolType;
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         poolType(String poolType) {
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>             _poolType = poolType;
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         }
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         @Override
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         public String toString() {
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>             return _poolType;
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>         }
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>     }
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> It doesn't look like the iSCSI type is currently being
>>>>>>>>>>> used, but I'm
>>>>>>>>>>> >>>> understanding more what you were getting at.
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone selects
>>>>>>>>>>> the
>>>>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is that the
>>>>>>>>>>> "netfs" option
>>>>>>>>>>> >>>> above or is that just for NFS?
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> Thanks!
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>>
>>>>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <
>>>>>>>>>>> shadowsor@gmail.com>
>>>>>>>>>>> >>>> wrote:
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> Take a look at this:
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and
>>>>>>>>>>> cannot be
>>>>>>>>>>> >>>>> created via the libvirt APIs.", which I believe your
>>>>>>>>>>> plugin will take
>>>>>>>>>>> >>>>> care of. Libvirt just does the work of logging in and
>>>>>>>>>>> hooking it up to
>>>>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the Xen
>>>>>>>>>>> stuff).
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> What I'm not sure about is whether this provides a 1:1
>>>>>>>>>>> mapping, or if
>>>>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a pool.
>>>>>>>>>>> You may need
>>>>>>>>>>> >>>>> to write some test code or read up a bit more about this.
>>>>>>>>>>> Let us know.
>>>>>>>>>>> >>>>> If it doesn't, you may just have to write your own storage
>>>>>>>>>>> adaptor
>>>>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We can
>>>>>>>>>>> cross that
>>>>>>>>>>> >>>>> bridge when we get there.
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> As far as interfacing with libvirt, see the java bindings
>>>>>>>>>>> doc.
>>>>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally,
>>>>>>>>>>> you'll see a
>>>>>>>>>>> >>>>> connection object be made, then calls made to that 'conn'
>>>>>>>>>>> object. You
>>>>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how that is
>>>>>>>>>>> done for
>>>>>>>>>>> >>>>> other pool types, and maybe write some test java code to
>>>>>>>>>>> see if you
>>>>>>>>>>> >>>>> can interface with libvirt and register iscsi storage
>>>>>>>>>>> pools before you
>>>>>>>>>>> >>>>> get started.
>>>>>>>>>>> >>>>>
>>>>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but you
>>>>>>>>>>> figure it
>>>>>>>>>>> >>>>> > supports
>>>>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets, right?
>>>>>>>>>>> >>>>> >
>>>>>>>>>>> >>>>> >
>>>>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >> I am currently looking through some of the classes you
>>>>>>>>>>> pointed out
>>>>>>>>>>> >>>>> >> last
>>>>>>>>>>> >>>>> >> week or so.
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >>
>>>>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>>>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>>>>>>>> >>>>> >> wrote:
>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi
>>>>>>>>>>> initiator utilities
>>>>>>>>>>> >>>>> >>> installed. There should be standard packages for any
>>>>>>>>>>> distro. Then
>>>>>>>>>>> >>>>> >>> you'd call
>>>>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator login.
>>>>>>>>>>> See the info I
>>>>>>>>>>> >>>>> >>> sent
>>>>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and
>>>>>>>>>>> libvirt iscsi
>>>>>>>>>>> >>>>> >>> storage type
>>>>>>>>>>> >>>>> >>> to see if that fits your need.
>>>>>>>>>>> >>>>> >>>
>>>>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>>>>>>>> >>>>> >>> wrote:
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> Hi,
>>>>>>>>>>> >>>>> >>>>
>>>>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I
>>>>>>>>>>> developed a SolidFire
>>>>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>>>>>>>> >>>>> >>>>

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
No, as that would rely on a virtualized network/iSCSI initiator inside the
VM, which also performs poorly. I mean attach /dev/sdx (your LUN on the
hypervisor) as a disk to the VM, rather than attaching some image file that
resides on a filesystem, mounted on the host, living on the target.

Actually, if you plan on the storage supporting live migration, I think this
is the only way. You can't put a filesystem on the LUN and mount it in two
places to facilitate migration unless it's a clustered filesystem, in which
case you're back to SharedMountPoint.

As far as I'm aware, the XenServer SR style is basically LVM with
Xen-specific cluster management, a custom CLVM. They don't use a filesystem
either.
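
To make that concrete, here is a minimal sketch of what attaching an
already-logged-in LUN as a raw block disk might look like via the libvirt
Java bindings mentioned earlier. The domain name and the by-path device
below are made-up placeholders, not anything CloudStack produces today:

import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.LibvirtException;

public class AttachRawLunSketch {
    public static void main(String[] args) throws LibvirtException {
        // Hypothetical values: the VM name and the device path that shows
        // up after the host-side iSCSI login are illustrative only.
        String domainName = "i-2-10-VM";
        String lunPath = "/dev/disk/by-path/"
                + "ip-10.0.0.5:3260-iscsi-iqn.2013-09.com.example:vol1-lun-0";

        Connect conn = new Connect("qemu:///system");
        Domain dom = conn.domainLookupByName(domainName);

        // Hand the raw block device straight to the guest; no host
        // filesystem or QCOW2 layer in between.
        String diskXml =
              "<disk type='block' device='disk'>"
            + "  <driver name='qemu' type='raw' cache='none'/>"
            + "  <source dev='" + lunPath + "'/>"
            + "  <target dev='vdb' bus='virtio'/>"
            + "</disk>";

        dom.attachDevice(diskXml);
        conn.close();
    }
}

The same disk XML can be dropped into a file and tried by hand with virsh
attach-device before wiring it into the agent code.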

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
When you say, "wire up the lun directly to the vm," do you mean
circumventing the hypervisor? I didn't think we could do that in CS.
OpenStack, on the other hand, always circumvents the hypervisor, as far as
I know.


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Better to wire the LUN up directly to the VM unless there is a good reason
not to.
On Sep 13, 2013 7:40 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:

> You could do that, but as mentioned I think its a mistake to go to the
> trouble of creating a 1:1 mapping of CS volumes to luns and then putting a
> filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
> image on that filesystem. You'll lose a lot of iops along the way, and have
> more overhead with the filesystem and its journaling, etc.
> On Sep 13, 2013 7:33 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
>> Ah, OK, I didn't know that was such new ground in KVM with CS.
>>
>> So, the way people use our SAN with KVM and CS today is by selecting
>> SharedMountPoint and specifying the location of the share.
>>
>> They can set up their share using Open iSCSI by discovering their iSCSI
>> target, logging in to it, then mounting it somewhere on their file system.
>>
>> Would it make sense for me to just do that discovery, logging in, and
>> mounting behind the scenes for them and letting the current code manage the
>> rest as it currently does?
>>
>>
>> On Fri, Sep 13, 2013 at 7:27 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> Oh, hypervisor snapshots are a bit different. I need to catch up on the
>>> work done in KVM, but this is basically just disk snapshots + memory dump.
>>> I still think disk snapshots would preferably be handled by the SAN, and
>>> then memory dumps can go to secondary storage or something else. This is
>>> relatively new ground with CS and KVM, so we will want to see how others
>>> are planning theirs.
>>>  On Sep 13, 2013 7:20 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>>>
>>>> Let me back up and say I don't think you'd use a vdi style on an iscsi
>>>> lun. I think you'd want to treat it as a RAW format. Otherwise you're
>>>> putting a filesystem on your lun, mounting it, creating a QCOW2 disk image,
>>>> and that seems unnecessary and a performance killer.
>>>>
>>>> So probably attaching the raw iscsi lun as a disk to the VM, and
>>>> handling snapshots on the San side via the storage plugin is best. My
>>>> impression from the storage plugin refactor was that there was a snapshot
>>>> service that would allow the San to handle snapshots.
>>>> On Sep 13, 2013 7:15 PM, "Marcus Sorensen" <sh...@gmail.com> wrote:
>>>>
>>>>> Ideally volume snapshots can be handled by the SAN back end, if the
>>>>> SAN supports it. The cloudstack mgmt server could call your plugin for
>>>>> volume snapshot and it would be hypervisor agnostic. As far as space, that
>>>>> would depend on how your SAN handles it. With ours, we carve out luns from
>>>>> a pool, and the snapshot spave comes from the pool and is independent of
>>>>> the LUN size the host sees.
>>>>> On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <
>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>
>>>>>> Hey Marcus,
>>>>>>
>>>>>> I wonder if the iSCSI storage pool type for libvirt won't work when
>>>>>> you take into consideration hypervisor snapshots?
>>>>>>
>>>>>> On XenServer, when you take a hypervisor snapshot, the VDI for the
>>>>>> snapshot is placed on the same storage repository as the volume is on.
>>>>>>
>>>>>> Same idea for VMware, I believe.
>>>>>>
>>>>>> So, what would happen in my case (let's say for XenServer and VMware
>>>>>> for 4.3 because I don't support hypervisor snapshots in 4.2) is I'd make an
>>>>>> iSCSI target that is larger than what the user requested for the CloudStack
>>>>>> volume (which is fine because our SAN thinly provisions volumes, so the
>>>>>> space is not actually used unless it needs to be). The CloudStack volume
>>>>>> would be the only "object" on the SAN volume until a hypervisor snapshot is
>>>>>> taken. This snapshot would also reside on the SAN volume.
>>>>>>
>>>>>> If this is also how KVM behaves and there is no creation of LUNs
>>>>>> within an iSCSI target from libvirt (which, even if there were support for
>>>>>> this, our SAN currently only allows one LUN per iSCSI target), then I don't
>>>>>> see how using this model will work.
>>>>>>
>>>>>> Perhaps I will have to go enhance the current way this works with DIR?
>>>>>>
>>>>>> What do you think?
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski <
>>>>>> mike.tutkowski@solidfire.com> wrote:
>>>>>>
>>>>>>> That appears to be the way it's used for iSCSI access today.
>>>>>>>
>>>>>>> I suppose I could go that route, too, but I might as well leverage
>>>>>>> what libvirt has for iSCSI instead.
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <
>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>
>>>>>>>> To your question about SharedMountPoint, I believe it just acts
>>>>>>>> like a
>>>>>>>> 'DIR' storage type or something similar to that. The end-user is
>>>>>>>> responsible for mounting a file system that all KVM hosts can
>>>>>>>> access,
>>>>>>>> and CloudStack is oblivious to what is providing the storage. It
>>>>>>>> could
>>>>>>>> be NFS, or OCFS2, or some other clustered filesystem, cloudstack
>>>>>>>> just
>>>>>>>> knows that the provided directory path has VM images.
>>>>>>>>
>>>>>>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <
>>>>>>>> shadowsor@gmail.com> wrote:
>>>>>>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
>>>>>>>> > Multiples, in fact.
>>>>>>>> >
>>>>>>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>>>>>>> > <mi...@solidfire.com> wrote:
>>>>>>>> >> Looks like you can have multiple storage pools:
>>>>>>>> >>
>>>>>>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>>>>>>> >> Name                 State      Autostart
>>>>>>>> >> -----------------------------------------
>>>>>>>> >> default              active     yes
>>>>>>>> >> iSCSI                active     no
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>>>>>>> >> <mi...@solidfire.com> wrote:
>>>>>>>> >>>
>>>>>>>> >>> Reading through the docs you pointed out.
>>>>>>>> >>>
>>>>>>>> >>> I see what you're saying now.
>>>>>>>> >>>
>>>>>>>> >>> You can create an iSCSI (libvirt) storage pool based on an
>>>>>>>> iSCSI target.
>>>>>>>> >>>
>>>>>>>> >>> In my case, the iSCSI target would only have one LUN, so there
>>>>>>>> would only
>>>>>>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt) storage
>>>>>>>> pool.
>>>>>>>> >>>
>>>>>>>> >>> As you say, my plug-in creates and destroys iSCSI targets/LUNs
>>>>>>>> on the
>>>>>>>> >>> SolidFire SAN, so it is not a problem that libvirt does not
>>>>>>>> support
>>>>>>>> >>> creating/deleting iSCSI targets/LUNs.
>>>>>>>> >>>
>>>>>>>> >>> It looks like I need to test this a bit to see if libvirt
>>>>>>>> supports
>>>>>>>> >>> multiple iSCSI storage pools (as you mentioned, since each one
>>>>>>>> of its
>>>>>>>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>>>>>>> >>> <mi...@solidfire.com> wrote:
>>>>>>>> >>>>
>>>>>>>> >>>> LibvirtStoragePoolDef has this type:
>>>>>>>> >>>>
>>>>>>>> >>>>     public enum poolType {
>>>>>>>> >>>>
>>>>>>>> >>>>         ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"),
>>>>>>>> DIR("dir"),
>>>>>>>> >>>> RBD("rbd");
>>>>>>>> >>>>
>>>>>>>> >>>>         String _poolType;
>>>>>>>> >>>>
>>>>>>>> >>>>         poolType(String poolType) {
>>>>>>>> >>>>
>>>>>>>> >>>>             _poolType = poolType;
>>>>>>>> >>>>
>>>>>>>> >>>>         }
>>>>>>>> >>>>
>>>>>>>> >>>>         @Override
>>>>>>>> >>>>
>>>>>>>> >>>>         public String toString() {
>>>>>>>> >>>>
>>>>>>>> >>>>             return _poolType;
>>>>>>>> >>>>
>>>>>>>> >>>>         }
>>>>>>>> >>>>
>>>>>>>> >>>>     }
>>>>>>>> >>>>
>>>>>>>> >>>>
>>>>>>>> >>>> It doesn't look like the iSCSI type is currently being used,
>>>>>>>> but I'm
>>>>>>>> >>>> understanding more what you were getting at.
>>>>>>>> >>>>
>>>>>>>> >>>>
>>>>>>>> >>>> Can you tell me for today (say, 4.2), when someone selects the
>>>>>>>> >>>> SharedMountPoint option and uses it with iSCSI, is that the
>>>>>>>> "netfs" option
>>>>>>>> >>>> above or is that just for NFS?
>>>>>>>> >>>>
>>>>>>>> >>>>
>>>>>>>> >>>> Thanks!
>>>>>>>> >>>>
>>>>>>>> >>>>
>>>>>>>> >>>>
>>>>>>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <
>>>>>>>> shadowsor@gmail.com>
>>>>>>>> >>>> wrote:
>>>>>>>> >>>>>
>>>>>>>> >>>>> Take a look at this:
>>>>>>>> >>>>>
>>>>>>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>>>>>>> >>>>>
>>>>>>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and
>>>>>>>> cannot be
>>>>>>>> >>>>> created via the libvirt APIs.", which I believe your plugin
>>>>>>>> will take
>>>>>>>> >>>>> care of. Libvirt just does the work of logging in and hooking
>>>>>>>> it up to
>>>>>>>> >>>>> the VM (I believe the Xen api does that work in the Xen
>>>>>>>> stuff).
>>>>>>>> >>>>>
>>>>>>>> >>>>> What I'm not sure about is whether this provides a 1:1
>>>>>>>> mapping, or if
>>>>>>>> >>>>> it just allows you to register 1 iscsi device as a pool. You
>>>>>>>> may need
>>>>>>>> >>>>> to write some test code or read up a bit more about this. Let
>>>>>>>> us know.
>>>>>>>> >>>>> If it doesn't, you may just have to write your own storage
>>>>>>>> adaptor
>>>>>>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We can
>>>>>>>> cross that
>>>>>>>> >>>>> bridge when we get there.
>>>>>>>> >>>>>
>>>>>>>> >>>>> As far as interfacing with libvirt, see the java bindings doc.
>>>>>>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally, you'll
>>>>>>>> see a
>>>>>>>> >>>>> connection object be made, then calls made to that 'conn'
>>>>>>>> object. You
>>>>>>>> >>>>> can look at the LibvirtStorageAdaptor to see how that is done
>>>>>>>> for
>>>>>>>> >>>>> other pool types, and maybe write some test java code to see
>>>>>>>> if you
>>>>>>>> >>>>> can interface with libvirt and register iscsi storage pools
>>>>>>>> before you
>>>>>>>> >>>>> get started.
>>>>>>>> >>>>>
>>>>>>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>>>>>>> >>>>> <mi...@solidfire.com> wrote:
>>>>>>>> >>>>> > So, Marcus, I need to investigate libvirt more, but you
>>>>>>>> figure it
>>>>>>>> >>>>> > supports
>>>>>>>> >>>>> > connecting to/disconnecting from iSCSI targets, right?
>>>>>>>> >>>>> >
>>>>>>>> >>>>> >
>>>>>>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>>>>>>> >>>>> > <mi...@solidfire.com> wrote:
>>>>>>>> >>>>> >>
>>>>>>>> >>>>> >> OK, thanks, Marcus
>>>>>>>> >>>>> >>
>>>>>>>> >>>>> >> I am currently looking through some of the classes you
>>>>>>>> pointed out
>>>>>>>> >>>>> >> last
>>>>>>>> >>>>> >> week or so.
>>>>>>>> >>>>> >>
>>>>>>>> >>>>> >>
>>>>>>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>>>>>>> >>>>> >> <sh...@gmail.com>
>>>>>>>> >>>>> >> wrote:
>>>>>>>> >>>>> >>>
>>>>>>>> >>>>> >>> Yes, my guess is that you will need the iscsi initiator
>>>>>>>> utilities
>>>>>>>> >>>>> >>> installed. There should be standard packages for any
>>>>>>>> distro. Then
>>>>>>>> >>>>> >>> you'd call
>>>>>>>> >>>>> >>> an agent storage adaptor to do the initiator login. See
>>>>>>>> the info I
>>>>>>>> >>>>> >>> sent
>>>>>>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and libvirt
>>>>>>>> iscsi
>>>>>>>> >>>>> >>> storage type
>>>>>>>> >>>>> >>> to see if that fits your need.
>>>>>>>> >>>>> >>>
>>>>>>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>>>>>>> >>>>> >>> <mi...@solidfire.com>
>>>>>>>> >>>>> >>> wrote:
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> Hi,
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> As you may remember, during the 4.2 release I developed
>>>>>>>> a SolidFire
>>>>>>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> This plug-in was invoked by the storage framework at the
>>>>>>>> necessary
>>>>>>>> >>>>> >>>> times
>>>>>>>> >>>>> >>>> so that I could dynamically create and delete volumes on
>>>>>>>> the
>>>>>>>> >>>>> >>>> SolidFire SAN
>>>>>>>> >>>>> >>>> (among other activities).
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping
>>>>>>>> between a
>>>>>>>> >>>>> >>>> CloudStack
>>>>>>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> In the past, CloudStack always expected the admin to
>>>>>>>> create large
>>>>>>>> >>>>> >>>> volumes ahead of time and those volumes would likely
>>>>>>>> house many
>>>>>>>> >>>>> >>>> root and
>>>>>>>> >>>>> >>>> data disks (which is not QoS friendly).
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to modify
>>>>>>>> logic in
>>>>>>>> >>>>> >>>> the
>>>>>>>> >>>>> >>>> XenServer and VMware plug-ins so they could
>>>>>>>> create/delete storage
>>>>>>>> >>>>> >>>> repositories/datastores as needed.
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> I'm coming up to speed with how this might work on KVM,
>>>>>>>> but I'm
>>>>>>>> >>>>> >>>> still
>>>>>>>> >>>>> >>>> pretty new to KVM.
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> Does anyone familiar with KVM know how I will need to
>>>>>>>> interact with
>>>>>>>> >>>>> >>>> the
>>>>>>>> >>>>> >>>> iSCSI target? For example, will I have to expect Open
>>>>>>>> iSCSI will be
>>>>>>>> >>>>> >>>> installed on the KVM host and use it for this to work?
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> Thanks for any suggestions,
>>>>>>>> >>>>> >>>> Mike
>>>>>>>> >>>>> >>>>
>>>>>>>> >>>>> >>>> --
>>>>>>>> >>>>> >>>> Mike Tutkowski
>>>>>>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>> >>>>> >>>> o: 303.746.7302
>>>>>>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>> >>>>> >>
>>>>>>>> >>>>> >>
>>>>>>>> >>>>> >>
>>>>>>>> >>>>> >>
>>>>>>>> >>>>> >> --
>>>>>>>> >>>>> >> Mike Tutkowski
>>>>>>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>> >>>>> >> o: 303.746.7302
>>>>>>>> >>>>> >> Advancing the way the world uses the cloud™
>>>>>>>> >>>>> >
>>>>>>>> >>>>> >
>>>>>>>> >>>>> >
>>>>>>>> >>>>> >
>>>>>>>> >>>>> > --
>>>>>>>> >>>>> > Mike Tutkowski
>>>>>>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>>>>>>> >>>>> > e: mike.tutkowski@solidfire.com
>>>>>>>> >>>>> > o: 303.746.7302
>>>>>>>> >>>>> > Advancing the way the world uses the cloud™
>>>>>>>> >>>>
>>>>>>>> >>>>
>>>>>>>> >>>>
>>>>>>>> >>>>
>>>>>>>> >>>> --
>>>>>>>> >>>> Mike Tutkowski
>>>>>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>> >>>> e: mike.tutkowski@solidfire.com
>>>>>>>> >>>> o: 303.746.7302
>>>>>>>> >>>> Advancing the way the world uses the cloud™
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>> --
>>>>>>>> >>> Mike Tutkowski
>>>>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>> >>> e: mike.tutkowski@solidfire.com
>>>>>>>> >>> o: 303.746.7302
>>>>>>>> >>> Advancing the way the world uses the cloud™
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >> --
>>>>>>>> >> Mike Tutkowski
>>>>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>>>>>>> >> e: mike.tutkowski@solidfire.com
>>>>>>>> >> o: 303.746.7302
>>>>>>>> >> Advancing the way the world uses the cloud™
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *Mike Tutkowski*
>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>> o: 303.746.7302
>>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>> *™*
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Mike Tutkowski*
>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>> e: mike.tutkowski@solidfire.com
>>>>>> o: 303.746.7302
>>>>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>> *™*
>>>>>>
>>>>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
You could do that, but as mentioned I think it's a mistake to go to the
trouble of creating a 1:1 mapping of CS volumes to LUNs and then putting a
filesystem on it, mounting it, and then putting a QCOW2 or even RAW disk
image on that filesystem. You'll lose a lot of IOPS along the way, and have
more overhead with the filesystem and its journaling, etc.

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Ah, OK, I didn't know that was such new ground in KVM with CS.

So, the way people use our SAN with KVM and CS today is by selecting
SharedMountPoint and specifying the location of the share.

They can set up their share using Open iSCSI by discovering their iSCSI
target, logging in to it, then mounting it somewhere on their file system.

Would it make sense for me to just do that discovery, login, and mounting
behind the scenes for them, and let the current code manage the rest as it
does today?
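
As a rough, untested sketch of what doing that behind the scenes on the
agent could look like, assuming the standard open-iscsi iscsiadm utility is
installed on the KVM host (the portal and IQN below are made up):

import java.io.IOException;
import java.util.Arrays;

public class IscsiLoginSketch {
    // Run a command and fail loudly if it exits non-zero.
    private static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("Command failed: " + Arrays.toString(cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        String portal = "10.1.1.5:3260";             // hypothetical SAN portal
        String iqn = "iqn.2013-09.com.example:vol1"; // hypothetical target IQN

        // Discover the targets behind the portal, then log in to the one we want.
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

        // At this point the LUN appears as a block device (for example under
        // /dev/disk/by-path/). It could then be formatted and mounted the way
        // SharedMountPoint expects, or handed to the VM directly as a raw disk.
    }
}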


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Oh, hypervisor snapshots are a bit different. I need to catch up on the work
done in KVM, but a hypervisor snapshot is basically just a disk snapshot
plus a memory dump. I still think disk snapshots would best be handled by
the SAN, and the memory dumps can then go to secondary storage or somewhere
else. This is relatively new ground with CS and KVM, so we will want to see
how others are planning to handle theirs.
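
Just to make the memory-dump half concrete, a very rough sketch using the
libvirt Java bindings (this uses virDomainSave, which stops the guest, so it
is only a crude stand-in for a real live snapshot; the paths are made up,
and the disk side would be snapshotted on the SAN by the storage plugin):

import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.LibvirtException;

public class MemoryDumpSketch {
    public static void main(String[] args) throws LibvirtException {
        Connect conn = new Connect("qemu:///system");
        Domain vm = conn.domainLookupByName("i-2-10-VM"); // hypothetical guest

        // Dump the guest's memory/state to a file; this file is what could be
        // copied off to secondary storage. save() stops the guest, and
        // restore() brings it back from the same file.
        String memFile = "/var/lib/libvirt/images/i-2-10-VM.mem";
        vm.save(memFile);

        // ... disk snapshot taken on the SAN via the storage plugin here ...

        conn.restore(memFile);
    }
}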

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Let me back up and say I don't think you'd use a VDI-style image on an iSCSI
LUN. I think you'd want to treat it as a RAW format. Otherwise you're putting
a filesystem on your LUN, mounting it, creating a QCOW2 disk image, and that
seems unnecessary and a performance killer.

So probably attaching the raw iSCSI LUN as a disk to the VM, and handling
snapshots on the SAN side via the storage plugin, is best. My impression
from the storage plugin refactor was that there was a snapshot service that
would allow the SAN to handle snapshots.
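
A rough, untested sketch of how the agent might get at that raw LUN through
a libvirt iSCSI storage pool and the Java bindings (the pool name, portal,
and IQN are made up, and the plugin would already have created the target
and LUN on the SAN):

import org.libvirt.Connect;
import org.libvirt.LibvirtException;
import org.libvirt.StoragePool;
import org.libvirt.StorageVol;

public class IscsiPoolSketch {
    public static void main(String[] args) throws LibvirtException {
        Connect conn = new Connect("qemu:///system");

        // One libvirt pool per iSCSI target; with one LUN per target that
        // keeps the 1:1 mapping between a CloudStack volume and a SAN volume.
        String poolXml =
              "<pool type='iscsi'>"
            + "  <name>cs-vol-1234</name>"
            + "  <source>"
            + "    <host name='10.1.1.5'/>"
            + "    <device path='iqn.2013-09.com.example:vol1'/>"
            + "  </source>"
            + "  <target>"
            + "    <path>/dev/disk/by-path</path>"
            + "  </target>"
            + "</pool>";

        // Creating the pool makes libvirt log the initiator in to the target.
        StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);

        // The single LUN shows up as a storage volume; its path is what would
        // be attached to the VM as a raw disk.
        for (String volName : pool.listVolumes()) {
            StorageVol vol = pool.storageVolLookupByName(volName);
            System.out.println(volName + " -> " + vol.getPath());
        }
    }
}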

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Ideally volume snapshots can be handled by the SAN back end, if the SAN
supports it. The CloudStack management server could call your plugin for a
volume snapshot and it would be hypervisor-agnostic. As far as space, that
would depend on how your SAN handles it. With ours, we carve out LUNs from a
pool, and the snapshot space comes from the pool and is independent of the
LUN size the host sees.
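
To make the hypervisor-agnostic part concrete, a small sketch of the shape
such a plugin call could take (everything here, the class, the interface,
and the method names, is hypothetical; it just illustrates that no
hypervisor is involved in the snapshot path):

// The management server asks the storage plugin for a snapshot, and the
// plugin talks only to the SAN's API, never to the hypervisor.
public class SanVolumeSnapshotter {

    // Stand-in for whatever client the vendor plugin uses to reach the SAN.
    interface SanApiClient {
        long createSnapshot(long sanVolumeId, String snapshotName);
    }

    private final SanApiClient san;

    public SanVolumeSnapshotter(SanApiClient san) {
        this.san = san;
    }

    // Returns the SAN-side snapshot id. The space for the snapshot comes out
    // of the SAN's pool, not out of the LUN the host sees.
    public long takeSnapshot(long sanVolumeId, String csSnapshotUuid) {
        return san.createSnapshot(sanVolumeId, "cs-" + csSnapshotUuid);
    }
}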
On Sep 13, 2013 7:10 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Hey Marcus,
>
> I wonder if the iSCSI storage pool type for libvirt won't work when you
> take into consideration hypervisor snapshots?
>
> On XenServer, when you take a hypervisor snapshot, the VDI for the
> snapshot is placed on the same storage repository as the volume is on.
>
> Same idea for VMware, I believe.
>
> So, what would happen in my case (let's say for XenServer and VMware for
> 4.3 because I don't support hypervisor snapshots in 4.2) is I'd make an
> iSCSI target that is larger than what the user requested for the CloudStack
> volume (which is fine because our SAN thinly provisions volumes, so the
> space is not actually used unless it needs to be). The CloudStack volume
> would be the only "object" on the SAN volume until a hypervisor snapshot is
> taken. This snapshot would also reside on the SAN volume.
>
> If this is also how KVM behaves and there is no creation of LUNs within an
> iSCSI target from libvirt (which, even if there were support for this, our
> SAN currently only allows one LUN per iSCSI target), then I don't see how
> using this model will work.
>
> Perhaps I will have to go enhance the current way this works with DIR?
>
> What do you think?
>
> Thanks
>
>
>
> On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> That appears to be the way it's used for iSCSI access today.
>>
>> I suppose I could go that route, too, but I might as well leverage what
>> libvirt has for iSCSI instead.
>>
>>
>> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>>
>>> To your question about SharedMountPoint, I believe it just acts like a
>>> 'DIR' storage type or something similar to that. The end-user is
>>> responsible for mounting a file system that all KVM hosts can access,
>>> and CloudStack is oblivious to what is providing the storage. It could
>>> be NFS, or OCFS2, or some other clustered filesystem, cloudstack just
>>> knows that the provided directory path has VM images.
>>>
>>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <sh...@gmail.com>
>>> wrote:
>>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
>>> > Multiples, in fact.
>>> >
>>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>>> > <mi...@solidfire.com> wrote:
>>> >> Looks like you can have multiple storage pools:
>>> >>
>>> >> mtutkowski@ubuntu:~$ virsh pool-list
>>> >> Name                 State      Autostart
>>> >> -----------------------------------------
>>> >> default              active     yes
>>> >> iSCSI                active     no
>>> >>
>>> >>
>>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>>> >> <mi...@solidfire.com> wrote:
>>> >>>
>>> >>> Reading through the docs you pointed out.
>>> >>>
>>> >>> I see what you're saying now.
>>> >>>
>>> >>> You can create an iSCSI (libvirt) storage pool based on an iSCSI
>>> target.
>>> >>>
>>> >>> In my case, the iSCSI target would only have one LUN, so there would
>>> only
>>> >>> be one iSCSI (libvirt) storage volume in the (libvirt) storage pool.
>>> >>>
>>> >>> As you say, my plug-in creates and destroys iSCSI targets/LUNs on the
>>> >>> SolidFire SAN, so it is not a problem that libvirt does not support
>>> >>> creating/deleting iSCSI targets/LUNs.
>>> >>>
>>> >>> It looks like I need to test this a bit to see if libvirt supports
>>> >>> multiple iSCSI storage pools (as you mentioned, since each one of its
>>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>>> >>>
>>> >>>
>>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>>> >>> <mi...@solidfire.com> wrote:
>>> >>>>
>>> >>>> LibvirtStoragePoolDef has this type:
>>> >>>>
>>> >>>>     public enum poolType {
>>> >>>>
>>> >>>>         ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"),
>>> DIR("dir"),
>>> >>>> RBD("rbd");
>>> >>>>
>>> >>>>         String _poolType;
>>> >>>>
>>> >>>>         poolType(String poolType) {
>>> >>>>
>>> >>>>             _poolType = poolType;
>>> >>>>
>>> >>>>         }
>>> >>>>
>>> >>>>         @Override
>>> >>>>
>>> >>>>         public String toString() {
>>> >>>>
>>> >>>>             return _poolType;
>>> >>>>
>>> >>>>         }
>>> >>>>
>>> >>>>     }
>>> >>>>
>>> >>>>
>>> >>>> It doesn't look like the iSCSI type is currently being used, but I'm
>>> >>>> understanding more what you were getting at.
>>> >>>>
>>> >>>>
>>> >>>> Can you tell me for today (say, 4.2), when someone selects the
>>> >>>> SharedMountPoint option and uses it with iSCSI, is that the "netfs"
>>> option
>>> >>>> above or is that just for NFS?
>>> >>>>
>>> >>>>
>>> >>>> Thanks!
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <
>>> shadowsor@gmail.com>
>>> >>>> wrote:
>>> >>>>>
>>> >>>>> Take a look at this:
>>> >>>>>
>>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>>> >>>>>
>>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and cannot be
>>> >>>>> created via the libvirt APIs.", which I believe your plugin will
>>> take
>>> >>>>> care of. Libvirt just does the work of logging in and hooking it
>>> up to
>>> >>>>> the VM (I believe the Xen api does that work in the Xen stuff).
>>> >>>>>
>>> >>>>> What I'm not sure about is whether this provides a 1:1 mapping, or
>>> if
>>> >>>>> it just allows you to register 1 iscsi device as a pool. You may
>>> need
>>> >>>>> to write some test code or read up a bit more about this. Let us
>>> know.
>>> >>>>> If it doesn't, you may just have to write your own storage adaptor
>>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We can cross that
>>> >>>>> bridge when we get there.
>>> >>>>>
>>> >>>>> As far as interfacing with libvirt, see the java bindings doc.
>>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally, you'll see a
>>> >>>>> connection object be made, then calls made to that 'conn' object.
>>> You
>>> >>>>> can look at the LibvirtStorageAdaptor to see how that is done for
>>> >>>>> other pool types, and maybe write some test java code to see if you
>>> >>>>> can interface with libvirt and register iscsi storage pools before
>>> you
>>> >>>>> get started.
>>> >>>>>
>>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>>> >>>>> <mi...@solidfire.com> wrote:
>>> >>>>> > So, Marcus, I need to investigate libvirt more, but you figure it
>>> >>>>> > supports
>>> >>>>> > connecting to/disconnecting from iSCSI targets, right?
>>> >>>>> >
>>> >>>>> >
>>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>>> >>>>> > <mi...@solidfire.com> wrote:
>>> >>>>> >>
>>> >>>>> >> OK, thanks, Marcus
>>> >>>>> >>
>>> >>>>> >> I am currently looking through some of the classes you pointed
>>> out
>>> >>>>> >> last
>>> >>>>> >> week or so.
>>> >>>>> >>
>>> >>>>> >>
>>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>>> >>>>> >> <sh...@gmail.com>
>>> >>>>> >> wrote:
>>> >>>>> >>>
>>> >>>>> >>> Yes, my guess is that you will need the iscsi initiator
>>> utilities
>>> >>>>> >>> installed. There should be standard packages for any distro.
>>> Then
>>> >>>>> >>> you'd call
>>> >>>>> >>> an agent storage adaptor to do the initiator login. See the
>>> info I
>>> >>>>> >>> sent
>>> >>>>> >>> previously about LibvirtStorageAdaptor.java and libvirt iscsi
>>> >>>>> >>> storage type
>>> >>>>> >>> to see if that fits your need.
>>> >>>>> >>>
>>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>>> >>>>> >>> <mi...@solidfire.com>
>>> >>>>> >>> wrote:
>>> >>>>> >>>>
>>> >>>>> >>>> Hi,
>>> >>>>> >>>>
>>> >>>>> >>>> As you may remember, during the 4.2 release I developed a
>>> SolidFire
>>> >>>>> >>>> (storage) plug-in for CloudStack.
>>> >>>>> >>>>
>>> >>>>> >>>> This plug-in was invoked by the storage framework at the
>>> necessary
>>> >>>>> >>>> times
>>> >>>>> >>>> so that I could dynamically create and delete volumes on the
>>> >>>>> >>>> SolidFire SAN
>>> >>>>> >>>> (among other activities).
>>> >>>>> >>>>
>>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping between a
>>> >>>>> >>>> CloudStack
>>> >>>>> >>>> volume and a SolidFire volume for QoS.
>>> >>>>> >>>>
>>> >>>>> >>>> In the past, CloudStack always expected the admin to create
>>> large
>>> >>>>> >>>> volumes ahead of time and those volumes would likely house
>>> many
>>> >>>>> >>>> root and
>>> >>>>> >>>> data disks (which is not QoS friendly).
>>> >>>>> >>>>
>>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to modify
>>> logic in
>>> >>>>> >>>> the
>>> >>>>> >>>> XenServer and VMware plug-ins so they could create/delete
>>> storage
>>> >>>>> >>>> repositories/datastores as needed.
>>> >>>>> >>>>
>>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>>> >>>>> >>>>
>>> >>>>> >>>> I'm coming up to speed with how this might work on KVM, but
>>> I'm
>>> >>>>> >>>> still
>>> >>>>> >>>> pretty new to KVM.
>>> >>>>> >>>>
>>> >>>>> >>>> Does anyone familiar with KVM know how I will need to
>>> interact with
>>> >>>>> >>>> the
>>> >>>>> >>>> iSCSI target? For example, will I have to expect Open iSCSI
>>> will be
>>> >>>>> >>>> installed on the KVM host and use it for this to work?
>>> >>>>> >>>>
>>> >>>>> >>>> Thanks for any suggestions,
>>> >>>>> >>>> Mike
>>> >>>>> >>>>
>>> >>>>> >>>> --
>>> >>>>> >>>> Mike Tutkowski
>>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>>> >>>>> >>>> o: 303.746.7302
>>> >>>>> >>>> Advancing the way the world uses the cloud™
>>> >>>>> >>
>>> >>>>> >>
>>> >>>>> >>
>>> >>>>> >>
>>> >>>>> >> --
>>> >>>>> >> Mike Tutkowski
>>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>>> >>>>> >> e: mike.tutkowski@solidfire.com
>>> >>>>> >> o: 303.746.7302
>>> >>>>> >> Advancing the way the world uses the cloud™
>>> >>>>> >
>>> >>>>> >
>>> >>>>> >
>>> >>>>> >
>>> >>>>> > --
>>> >>>>> > Mike Tutkowski
>>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>>> >>>>> > e: mike.tutkowski@solidfire.com
>>> >>>>> > o: 303.746.7302
>>> >>>>> > Advancing the way the world uses the cloud™
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> --
>>> >>>> Mike Tutkowski
>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>>> >>>> e: mike.tutkowski@solidfire.com
>>> >>>> o: 303.746.7302
>>> >>>> Advancing the way the world uses the cloud™
>>> >>>
>>> >>>
>>> >>>
>>> >>>
>>> >>> --
>>> >>> Mike Tutkowski
>>> >>> Senior CloudStack Developer, SolidFire Inc.
>>> >>> e: mike.tutkowski@solidfire.com
>>> >>> o: 303.746.7302
>>> >>> Advancing the way the world uses the cloud™
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Mike Tutkowski
>>> >> Senior CloudStack Developer, SolidFire Inc.
>>> >> e: mike.tutkowski@solidfire.com
>>> >> o: 303.746.7302
>>> >> Advancing the way the world uses the cloud™
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
Hey Marcus,

I wonder whether the iSCSI storage pool type for libvirt will still work once
you take hypervisor snapshots into consideration.

On XenServer, when you take a hypervisor snapshot, the VDI for the snapshot
is placed on the same storage repository as the volume is on.

Same idea for VMware, I believe.

So, here is what would happen in my case (say, for XenServer and VMware in
4.3, since I don't support hypervisor snapshots in 4.2): I'd create an iSCSI
target that is larger than what the user requested for the CloudStack volume
(which is fine because our SAN thinly provisions volumes, so the space is not
actually consumed until it is needed). The CloudStack volume would be the only
"object" on the SAN volume until a hypervisor snapshot is taken; that snapshot
would also reside on the SAN volume.

If this is also how KVM behaves, and libvirt cannot create LUNs within an
iSCSI target (and even if it could, our SAN currently allows only one LUN per
iSCSI target), then I don't see how this model will work.

Perhaps I will have to go enhance the current way this works with DIR?

What do you think?

Thanks



On Fri, Sep 13, 2013 at 6:28 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> That appears to be the way it's used for iSCSI access today.
>
> I suppose I could go that route, too, but I might as well leverage what
> libvirt has for iSCSI instead.
>
>
> On Fri, Sep 13, 2013 at 6:26 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> To your question about SharedMountPoint, I believe it just acts like a
>> 'DIR' storage type or something similar to that. The end-user is
>> responsible for mounting a file system that all KVM hosts can access,
>> and CloudStack is oblivious to what is providing the storage. It could
>> be NFS, or OCFS2, or some other clustered filesystem, cloudstack just
>> knows that the provided directory path has VM images.
>>
>> On Fri, Sep 13, 2013 at 6:23 PM, Marcus Sorensen <sh...@gmail.com>
>> wrote:
>> > Oh yes, you can use NFS, LVM, and iSCSI all at the same time.
>> > Multiples, in fact.
>> >
>> > On Fri, Sep 13, 2013 at 6:19 PM, Mike Tutkowski
>> > <mi...@solidfire.com> wrote:
>> >> Looks like you can have multiple storage pools:
>> >>
>> >> mtutkowski@ubuntu:~$ virsh pool-list
>> >> Name                 State      Autostart
>> >> -----------------------------------------
>> >> default              active     yes
>> >> iSCSI                active     no
>> >>
>> >>
>> >> On Fri, Sep 13, 2013 at 6:12 PM, Mike Tutkowski
>> >> <mi...@solidfire.com> wrote:
>> >>>
>> >>> Reading through the docs you pointed out.
>> >>>
>> >>> I see what you're saying now.
>> >>>
>> >>> You can create an iSCSI (libvirt) storage pool based on an iSCSI
>> target.
>> >>>
>> >>> In my case, the iSCSI target would only have one LUN, so there would
>> only
>> >>> be one iSCSI (libvirt) storage volume in the (libvirt) storage pool.
>> >>>
>> >>> As you say, my plug-in creates and destroys iSCSI targets/LUNs on the
>> >>> SolidFire SAN, so it is not a problem that libvirt does not support
>> >>> creating/deleting iSCSI targets/LUNs.
>> >>>
>> >>> It looks like I need to test this a bit to see if libvirt supports
>> >>> multiple iSCSI storage pools (as you mentioned, since each one of its
>> >>> storage pools would map to one of my iSCSI targets/LUNs).
>> >>>
>> >>>
>> >>> On Fri, Sep 13, 2013 at 5:58 PM, Mike Tutkowski
>> >>> <mi...@solidfire.com> wrote:
>> >>>>
>> >>>> LibvirtStoragePoolDef has this type:
>> >>>>
>> >>>>     public enum poolType {
>> >>>>
>> >>>>         ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"),
>> DIR("dir"),
>> >>>> RBD("rbd");
>> >>>>
>> >>>>         String _poolType;
>> >>>>
>> >>>>         poolType(String poolType) {
>> >>>>
>> >>>>             _poolType = poolType;
>> >>>>
>> >>>>         }
>> >>>>
>> >>>>         @Override
>> >>>>
>> >>>>         public String toString() {
>> >>>>
>> >>>>             return _poolType;
>> >>>>
>> >>>>         }
>> >>>>
>> >>>>     }
>> >>>>
>> >>>>
>> >>>> It doesn't look like the iSCSI type is currently being used, but I'm
>> >>>> understanding more what you were getting at.
>> >>>>
>> >>>>
>> >>>> Can you tell me for today (say, 4.2), when someone selects the
>> >>>> SharedMountPoint option and uses it with iSCSI, is that the "netfs"
>> option
>> >>>> above or is that just for NFS?
>> >>>>
>> >>>>
>> >>>> Thanks!
>> >>>>
>> >>>>
>> >>>>
>> >>>> On Fri, Sep 13, 2013 at 5:50 PM, Marcus Sorensen <
>> shadowsor@gmail.com>
>> >>>> wrote:
>> >>>>>
>> >>>>> Take a look at this:
>> >>>>>
>> >>>>> http://libvirt.org/storage.html#StorageBackendISCSI
>> >>>>>
>> >>>>> "Volumes must be pre-allocated on the iSCSI server, and cannot be
>> >>>>> created via the libvirt APIs.", which I believe your plugin will
>> take
>> >>>>> care of. Libvirt just does the work of logging in and hooking it up
>> to
>> >>>>> the VM (I believe the Xen api does that work in the Xen stuff).
>> >>>>>
>> >>>>> What I'm not sure about is whether this provides a 1:1 mapping, or
>> if
>> >>>>> it just allows you to register 1 iscsi device as a pool. You may
>> need
>> >>>>> to write some test code or read up a bit more about this. Let us
>> know.
>> >>>>> If it doesn't, you may just have to write your own storage adaptor
>> >>>>> rather than changing LibvirtStorageAdaptor.java.  We can cross that
>> >>>>> bridge when we get there.
>> >>>>>
>> >>>>> As far as interfacing with libvirt, see the java bindings doc.
>> >>>>> http://libvirt.org/sources/java/javadoc/  Normally, you'll see a
>> >>>>> connection object be made, then calls made to that 'conn' object.
>> You
>> >>>>> can look at the LibvirtStorageAdaptor to see how that is done for
>> >>>>> other pool types, and maybe write some test java code to see if you
>> >>>>> can interface with libvirt and register iscsi storage pools before
>> you
>> >>>>> get started.
>> >>>>>
>> >>>>> On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
>> >>>>> <mi...@solidfire.com> wrote:
>> >>>>> > So, Marcus, I need to investigate libvirt more, but you figure it
>> >>>>> > supports
>> >>>>> > connecting to/disconnecting from iSCSI targets, right?
>> >>>>> >
>> >>>>> >
>> >>>>> > On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
>> >>>>> > <mi...@solidfire.com> wrote:
>> >>>>> >>
>> >>>>> >> OK, thanks, Marcus
>> >>>>> >>
>> >>>>> >> I am currently looking through some of the classes you pointed
>> out
>> >>>>> >> last
>> >>>>> >> week or so.
>> >>>>> >>
>> >>>>> >>
>> >>>>> >> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen
>> >>>>> >> <sh...@gmail.com>
>> >>>>> >> wrote:
>> >>>>> >>>
>> >>>>> >>> Yes, my guess is that you will need the iscsi initiator
>> utilities
>> >>>>> >>> installed. There should be standard packages for any distro.
>> Then
>> >>>>> >>> you'd call
>> >>>>> >>> an agent storage adaptor to do the initiator login. See the
>> info I
>> >>>>> >>> sent
>> >>>>> >>> previously about LibvirtStorageAdaptor.java and libvirt iscsi
>> >>>>> >>> storage type
>> >>>>> >>> to see if that fits your need.
>> >>>>> >>>
>> >>>>> >>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski"
>> >>>>> >>> <mi...@solidfire.com>
>> >>>>> >>> wrote:
>> >>>>> >>>>
>> >>>>> >>>> Hi,
>> >>>>> >>>>
>> >>>>> >>>> As you may remember, during the 4.2 release I developed a
>> SolidFire
>> >>>>> >>>> (storage) plug-in for CloudStack.
>> >>>>> >>>>
>> >>>>> >>>> This plug-in was invoked by the storage framework at the
>> necessary
>> >>>>> >>>> times
>> >>>>> >>>> so that I could dynamically create and delete volumes on the
>> >>>>> >>>> SolidFire SAN
>> >>>>> >>>> (among other activities).
>> >>>>> >>>>
>> >>>>> >>>> This is necessary so I can establish a 1:1 mapping between a
>> >>>>> >>>> CloudStack
>> >>>>> >>>> volume and a SolidFire volume for QoS.
>> >>>>> >>>>
>> >>>>> >>>> In the past, CloudStack always expected the admin to create
>> large
>> >>>>> >>>> volumes ahead of time and those volumes would likely house many
>> >>>>> >>>> root and
>> >>>>> >>>> data disks (which is not QoS friendly).
>> >>>>> >>>>
>> >>>>> >>>> To make this 1:1 mapping scheme work, I needed to modify logic
>> in
>> >>>>> >>>> the
>> >>>>> >>>> XenServer and VMware plug-ins so they could create/delete
>> storage
>> >>>>> >>>> repositories/datastores as needed.
>> >>>>> >>>>
>> >>>>> >>>> For 4.3 I want to make this happen with KVM.
>> >>>>> >>>>
>> >>>>> >>>> I'm coming up to speed with how this might work on KVM, but I'm
>> >>>>> >>>> still
>> >>>>> >>>> pretty new to KVM.
>> >>>>> >>>>
>> >>>>> >>>> Does anyone familiar with KVM know how I will need to interact
>> with
>> >>>>> >>>> the
>> >>>>> >>>> iSCSI target? For example, will I have to expect Open iSCSI
>> will be
>> >>>>> >>>> installed on the KVM host and use it for this to work?
>> >>>>> >>>>
>> >>>>> >>>> Thanks for any suggestions,
>> >>>>> >>>> Mike
>> >>>>> >>>>
>> >>>>> >>>> --
>> >>>>> >>>> Mike Tutkowski
>> >>>>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> >>>>> >>>> e: mike.tutkowski@solidfire.com
>> >>>>> >>>> o: 303.746.7302
>> >>>>> >>>> Advancing the way the world uses the cloud™
>> >>>>> >>
>> >>>>> >>
>> >>>>> >>
>> >>>>> >>
>> >>>>> >> --
>> >>>>> >> Mike Tutkowski
>> >>>>> >> Senior CloudStack Developer, SolidFire Inc.
>> >>>>> >> e: mike.tutkowski@solidfire.com
>> >>>>> >> o: 303.746.7302
>> >>>>> >> Advancing the way the world uses the cloud™
>> >>>>> >
>> >>>>> >
>> >>>>> >
>> >>>>> >
>> >>>>> > --
>> >>>>> > Mike Tutkowski
>> >>>>> > Senior CloudStack Developer, SolidFire Inc.
>> >>>>> > e: mike.tutkowski@solidfire.com
>> >>>>> > o: 303.746.7302
>> >>>>> > Advancing the way the world uses the cloud™
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> --
>> >>>> Mike Tutkowski
>> >>>> Senior CloudStack Developer, SolidFire Inc.
>> >>>> e: mike.tutkowski@solidfire.com
>> >>>> o: 303.746.7302
>> >>>> Advancing the way the world uses the cloud™
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Mike Tutkowski
>> >>> Senior CloudStack Developer, SolidFire Inc.
>> >>> e: mike.tutkowski@solidfire.com
>> >>> o: 303.746.7302
>> >>> Advancing the way the world uses the cloud™
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Mike Tutkowski
>> >> Senior CloudStack Developer, SolidFire Inc.
>> >> e: mike.tutkowski@solidfire.com
>> >> o: 303.746.7302
>> >> Advancing the way the world uses the cloud™
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Take a look at this:

http://libvirt.org/storage.html#StorageBackendISCSI

"Volumes must be pre-allocated on the iSCSI server, and cannot be
created via the libvirt APIs.", which I believe your plugin will take
care of. Libvirt just does the work of logging in and hooking it up to
the VM (I believe the Xen api does that work in the Xen stuff).

What I'm not sure about is whether this provides a 1:1 mapping, or if
it just allows you to register 1 iscsi device as a pool. You may need
to write some test code or read up a bit more about this. Let us know.
If it doesn't, you may just have to write your own storage adaptor
rather than changing LibvirtStorageAdaptor.java.  We can cross that
bridge when we get there.

As far as interfacing with libvirt, see the java bindings doc.
http://libvirt.org/sources/java/javadoc/  Normally, you'll see a
connection object be made, then calls made to that 'conn' object. You
can look at the LibvirtStorageAdaptor to see how that is done for
other pool types, and maybe write some test java code to see if you
can interface with libvirt and register iscsi storage pools before you
get started.
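
Something like this (untested, and the pool name, portal address, and IQN are
just placeholders) is roughly what I mean by test code: use the java bindings
to register an existing iscsi target as a libvirt pool and list the LUN(s) it
exposes.

import org.libvirt.Connect;
import org.libvirt.StoragePool;
import org.libvirt.StorageVol;

public class IscsiPoolTest {
    public static void main(String[] args) throws Exception {
        Connect conn = new Connect("qemu:///system");

        // One pool per iSCSI target; libvirt does the initiator login and
        // exposes each LUN on the target as a storage volume.
        String poolXml =
            "<pool type='iscsi'>" +
            "  <name>sf-vol-1</name>" +
            "  <source>" +
            "    <host name='192.168.1.100'/>" +
            "    <device path='iqn.2013-09.com.example:vol-1'/>" +
            "  </source>" +
            "  <target><path>/dev/disk/by-path</path></target>" +
            "</pool>";

        StoragePool pool = conn.storagePoolCreateXML(poolXml, 0);
        pool.refresh(0);

        for (String volName : pool.listVolumes()) {
            StorageVol vol = pool.storageVolLookupByName(volName);
            System.out.println(volName + " -> " + vol.getPath());
        }

        pool.destroy();  // logs out of the target; the LUN itself is untouched
        conn.close();
    }
}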

On Fri, Sep 13, 2013 at 5:31 PM, Mike Tutkowski
<mi...@solidfire.com> wrote:
> So, Marcus, I need to investigate libvirt more, but you figure it supports
> connecting to/disconnecting from iSCSI targets, right?
>
>
> On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski
> <mi...@solidfire.com> wrote:
>>
>> OK, thanks, Marcus
>>
>> I am currently looking through some of the classes you pointed out last
>> week or so.
>>
>>
>> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen <sh...@gmail.com>
>> wrote:
>>>
>>> Yes, my guess is that you will need the iscsi initiator utilities
>>> installed. There should be standard packages for any distro. Then you'd call
>>> an agent storage adaptor to do the initiator login. See the info I sent
>>> previously about LibvirtStorageAdaptor.java and libvirt iscsi storage type
>>> to see if that fits your need.
>>>
>>> On Sep 13, 2013 4:55 PM, "Mike Tutkowski" <mi...@solidfire.com>
>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> As you may remember, during the 4.2 release I developed a SolidFire
>>>> (storage) plug-in for CloudStack.
>>>>
>>>> This plug-in was invoked by the storage framework at the necessary times
>>>> so that I could dynamically create and delete volumes on the SolidFire SAN
>>>> (among other activities).
>>>>
>>>> This is necessary so I can establish a 1:1 mapping between a CloudStack
>>>> volume and a SolidFire volume for QoS.
>>>>
>>>> In the past, CloudStack always expected the admin to create large
>>>> volumes ahead of time and those volumes would likely house many root and
>>>> data disks (which is not QoS friendly).
>>>>
>>>> To make this 1:1 mapping scheme work, I needed to modify logic in the
>>>> XenServer and VMware plug-ins so they could create/delete storage
>>>> repositories/datastores as needed.
>>>>
>>>> For 4.3 I want to make this happen with KVM.
>>>>
>>>> I'm coming up to speed with how this might work on KVM, but I'm still
>>>> pretty new to KVM.
>>>>
>>>> Does anyone familiar with KVM know how I will need to interact with the
>>>> iSCSI target? For example, will I have to expect Open iSCSI will be
>>>> installed on the KVM host and use it for this to work?
>>>>
>>>> Thanks for any suggestions,
>>>> Mike
>>>>
>>>> --
>>>> Mike Tutkowski
>>>> Senior CloudStack Developer, SolidFire Inc.
>>>> e: mike.tutkowski@solidfire.com
>>>> o: 303.746.7302
>>>> Advancing the way the world uses the cloud™
>>
>>
>>
>>
>> --
>> Mike Tutkowski
>> Senior CloudStack Developer, SolidFire Inc.
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud™
>
>
>
>
> --
> Mike Tutkowski
> Senior CloudStack Developer, SolidFire Inc.
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud™

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
So, Marcus, I need to investigate libvirt more, but you figure it supports
connecting to/disconnecting from iSCSI targets, right?


On Fri, Sep 13, 2013 at 5:29 PM, Mike Tutkowski <
mike.tutkowski@solidfire.com> wrote:

> OK, thanks, Marcus
>
> I am currently looking through some of the classes you pointed out last
> week or so.
>
>
> On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen <sh...@gmail.com>wrote:
>
>> Yes, my guess is that you will need the iscsi initiator utilities
>> installed. There should be standard packages for any distro. Then you'd
>> call an agent storage adaptor to do the initiator login. See the info I
>> sent previously about LibvirtStorageAdaptor.java and libvirt iscsi storage
>> type to see if that fits your need.
>>  On Sep 13, 2013 4:55 PM, "Mike Tutkowski" <mi...@solidfire.com>
>> wrote:
>>
>>> Hi,
>>>
>>> As you may remember, during the 4.2 release I developed a SolidFire
>>> (storage) plug-in for CloudStack.
>>>
>>> This plug-in was invoked by the storage framework at the necessary times
>>> so that I could dynamically create and delete volumes on the SolidFire SAN
>>> (among other activities).
>>>
>>> This is necessary so I can establish a 1:1 mapping between a CloudStack
>>> volume and a SolidFire volume for QoS.
>>>
>>> In the past, CloudStack always expected the admin to create large
>>> volumes ahead of time and those volumes would likely house many root and
>>> data disks (which is not QoS friendly).
>>>
>>> To make this 1:1 mapping scheme work, I needed to modify logic in the
>>> XenServer and VMware plug-ins so they could create/delete storage
>>> repositories/datastores as needed.
>>>
>>> For 4.3 I want to make this happen with KVM.
>>>
>>> I'm coming up to speed with how this might work on KVM, but I'm still
>>> pretty new to KVM.
>>>
>>> Does anyone familiar with KVM know how I will need to interact with the
>>> iSCSI target? For example, will I have to expect Open iSCSI will be
>>> installed on the KVM host and use it for this to work?
>>>
>>> Thanks for any suggestions,
>>> Mike
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkowski@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>>> *™*
>>>
>>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Mike Tutkowski <mi...@solidfire.com>.
OK, thanks, Marcus

I am currently looking through some of the classes you pointed out a week or
so ago.


On Fri, Sep 13, 2013 at 5:26 PM, Marcus Sorensen <sh...@gmail.com>wrote:

> Yes, my guess is that you will need the iscsi initiator utilities
> installed. There should be standard packages for any distro. Then you'd
> call an agent storage adaptor to do the initiator login. See the info I
> sent previously about LibvirtStorageAdaptor.java and libvirt iscsi storage
> type to see if that fits your need.
> On Sep 13, 2013 4:55 PM, "Mike Tutkowski" <mi...@solidfire.com>
> wrote:
>
>> Hi,
>>
>> As you may remember, during the 4.2 release I developed a SolidFire
>> (storage) plug-in for CloudStack.
>>
>> This plug-in was invoked by the storage framework at the necessary times
>> so that I could dynamically create and delete volumes on the SolidFire SAN
>> (among other activities).
>>
>> This is necessary so I can establish a 1:1 mapping between a CloudStack
>> volume and a SolidFire volume for QoS.
>>
>> In the past, CloudStack always expected the admin to create large volumes
>> ahead of time and those volumes would likely house many root and data disks
>> (which is not QoS friendly).
>>
>> To make this 1:1 mapping scheme work, I needed to modify logic in the
>> XenServer and VMware plug-ins so they could create/delete storage
>> repositories/datastores as needed.
>>
>> For 4.3 I want to make this happen with KVM.
>>
>> I'm coming up to speed with how this might work on KVM, but I'm still
>> pretty new to KVM.
>>
>> Does anyone familiar with KVM know how I will need to interact with the
>> iSCSI target? For example, will I have to expect Open iSCSI will be
>> installed on the KVM host and use it for this to work?
>>
>> Thanks for any suggestions,
>> Mike
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *™*
>>
>


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud<http://solidfire.com/solution/overview/?video=play>
*™*

Re: Managed storage with KVM

Posted by Marcus Sorensen <sh...@gmail.com>.
Yes, my guess is that you will need the iscsi initiator utilities
installed. There should be standard packages for any distro. Then you'd
call an agent storage adaptor to do the initiator login. See the info I
sent previously about LibvirtStorageAdaptor.java and libvirt iscsi storage
type to see if that fits your need.
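
If it turns out libvirt's iscsi pool type doesn't fit and you end up driving
the initiator from the agent yourself, the adaptor would basically just shell
out to iscsiadm, something along these lines (rough sketch only; the portal
and IQN are placeholders):

import java.io.IOException;
import java.util.Arrays;

public class IscsiInitiatorSketch {

    static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + Arrays.toString(cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        String portal = "192.168.1.100:3260";          // SAN portal (placeholder)
        String iqn = "iqn.2013-09.com.example:vol-1";  // target IQN (placeholder)

        // Discover the targets the portal exposes.
        run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal);

        // Log in; the LUN then shows up under /dev/disk/by-path/ and can be
        // handed to the VM as a raw disk.
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login");

        // ... attach the block device to the VM here ...

        // Log out when the volume is detached from the host.
        run("iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--logout");
    }
}
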
On Sep 13, 2013 4:55 PM, "Mike Tutkowski" <mi...@solidfire.com>
wrote:

> Hi,
>
> As you may remember, during the 4.2 release I developed a SolidFire
> (storage) plug-in for CloudStack.
>
> This plug-in was invoked by the storage framework at the necessary times
> so that I could dynamically create and delete volumes on the SolidFire SAN
> (among other activities).
>
> This is necessary so I can establish a 1:1 mapping between a CloudStack
> volume and a SolidFire volume for QoS.
>
> In the past, CloudStack always expected the admin to create large volumes
> ahead of time and those volumes would likely house many root and data disks
> (which is not QoS friendly).
>
> To make this 1:1 mapping scheme work, I needed to modify logic in the
> XenServer and VMware plug-ins so they could create/delete storage
> repositories/datastores as needed.
>
> For 4.3 I want to make this happen with KVM.
>
> I'm coming up to speed with how this might work on KVM, but I'm still
> pretty new to KVM.
>
> Does anyone familiar with KVM know how I will need to interact with the
> iSCSI target? For example, will I have to expect Open iSCSI will be
> installed on the KVM host and use it for this to work?
>
> Thanks for any suggestions,
> Mike
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
> *™*
>