Posted to users@cloudstack.apache.org by Justyn Shull <ju...@codero.com> on 2014/11/13 19:04:45 UTC

Snapshots not working when using S3 (ceph) ACS 4.4.0

I’m trying to enable object storage (Ceph via the S3 radosgw) as a secondary store for an existing CloudStack installation, and I’m running into some issues. There was already an existing NFS store being used as the secondary storage, so I used the updateCloudToUseObjectStore API call (via CloudMonkey) with these params (keys changed):

> update cloudtouseobjectstore name=cephs3 zoneId=749cde04-531a-4e1f-bfa2-ad7f7854b1f8 url=https://10.16.33.172 details[0].key=accesskey details[0].value=xxx details[1].key=secretkey details[1].value=xxx details[2].key=bucket details[2].value=CLOUDSTACK details[3].key=endpoint details[3].value=10.16.33.172 provider=S3

1) There were no errors from that call, and as far as I can tell it changed the old NFS store to ‘ImageCache’ and created the new S3 store in the database. I’m not sure what else to check to see whether it was successful, or whether there are any long-running processes that it triggers.
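One sanity check that should work on a stock 4.4 install (these commands are illustrative, hedged rather than verified against this exact setup) is to list the registered image stores from the same CloudMonkey session:

> list imagestores

or to peek at the backing table directly in the management server’s MySQL database:

> select * from cloud.image_store;

The new S3 store should show up with provider S3, and the converted NFS store should carry its new cache role.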

However, I tried creating a volume snapshot to test, and that is where I’m running into issues now. CloudStack appears to create the snapshot on NFS (I think this part is normal), but when it goes to upload the snapshot to S3 it uses the wrong local path. This is the log from the hypervisor (XenServer 6.1.0 with local storage):

###
2014-11-13 10:23:17    DEBUG [root] #### VMOPS enter s3 #### ####
2014-11-13 10:23:17    DEBUG [root] #### VMOPS Enetered parseArguments with args: {'maxErrorRetry': 'null', 'key': 'snapshots/4/54/VHD-2862dc6b-c057-4bb4-9d70-b263fc1086c9', 'maxSingleUploadSizeInBytes': '5368709120', 'accessKey': 'xxx', 'bucket': 'CLOUDSTACK', 'filename': '/dev/VG_XenStorage-85dfb820-d810-716b-89cb-0e1303da2c2b/VHD-2862dc6b-c057-4bb4-9d70-b263fc1086c9', 'secretKey': 'xxx', 'socketTimeout': 'null', 'endPoint': '10.16.33.172', 'https': 'false', 'connectionTimeout': 'null', 'operation': 'put', 'iSCSIFlag': 'true'} ####
2014-11-13 10:23:17    DEBUG [root] #### VMOPS Operation put on file /dev/VG_XenStorage-85dfb820-d810-716b-89cb-0e1303da2c2b/VHD-2862dc6b-c057-4bb4-9d70-b263fc1086c9 from/in bucket CLOUDSTACK key snapshots/4/54/VHD-2862dc6b-c057-4bb4-9d70-b263fc1086c9 ####
2014-11-13 10:23:17    DEBUG [root] #### VMOPS Traceback (most recent call last):
  File "/etc/xapi.d/plugins/s3xen", line 414, in s3
    client.put(bucket, key, filename, maxSingleUploadBytes)
  File "/etc/xapi.d/plugins/s3xen", line 325, in put
    raise Exception(
Exception: Attempt to put /dev/VG_XenStorage-85dfb820-d810-716b-89cb-0e1303da2c2b/VHD-2862dc6b-c057-4bb4-9d70-b263fc1086c9 that does not exist.
 ####
2014-11-13 10:23:17    DEBUG [root] #### VMOPS exit s3 with result false ####
###

2) I’m assuming it should be trying to upload the .vhd it created on the NFS store, but I’m not sure whether a config issue or a bug is causing this.

3) Am I correct in assuming the general snapshot flow should be like this?
	(all on the hypervisor)  mount NFS -> create .vhd snapshot from local storage/LVM -> upload the .vhd from NFS to S3/object storage -> delete the .vhd from NFS

Any help would be appreciated,

Thanks,

Re: Snapshots not working when using S3 (ceph) ACS 4.4.0

Posted by cs user <ac...@gmail.com>.
Hi Justyn,

This appears similar to the following bug I raised a few weeks ago:

https://issues.apache.org/jira/browse/CLOUDSTACK-7775

Would it be possible for a dev to take a look at this?


Re: Snapshots not working when using S3 (ceph) ACS 4.4.0

Posted by Justyn Shull <ju...@codero.com>.
Small update:  I was able to get past this error by editing /etc/xapi.d/plugins/s3xen on the hypervisor, and adding this line to the s3 function:

> filename = "%s.vhd" % filename.replace('/dev/VG_XenStorage-', '/var/run/sr-mount/').replace('VHD-', '')

It just changes the filename to what it would have been if isISCSI had returned false in the snippet from my previous mail. The isISCSI function returns true for SRType.LVM, which is what I’d be using with local storage - is that intended?
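For anyone who wants to see that rewrite in isolation, here is a minimal standalone sketch of what the one-liner does (the helper name is hypothetical; the actual patch just reassigns the plugin’s local filename variable):

    # Sketch of the patch above: translate the LVM device path produced by
    # the iSCSIFlag branch into the path a file-backed SR would expose.
    # e.g. /dev/VG_XenStorage-<sr-uuid>/VHD-<snap-uuid>
    #  ->  /var/run/sr-mount/<sr-uuid>/<snap-uuid>.vhd
    def lvm_path_to_vhd_path(filename):  # hypothetical helper name
        return "%s.vhd" % filename.replace('/dev/VG_XenStorage-',
                                           '/var/run/sr-mount/').replace('VHD-', '')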

Thanks,



Re: Snapshots not working when using S3 (ceph) ACS 4.4.0

Posted by Justyn Shull <ju...@codero.com>.
Thanks for the tip, Sanjeev. The snapshot is around 2.5 GB.

I found the "s3.singleupload.max.size" parameter and changed it to 0 so that multipart upload is always used, then restarted the CloudStack management server. So far, I am still getting the same error I pasted before:

> Exception: Attempt to put /dev/VG_XenStorage-c2c8fe9f-59af-f4d7-52e1-5853bbf47be8/VHD-fa86335b-806f-4355-944f-856eb41d1ac9 that does not exist.
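For reference, that setting change can also be made through the API rather than the UI; a hedged CloudMonkey example (a standard updateConfiguration call):

> update configuration name=s3.singleupload.max.size value=0

followed by restarting cloudstack-management on the management server.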

I tried digging through the code a little, and I think I found where the filename gets set (around line 1067):
plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/XenServerStorageProcessor.java

protected String backupSnapshotToS3(final Connection connection, final S3TO s3, final String srUuid, final String folder, final String snapshotUuid,
                                    final Boolean iSCSIFlag, final int wait) {
    ...
    final String filename = iSCSIFlag ? "VHD-" + snapshotUuid : snapshotUuid + ".vhd";
    final String dir = (iSCSIFlag ? "/dev/VG_XenStorage-" : "/var/run/sr-mount/") + srUuid;
    ...
    parameters.addAll(Arrays.asList("operation", "put", "filename", dir + "/" + filename, "iSCSIFlag", iSCSIFlag.toString(), "bucket", s3.getBucketName(), "key",

I haven’t figured out what sets that iSCSIFlag, but it seems like in my case it shouldn’t be true if the plugin is supposed to be uploading the .vhd file, right?
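To make the two branches explicit, here is the same selection logic transcribed into Python (a paraphrase of the Java above, for illustration only - not actual CloudStack code):

    # Paraphrase of the Java snippet above: how iSCSIFlag picks the path
    # that is eventually handed to the s3xen plugin's put operation.
    def snapshot_path(sr_uuid, snapshot_uuid, iscsi_flag):
        if iscsi_flag:
            # LVM-backed SR: raw logical volume under /dev
            return "/dev/VG_XenStorage-%s/VHD-%s" % (sr_uuid, snapshot_uuid)
        # file-backed SR (NFS/ext): plain .vhd under the SR mount point
        return "/var/run/sr-mount/%s/%s.vhd" % (sr_uuid, snapshot_uuid)

With iSCSIFlag true for a local LVM SR, the plugin is handed the /dev path, which is exactly the path that fails in the log earlier in the thread.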





RE: Snapshots not working when using S3 (ceph) ACS 4.4.0

Posted by Sanjeev Neelarapu <sa...@citrix.com>.
Hi,

You have followed the correct steps for updating the secondary storage from NFS to S3. CloudStack should upload the .vhd created on NFS to S3.

What is the size of the snapshot? If it is >5 GB, can you enable multipart upload and try again?
There is a global settings parameter to enable multipart upload.
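The 5 GB figure lines up with the maxSingleUploadSizeInBytes value in the log at the top of the thread (5368709120 bytes = 5 GiB, the S3 single-PUT limit). A sketch of the decision this implies (illustrative only, not the plugin’s actual code; per the thread, a threshold of 0 forces multipart):

    # Illustrative: use multipart when the object exceeds the single-PUT
    # threshold, or when the threshold is 0 (multipart always).
    MAX_SINGLE_UPLOAD_BYTES = 5368709120  # 5 GiB, from the log above

    def use_multipart(size_bytes, max_single=MAX_SINGLE_UPLOAD_BYTES):
        return max_single <= 0 or size_bytes > max_single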

Thanks,
Sanjeev
