Posted to commits@cloudstack.apache.org by ro...@apache.org on 2021/02/24 09:29:01 UTC

[cloudstack] branch master updated: storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)

This is an automated email from the ASF dual-hosted git repository.

rohit pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cloudstack.git


The following commit(s) were added to refs/heads/master by this push:
     new eba186a  storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
eba186a is described below

commit eba186aa40f16911c019bf06cd33d7d0cbbc303b
Author: sureshanaparti <12...@users.noreply.github.com>
AuthorDate: Wed Feb 24 14:58:33 2021 +0530

    storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
    
    Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as primary storage in CloudStack (for the KVM hypervisor) and enabled VM/volume operations on that pool (using a pool tag).
    Please find more details in the FS here:
    https://cwiki.apache.org/confluence/x/cDl4CQ
    
    Documentation PR: apache/cloudstack-documentation#169
    
    This enables support for a PowerFlex/ScaleIO (v3.5 onwards) storage pool as primary storage in CloudStack.
    
    Other improvements addressed in addition to PowerFlex/ScaleIO support:
    
    - Added support for config drives in host cache for KVM (see the sketch after this list)
    	=> Changed the scope of the configuration "vm.configdrive.primarypool.enabled" from Global to Zone level
    	=> Introduced a new zone-level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
    	=> Introduced a new zone-level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when the storage pool doesn't support config drives
    	=> Added a new parameter "host.cache.location" (default: /var/cache/cloud) in the KVM agent.properties for specifying the host cache path; config drives are created in the "/config" directory under the host cache path
    	=> Maintain the config drive location and use it when required for any config drive operation (migrate, delete)
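
    A minimal sketch of how these settings could combine to select where the config drive is created. This is an illustration only, not the CloudStack code: the Location values mirror NetworkElement.Location, and the boolean parameters stand in for the configuration keys named above.

        // Simplified, illustrative decision logic for the config drive location.
        public class ConfigDriveLocationChooser {
            enum Location { SECONDARY, PRIMARY, HOST }

            static Location chooseLocation(boolean primaryPoolEnabled,            // vm.configdrive.primarypool.enabled
                                           boolean forceHostCache,                // vm.configdrive.force.host.cache.use
                                           boolean useHostCacheOnUnsupportedPool, // vm.configdrive.use.host.cache.on.unsupported.pool
                                           boolean poolSupportsConfigDrive) {
                if (forceHostCache) {
                    return Location.HOST;      // always use the host cache (host.cache.location + "/config")
                }
                if (!primaryPoolEnabled) {
                    return Location.SECONDARY; // default: config drive on secondary storage
                }
                if (poolSupportsConfigDrive) {
                    return Location.PRIMARY;   // primary storage pool hosts the config drive ISO
                }
                return useHostCacheOnUnsupportedPool ? Location.HOST : Location.SECONDARY;
            }

            public static void main(String[] args) {
                // e.g. a pool that cannot host config drives, with default settings -> HOST
                System.out.println(chooseLocation(true, false, true, false));
            }
        }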
    
    - Detect the virtual size from the template URL while registering direct download qcow2 templates (KVM hypervisor)
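
    The actual size detection lives in the QCOW2Utils changes listed in the diffstat below; the following is a simplified, stand-alone sketch of the idea, which relies only on the qcow2 header layout (the virtual size is a big-endian 64-bit field at byte offset 24). The class and method names here are illustrative, not CloudStack's.

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.net.URL;

        public class Qcow2VirtualSizeProbe {
            private static final int QCOW2_MAGIC = 0x514649fb; // "QFI\xfb"

            // Returns the virtual size in bytes declared in the qcow2 header of a remote image.
            public static long virtualSizeFromUrl(String templateUrl) throws IOException {
                try (DataInputStream in = new DataInputStream(new URL(templateUrl).openStream())) {
                    if (in.readInt() != QCOW2_MAGIC) {   // bytes 0-3: magic
                        throw new IOException("Not a qcow2 image: " + templateUrl);
                    }
                    in.readInt();                        // bytes 4-7: version
                    in.readLong();                       // bytes 8-15: backing_file_offset
                    in.readInt();                        // bytes 16-19: backing_file_size
                    in.readInt();                        // bytes 20-23: cluster_bits
                    return in.readLong();                // bytes 24-31: virtual size (big-endian)
                }
            }

            public static void main(String[] args) throws IOException {
                System.out.println(virtualSizeFromUrl(args[0]));
            }
        }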
    
    - Updated to pass the full deployment destination when preparing the network(s) on VM start
    
    - Propagate the uploaded direct download certificates to newly added KVM hosts
    
    - Discover the template size for direct download templates using any available host from the zones specified on template registration
    	=> When no zones are specified while registering the template, template size discovery is performed using any available host, picked randomly from one of the available zones
    
    - Release the VM resources when a VM is synced to the Stopped state on PowerReportMissing (after the grace period)
    
    - Retry VM deployment/start when the host cannot grant access to the volume/template
    
    - Mark never-used or never-downloaded templates as Destroyed on deletion, without sending any DeleteCommand
    	=> Do not trigger any DeleteCommand for such templates, as they don't exist on the datastore and so cannot be deleted from it
    
    - Check whether the router filesystem is writable before performing health checks (see the sketch after this list)
    	=> Introduced a new test "filesystem.writable.test" to check whether the filesystem is writable
    	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so the test runs at both locations
    	=> Added a new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable
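
    A minimal, stand-alone sketch of such a writability probe, for illustration only. The shipped check is the Python script filesystem_writable_check.py referenced above; the Java class and method names below are hypothetical.

        import java.io.File;
        import java.io.IOException;

        public class WritableFilesystemProbe {
            // Creates and removes a temporary file in the given directory; failure
            // (e.g. "Read-only file system") means the partition is not writable.
            static boolean isWritable(String directory) {
                try {
                    File probe = File.createTempFile("rw_probe_", ".tmp", new File(directory));
                    return probe.delete();
                } catch (IOException e) {
                    return false;
                }
            }

            public static void main(String[] args) {
                // Both partitions used by the router health checks must be writable.
                System.out.println("/var/cache/cloud writable: " + isWritable("/var/cache/cloud"));
                System.out.println("/root writable: " + isWritable("/root"));
            }
        }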
    
    - Fixed an NPE where the template is null for DATA disks: copy the template to the target storage only for the ROOT disk (with template id), and skip the DATA disk(s)
    
    * Addressed some issues with a few operations on the PowerFlex storage pool.
    
    - Updated the volume migration operation to sync the status and wait for the migration to complete.
    
    - Updated VM snapshot naming for uniqueness of the ScaleIO volume name when more than one volume exists in the VM.
    
    - Added a sync lock while spooling the managed storage template before creating a volume from the template (non-direct download).
    
    - Updated resize volume error message string.
    
    - Blocked the following operations on the PowerFlex storage pool:
      -> Extract Volume
      -> Create Snapshot for VMSnapshot
    
    * Added a PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients; it uses a single gateway client per PowerFlex/ScaleIO storage pool and renews the client when the session token expires (see the sketch below).
    
    - The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
      Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
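
    A minimal sketch of a per-pool client cache with session renewal, to illustrate the idea. It is not the plugin's ScaleIOGatewayClientConnectionPool; the class names and the lifetime handling below are simplified assumptions (the real token also expires after 10 minutes of inactivity).

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class GatewayClientPool {
            // Hypothetical client holding a session token with an absolute expiry time.
            static class GatewayClient {
                final long expiresAtMillis = System.currentTimeMillis() + 8 * 60 * 60 * 1000L; // 8h token lifetime
                boolean isSessionValid() { return System.currentTimeMillis() < expiresAtMillis; }
            }

            private final Map<Long, GatewayClient> clientsByPoolId = new ConcurrentHashMap<>();

            // One gateway client per storage pool; replaced atomically once its session has expired.
            public GatewayClient getClient(long storagePoolId) {
                return clientsByPoolId.compute(storagePoolId,
                        (id, existing) -> (existing != null && existing.isSessionValid()) ? existing : new GatewayClient());
            }

            public static void main(String[] args) {
                GatewayClientPool pool = new GatewayClientPool();
                System.out.println(pool.getClient(1L) == pool.getClient(1L)); // true: client reused while the session is valid
            }
        }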
    
    Other fixes included:
    
    - Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. its Resource State is not Enabled or its Status is not Up)
    
    - Use the physical file size of the template to check free space availability on the host when downloading direct download templates.
    
    - Perform basic tests (for connectivity and file system) on the router before updating the health check config data
    	=> Validate the basic tests (connectivity and file system check) on the router
    	=> Clean up the health check results when the router is destroyed
    
    * Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
    
    * UI changes to support the storage plugin for the PowerFlex/ScaleIO storage pool.
    - The PowerFlex pool URL is generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage (see the sketch below)
    - Updated the protocol to "custom" for the PowerFlex provider
    - Allow VM snapshots for a stopped VM on the KVM hypervisor and PowerFlex/ScaleIO storage pool
    
    and minor improvements in the PowerFlex/ScaleIO storage plugin code
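
    A short sketch of composing the pool URL from the four UI inputs. It assumes a powerflex://<user>:<password>@<gateway>/<storage pool> form; see the documentation PR linked above for the authoritative format, and note that the class below is illustrative, not the actual UI code.

        import java.io.UnsupportedEncodingException;
        import java.net.URLEncoder;

        public class PowerFlexUrlBuilder {
            // Builds the pool URL from the UI inputs; credentials and pool name are
            // URL-encoded so that special characters survive the round trip.
            public static String buildPoolUrl(String gateway, String username, String password, String storagePool)
                    throws UnsupportedEncodingException {
                final String enc = "UTF-8";
                return String.format("powerflex://%s:%s@%s/%s",
                        URLEncoder.encode(username, enc),
                        URLEncoder.encode(password, enc),
                        gateway,
                        URLEncoder.encode(storagePool, enc));
            }

            public static void main(String[] args) throws UnsupportedEncodingException {
                System.out.println(buildPoolUrl("10.1.1.2:443", "admin", "p@ssw0rd", "pool1"));
            }
        }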
    
    * Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
    
    - The findStoragePoolsForMigration API returns PowerFlex pool(s) of a different instance as suitable pool(s) for volume(s) on a PowerFlex storage pool.
    - Volume(s) with snapshots are not allowed to migrate to a different PowerFlex instance.
    - Volume(s) of a running VM are not allowed to migrate to other PowerFlex storage pools.
    - Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported (these checks are illustrated in the sketch below).
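
    A minimal sketch of the eligibility checks above. It is an illustration, not the plugin's actual ScaleIOPrimaryDataStoreDriver logic; the Volume/Pool fields below are hypothetical stand-ins for the corresponding CloudStack state.

        public class PowerFlexMigrationChecks {
            enum PoolType { POWERFLEX, OTHER }

            static class Volume {
                boolean hasSnapshots;
                boolean attachedToRunningVm;
                String powerFlexInstanceId; // identifies the PowerFlex storage instance backing the volume
            }

            static class Pool {
                PoolType type;
                String powerFlexInstanceId;
            }

            // Returns true when the volume may migrate from srcPool to destPool under the rules above.
            static boolean canMigrate(Volume volume, Pool srcPool, Pool destPool) {
                if (srcPool.type != destPool.type) {
                    return false; // PowerFlex <-> non-PowerFlex migration is not supported
                }
                boolean crossInstance = !srcPool.powerFlexInstanceId.equals(destPool.powerFlexInstanceId);
                if (crossInstance && volume.hasSnapshots) {
                    return false; // volumes with snapshots stay on their PowerFlex instance
                }
                if (volume.attachedToRunningVm) {
                    return false; // volumes of a running VM are not migrated to another PowerFlex pool
                }
                return true;
            }

            public static void main(String[] args) {
                Volume v = new Volume();
                v.powerFlexInstanceId = "instance-A";
                Pool src = new Pool(); src.type = PoolType.POWERFLEX; src.powerFlexInstanceId = "instance-A";
                Pool dst = new Pool(); dst.type = PoolType.POWERFLEX; dst.powerFlexInstanceId = "instance-B";
                System.out.println(canMigrate(v, src, dst)); // true: no snapshots and the VM is not running
            }
        }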
    
    * Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
    
    * Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
    
    * Added a new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option when taking a snapshot of volume(s) with storage snapshot support.
    
    * Fix to remove duplicate zone-wide pools listed while finding storage pools for migration
    
    * Updated the PowerFlex/ScaleIO volume migration checks and roll back the migration on failure
    
    * Fixed the PowerFlex/ScaleIO volume name inconsistency in the volume path after migration, caused by a rename failure
---
 agent/conf/agent.properties                        |    3 +
 .../com/cloud/agent/api/to/VirtualMachineTO.java   |   14 +
 .../cloud/exception/StorageAccessException.java    |   32 +-
 .../network/VirtualNetworkApplianceService.java    |    3 +-
 .../com/cloud/network/element/NetworkElement.java  |    4 +
 api/src/main/java/com/cloud/storage/Storage.java   |    1 +
 api/src/main/java/com/cloud/storage/Volume.java    |    6 +
 .../java/com/cloud/vm/VirtualMachineProfile.java   |   10 +
 .../main/java/com/cloud/vm/VmDetailConstants.java  |    2 +
 .../org/apache/cloudstack/alert/AlertService.java  |    7 +-
 .../org/apache/cloudstack/api/ApiConstants.java    |    3 +
 .../admin/offering/CreateDiskOfferingCmd.java      |   23 +-
 .../admin/offering/CreateServiceOfferingCmd.java   |   10 +-
 .../router/GetRouterHealthCheckResultsCmd.java     |    2 +-
 .../cloudstack/api/response/UserVmResponse.java    |    8 +
 .../cloudstack/api/response/VolumeResponse.java    |   14 +-
 .../test/java/com/cloud/storage/StorageTest.java   |    5 +-
 client/pom.xml                                     |    5 +
 .../agent/api/HandleConfigDriveIsoAnswer.java      |   55 +
 .../agent/api/HandleConfigDriveIsoCommand.java     |   15 +-
 .../routing/GetRouterMonitorResultsCommand.java    |    8 +-
 .../agent/resource/virtualnetwork/VRScripts.java   |    2 +
 .../virtualnetwork/VirtualRoutingResource.java     |   33 +
 .../StorageSubsystemCommandHandlerBase.java        |   11 +-
 .../agent/directdownload/CheckUrlCommand.java      |    8 +-
 .../directdownload/DirectDownloadCommand.java      |   27 +-
 .../cloudstack/storage/to/PrimaryDataStoreTO.java  |    7 +-
 .../cloudstack/storage/to/VolumeObjectTO.java      |   10 +
 .../java/com/cloud/vm/VirtualMachineManager.java   |    8 +-
 .../service/VolumeOrchestrationService.java        |    5 +-
 .../subsystem/api/storage/DataStoreDriver.java     |    5 +-
 .../api/storage/PrimaryDataStoreDriver.java        |   31 +
 .../subsystem/api/storage/TemplateDataFactory.java |    4 +
 .../engine/subsystem/api/storage/TemplateInfo.java |    2 +
 .../engine/subsystem/api/storage/VolumeInfo.java   |    3 +
 .../subsystem/api/storage/VolumeService.java       |    7 +-
 .../java/com/cloud/resource/ResourceManager.java   |    7 +-
 .../java/com/cloud/storage/StorageManager.java     |   34 +-
 .../main/java/com/cloud/storage/StorageUtil.java   |   15 +-
 .../com/cloud/vm/VirtualMachineProfileImpl.java    |   28 +
 .../com/cloud/vm/VirtualMachineManagerImpl.java    |  118 +-
 .../engine/orchestration/VolumeOrchestrator.java   |  124 +-
 .../com/cloud/storage/dao/StoragePoolHostDao.java  |    2 +
 .../cloud/storage/dao/StoragePoolHostDaoImpl.java  |   30 +
 .../storage/motion/DataMotionServiceImpl.java      |    4 +-
 .../KvmNonManagedStorageDataMotionStrategy.java    |    5 +
 .../motion/StorageSystemDataMotionStrategy.java    |   63 +-
 .../KvmNonManagedStorageSystemDataMotionTest.java  |    4 +
 .../storage/image/TemplateDataFactoryImpl.java     |   38 +
 .../storage/image/TemplateServiceImpl.java         |    9 +-
 .../storage/image/store/TemplateObject.java        |   29 +
 engine/storage/snapshot/pom.xml                    |    6 +
 .../storage/snapshot/ScaleIOSnapshotStrategy.java  |   93 ++
 .../snapshot/StorageSystemSnapshotStrategy.java    |   77 +-
 .../vmsnapshot/ScaleIOVMSnapshotStrategy.java      |  487 ++++++++
 ...ing-engine-storage-snapshot-storage-context.xml |    6 +
 .../allocator/AbstractStoragePoolAllocator.java    |   10 +-
 .../allocator/ZoneWideStoragePoolAllocator.java    |    7 -
 .../storage/helper/VMSnapshotHelperImpl.java       |   30 +
 .../storage/image/BaseImageStoreDriverImpl.java    |    6 +
 .../storage/vmsnapshot/VMSnapshotHelper.java       |    5 +
 .../storage/datastore/PrimaryDataStoreImpl.java    |    3 +-
 .../cloudstack/storage/volume/VolumeObject.java    |   14 +-
 .../storage/volume/VolumeServiceImpl.java          |  632 +++++++++-
 .../direct/download/DirectDownloadService.java     |    5 +
 plugins/hypervisors/kvm/pom.xml                    |    6 +
 .../kvm/resource/LibvirtComputingResource.java     |   51 +-
 .../kvm/resource/LibvirtStoragePoolDef.java        |    4 +-
 .../kvm/resource/LibvirtStoragePoolXMLParser.java  |    2 +-
 .../resource/wrapper/LibvirtCheckUrlCommand.java   |   15 +-
 .../LibvirtGetVolumeStatsCommandWrapper.java       |   16 +-
 .../LibvirtHandleConfigDriveCommandWrapper.java    |  126 +-
 .../LibvirtPrepareForMigrationCommandWrapper.java  |   28 +-
 .../kvm/storage/IscsiAdmStorageAdaptor.java        |    8 +-
 .../kvm/storage/IscsiAdmStoragePool.java           |    7 +-
 .../hypervisor/kvm/storage/KVMStoragePool.java     |    4 +-
 .../kvm/storage/KVMStoragePoolManager.java         |   31 +-
 .../kvm/storage/KVMStorageProcessor.java           |  114 +-
 .../kvm/storage/LibvirtStorageAdaptor.java         |   22 +-
 .../hypervisor/kvm/storage/LibvirtStoragePool.java |   12 +-
 .../kvm/storage/ManagedNfsStorageAdaptor.java      |    3 +-
 .../kvm/storage/ScaleIOStorageAdaptor.java         |  393 ++++++
 .../hypervisor/kvm/storage/ScaleIOStoragePool.java |  181 +++
 .../hypervisor/kvm/storage/StorageAdaptor.java     |    5 +-
 .../kvm/storage/ScaleIOStoragePoolTest.java        |  155 +++
 plugins/pom.xml                                    |    1 +
 .../driver/ElastistorPrimaryDataStoreDriver.java   |   30 +
 .../driver/DateraPrimaryDataStoreDriver.java       |   91 +-
 .../CloudStackPrimaryDataStoreDriverImpl.java      |   31 +
 .../driver/NexentaPrimaryDataStoreDriver.java      |   30 +
 .../driver/SamplePrimaryDataStoreDriverImpl.java   |   29 +
 .../storage/volume/scaleio}/pom.xml                |   38 +-
 .../storage/datastore/api/ProtectionDomain.java    |   43 +-
 .../cloudstack/storage/datastore/api/Sdc.java      |  138 +++
 .../storage/datastore/api/SdcMappingInfo.java      |   29 +-
 .../storage/datastore/api/SnapshotDef.java         |   34 +-
 .../storage/datastore/api/SnapshotDefs.java        |   26 +-
 .../storage/datastore/api/SnapshotGroup.java       |   32 +-
 .../storage/datastore/api/StoragePool.java         |   75 ++
 .../datastore/api/StoragePoolStatistics.java       |   85 ++
 .../cloudstack/storage/datastore/api/VTree.java    |   25 +-
 .../storage/datastore/api/VTreeMigrationInfo.java  |   76 ++
 .../cloudstack/storage/datastore/api/Volume.java   |  152 +++
 .../storage/datastore/api/VolumeStatistics.java    |   53 +
 .../datastore/client/ScaleIOGatewayClient.java     |   88 ++
 .../client/ScaleIOGatewayClientConnectionPool.java |   90 ++
 .../datastore/client/ScaleIOGatewayClientImpl.java | 1255 ++++++++++++++++++++
 .../driver/ScaleIOPrimaryDataStoreDriver.java      |  950 +++++++++++++++
 .../ScaleIOPrimaryDataStoreLifeCycle.java          |  452 +++++++
 .../datastore/provider/ScaleIOHostListener.java    |  141 +++
 .../provider/ScaleIOPrimaryDatastoreProvider.java  |   77 ++
 .../storage/datastore/util/ScaleIOUtil.java        |  119 ++
 .../storage-volume-scaleio/module.properties       |   21 +
 .../spring-storage-volume-scaleio-context.xml      |   35 +
 .../client/ScaleIOGatewayClientImplTest.java       |   34 +-
 .../ScaleIOPrimaryDataStoreLifeCycleTest.java      |  250 ++++
 .../driver/SolidFirePrimaryDataStoreDriver.java    |   31 +
 .../java/com/cloud/alert/AlertManagerImpl.java     |    3 +-
 server/src/main/java/com/cloud/api/ApiDBUtils.java |    4 +-
 .../java/com/cloud/api/query/QueryManagerImpl.java |   14 +-
 .../com/cloud/api/query/ViewResponseHelper.java    |   11 +-
 .../com/cloud/api/query/dao/UserVmJoinDaoImpl.java |    5 +
 .../com/cloud/capacity/CapacityManagerImpl.java    |   16 +-
 .../configuration/ConfigurationManagerImpl.java    |   34 +-
 .../deploy/DeploymentPlanningManagerImpl.java      |   25 +-
 .../com/cloud/hypervisor/HypervisorGuruBase.java   |    1 +
 .../kvm/discoverer/LibvirtServerDiscoverer.java    |   11 +
 .../network/element/ConfigDriveNetworkElement.java |  147 ++-
 .../cloud/network/router/NetworkHelperImpl.java    |    4 +
 .../router/VirtualNetworkApplianceManager.java     |    2 +-
 .../router/VirtualNetworkApplianceManagerImpl.java |  216 ++--
 .../com/cloud/resource/ResourceManagerImpl.java    |    5 +-
 .../com/cloud/server/ManagementServerImpl.java     |   32 +-
 .../main/java/com/cloud/server/StatsCollector.java |    3 +-
 .../java/com/cloud/storage/StorageManagerImpl.java |  179 ++-
 .../com/cloud/storage/VolumeApiServiceImpl.java    |   94 +-
 .../cloud/storage/listener/StoragePoolMonitor.java |   48 +-
 .../cloud/storage/snapshot/SnapshotManager.java    |    2 +
 .../storage/snapshot/SnapshotManagerImpl.java      |   11 +-
 .../cloud/template/HypervisorTemplateAdapter.java  |   36 +-
 .../main/java/com/cloud/vm/UserVmManagerImpl.java  |   37 +-
 .../cloud/vm/snapshot/VMSnapshotManagerImpl.java   |   64 +-
 .../direct/download/DirectDownloadManagerImpl.java |  145 ++-
 .../element/ConfigDriveNetworkElementTest.java     |    6 +-
 .../cloud/resource/MockResourceManagerImpl.java    |    2 +-
 .../cloud/vm/snapshot/VMSnapshotManagerTest.java   |   20 +-
 .../vpc/MockVpcVirtualNetworkApplianceManager.java |    5 +-
 .../resource/NfsSecondaryStorageResource.java      |   14 +-
 .../opt/cloud/bin/filesystem_writable_check.py     |   46 +
 test/integration/plugins/scaleio/README.md         |   46 +
 .../plugins/scaleio/test_scaleio_volumes.py        | 1213 +++++++++++++++++++
 test/integration/smoke/test_service_offerings.py   |   12 +-
 test/integration/smoke/test_vm_snapshots.py        |    3 +-
 ui/public/locales/en.json                          |    4 +
 ui/src/config/section/compute.js                   |    3 +-
 ui/src/views/compute/CreateSnapshotWizard.vue      |    9 +-
 ui/src/views/infra/AddPrimaryStorage.vue           |   67 +-
 ui/src/views/storage/TakeSnapshot.vue              |    4 +-
 utils/pom.xml                                      |    1 +
 .../java/com/cloud/utils/SerialVersionUID.java     |    1 +
 .../java/com/cloud/utils/storage/QCOW2Utils.java   |   64 +
 161 files changed, 9971 insertions(+), 726 deletions(-)

diff --git a/agent/conf/agent.properties b/agent/conf/agent.properties
index 325e12d..06d8f3f 100644
--- a/agent/conf/agent.properties
+++ b/agent/conf/agent.properties
@@ -143,6 +143,9 @@ hypervisor.type=kvm
 # This parameter specifies a directory on the host local storage for temporary storing direct download templates
 #direct.download.temporary.download.location=/var/lib/libvirt/images
 
+# This parameter specifies a directory on the host local storage for creating and hosting the config drives
+#host.cache.location=/var/cache/cloud
+
 # set the rolling maintenance hook scripts directory
 #rolling.maintenance.hooks.dir=/etc/cloudstack/agent/hooks.d
 
diff --git a/api/src/main/java/com/cloud/agent/api/to/VirtualMachineTO.java b/api/src/main/java/com/cloud/agent/api/to/VirtualMachineTO.java
index efc735c..c472938 100644
--- a/api/src/main/java/com/cloud/agent/api/to/VirtualMachineTO.java
+++ b/api/src/main/java/com/cloud/agent/api/to/VirtualMachineTO.java
@@ -20,6 +20,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.HashMap;
 
+import com.cloud.network.element.NetworkElement;
 import com.cloud.template.VirtualMachineTemplate.BootloaderType;
 import com.cloud.vm.VirtualMachine;
 import com.cloud.vm.VirtualMachine.Type;
@@ -73,6 +74,7 @@ public class VirtualMachineTO {
     String configDriveLabel = null;
     String configDriveIsoRootFolder = null;
     String configDriveIsoFile = null;
+    NetworkElement.Location configDriveLocation = NetworkElement.Location.SECONDARY;
 
     Double cpuQuotaPercentage = null;
 
@@ -349,6 +351,18 @@ public class VirtualMachineTO {
         this.configDriveIsoFile = configDriveIsoFile;
     }
 
+    public boolean isConfigDriveOnHostCache() {
+        return (this.configDriveLocation == NetworkElement.Location.HOST);
+    }
+
+    public NetworkElement.Location getConfigDriveLocation() {
+        return configDriveLocation;
+    }
+
+    public void setConfigDriveLocation(NetworkElement.Location configDriveLocation) {
+        this.configDriveLocation = configDriveLocation;
+    }
+
     public Map<String, String> getGuestOsDetails() {
         return guestOsDetails;
     }
diff --git a/core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java b/api/src/main/java/com/cloud/exception/StorageAccessException.java
similarity index 67%
copy from core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java
copy to api/src/main/java/com/cloud/exception/StorageAccessException.java
index ed49997..eefbcf5 100644
--- a/core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java
+++ b/api/src/main/java/com/cloud/exception/StorageAccessException.java
@@ -1,4 +1,3 @@
-//
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
@@ -15,28 +14,19 @@
 // KIND, either express or implied.  See the License for the
 // specific language governing permissions and limitations
 // under the License.
-//
-
-package org.apache.cloudstack.agent.directdownload;
+package com.cloud.exception;
 
-import com.cloud.agent.api.Command;
+import com.cloud.utils.SerialVersionUID;
 
-public class CheckUrlCommand extends Command {
+/**
+ * If the cause is due to storage pool not accessible on host, calling
+ * problem with.
+ *
+ */
+public class StorageAccessException extends RuntimeException {
+    private static final long serialVersionUID = SerialVersionUID.StorageAccessException;
 
-    private String url;
-
-    public String getUrl() {
-        return url;
+    public StorageAccessException(String message) {
+        super(message);
     }
-
-    public CheckUrlCommand(final String url) {
-        super();
-        this.url = url;
-    }
-
-    @Override
-    public boolean executeInSequence() {
-        return false;
-    }
-
 }
diff --git a/api/src/main/java/com/cloud/network/VirtualNetworkApplianceService.java b/api/src/main/java/com/cloud/network/VirtualNetworkApplianceService.java
index 98fb8be..8504efd 100644
--- a/api/src/main/java/com/cloud/network/VirtualNetworkApplianceService.java
+++ b/api/src/main/java/com/cloud/network/VirtualNetworkApplianceService.java
@@ -26,6 +26,7 @@ import com.cloud.exception.InsufficientCapacityException;
 import com.cloud.exception.ResourceUnavailableException;
 import com.cloud.network.router.VirtualRouter;
 import com.cloud.user.Account;
+import com.cloud.utils.Pair;
 
 public interface VirtualNetworkApplianceService {
     /**
@@ -73,5 +74,5 @@ public interface VirtualNetworkApplianceService {
      * @param routerId id of the router
      * @return
      */
-    boolean performRouterHealthChecks(long routerId);
+    Pair<Boolean, String> performRouterHealthChecks(long routerId);
 }
diff --git a/api/src/main/java/com/cloud/network/element/NetworkElement.java b/api/src/main/java/com/cloud/network/element/NetworkElement.java
index 951732f..fa67575 100644
--- a/api/src/main/java/com/cloud/network/element/NetworkElement.java
+++ b/api/src/main/java/com/cloud/network/element/NetworkElement.java
@@ -39,6 +39,10 @@ import com.cloud.vm.VirtualMachineProfile;
  */
 public interface NetworkElement extends Adapter {
 
+    enum Location {
+        SECONDARY, PRIMARY, HOST
+    }
+
     Map<Service, Map<Capability, String>> getCapabilities();
 
     /**
diff --git a/api/src/main/java/com/cloud/storage/Storage.java b/api/src/main/java/com/cloud/storage/Storage.java
index 7a229b6..362cc2c 100644
--- a/api/src/main/java/com/cloud/storage/Storage.java
+++ b/api/src/main/java/com/cloud/storage/Storage.java
@@ -135,6 +135,7 @@ public class Storage {
         OCFS2(true, false),
         SMB(true, false),
         Gluster(true, false),
+        PowerFlex(true, true), // Dell EMC PowerFlex/ScaleIO (formerly VxFlexOS)
         ManagedNFS(true, false),
         DatastoreCluster(true, true); // for VMware, to abstract pool of clusters
 
diff --git a/api/src/main/java/com/cloud/storage/Volume.java b/api/src/main/java/com/cloud/storage/Volume.java
index 5979697..9036fa5 100644
--- a/api/src/main/java/com/cloud/storage/Volume.java
+++ b/api/src/main/java/com/cloud/storage/Volume.java
@@ -29,6 +29,11 @@ import com.cloud.utils.fsm.StateMachine2;
 import com.cloud.utils.fsm.StateObject;
 
 public interface Volume extends ControlledEntity, Identity, InternalIdentity, BasedOn, StateObject<Volume.State>, Displayable {
+
+    // Managed storage volume parameters (specified in the compute/disk offering for PowerFlex)
+    String BANDWIDTH_LIMIT_IN_MBPS = "bandwidthLimitInMbps";
+    String IOPS_LIMIT = "iopsLimit";
+
     enum Type {
         UNKNOWN, ROOT, SWAP, DATADISK, ISO
     };
@@ -79,6 +84,7 @@ public interface Volume extends ControlledEntity, Identity, InternalIdentity, Ba
             s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Creating, Event.OperationSucceeded, Ready, null));
             s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Creating, Event.DestroyRequested, Destroy, null));
             s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Creating, Event.CreateRequested, Creating, null));
+            s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Ready, Event.CreateRequested, Creating, null));
             s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Ready, Event.ResizeRequested, Resizing, null));
             s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Resizing, Event.OperationSucceeded, Ready, Arrays.asList(new StateMachine2.Transition.Impact[]{StateMachine2.Transition.Impact.USAGE})));
             s_fsm.addTransition(new StateMachine2.Transition<State, Event>(Resizing, Event.OperationFailed, Ready, null));
diff --git a/api/src/main/java/com/cloud/vm/VirtualMachineProfile.java b/api/src/main/java/com/cloud/vm/VirtualMachineProfile.java
index c17a716..f87939a 100644
--- a/api/src/main/java/com/cloud/vm/VirtualMachineProfile.java
+++ b/api/src/main/java/com/cloud/vm/VirtualMachineProfile.java
@@ -20,7 +20,9 @@ import java.util.List;
 import java.util.Map;
 
 import com.cloud.agent.api.to.DiskTO;
+import com.cloud.host.Host;
 import com.cloud.hypervisor.Hypervisor.HypervisorType;
+import com.cloud.network.element.NetworkElement;
 import com.cloud.offering.ServiceOffering;
 import com.cloud.template.VirtualMachineTemplate;
 import com.cloud.template.VirtualMachineTemplate.BootloaderType;
@@ -54,6 +56,10 @@ public interface VirtualMachineProfile {
 
     void setConfigDriveIsoFile(String isoFile);
 
+    NetworkElement.Location getConfigDriveLocation();
+
+    void setConfigDriveLocation(NetworkElement.Location location);
+
     public static class Param {
 
         public static final Param VmPassword = new Param("VmPassword");
@@ -100,6 +106,10 @@ public interface VirtualMachineProfile {
         }
     }
 
+    Long getHostId();
+
+    void setHost(Host host);
+
     String getHostName();
 
     String getInstanceName();
diff --git a/api/src/main/java/com/cloud/vm/VmDetailConstants.java b/api/src/main/java/com/cloud/vm/VmDetailConstants.java
index 9991e1f..64de939 100644
--- a/api/src/main/java/com/cloud/vm/VmDetailConstants.java
+++ b/api/src/main/java/com/cloud/vm/VmDetailConstants.java
@@ -56,6 +56,8 @@ public interface VmDetailConstants {
     String PASSWORD = "password";
     String ENCRYPTED_PASSWORD = "Encrypted.Password";
 
+    String CONFIG_DRIVE_LOCATION = "configDriveLocation";
+
     // VM import with nic, disk and custom params for custom compute offering
     String NIC = "nic";
     String NETWORK = "network";
diff --git a/api/src/main/java/org/apache/cloudstack/alert/AlertService.java b/api/src/main/java/org/apache/cloudstack/alert/AlertService.java
index 26c3f3c..c2cd1b2 100644
--- a/api/src/main/java/org/apache/cloudstack/alert/AlertService.java
+++ b/api/src/main/java/org/apache/cloudstack/alert/AlertService.java
@@ -16,12 +16,12 @@
 // under the License.
 package org.apache.cloudstack.alert;
 
-import com.cloud.capacity.Capacity;
-import com.cloud.exception.InvalidParameterValueException;
-
 import java.util.HashSet;
 import java.util.Set;
 
+import com.cloud.capacity.Capacity;
+import com.cloud.exception.InvalidParameterValueException;
+
 public interface AlertService {
     public static class AlertType {
         private static Set<AlertType> defaultAlertTypes = new HashSet<AlertType>();
@@ -69,6 +69,7 @@ public interface AlertService {
         public static final AlertType ALERT_TYPE_OOBM_AUTH_ERROR = new AlertType((short)29, "ALERT.OOBM.AUTHERROR", true);
         public static final AlertType ALERT_TYPE_HA_ACTION = new AlertType((short)30, "ALERT.HA.ACTION", true);
         public static final AlertType ALERT_TYPE_CA_CERT = new AlertType((short)31, "ALERT.CA.CERT", true);
+        public static final AlertType ALERT_TYPE_VM_SNAPSHOT = new AlertType((short)32, "ALERT.VM.SNAPSHOT", true);
 
         public short getType() {
             return type;
diff --git a/api/src/main/java/org/apache/cloudstack/api/ApiConstants.java b/api/src/main/java/org/apache/cloudstack/api/ApiConstants.java
index 5c3050c..8b9df63 100644
--- a/api/src/main/java/org/apache/cloudstack/api/ApiConstants.java
+++ b/api/src/main/java/org/apache/cloudstack/api/ApiConstants.java
@@ -339,6 +339,7 @@ public class ApiConstants {
     public static final String SNAPSHOT_POLICY_ID = "snapshotpolicyid";
     public static final String SNAPSHOT_TYPE = "snapshottype";
     public static final String SNAPSHOT_QUIESCEVM = "quiescevm";
+    public static final String SUPPORTS_STORAGE_SNAPSHOT = "supportsstoragesnapshot";
     public static final String SOURCE_ZONE_ID = "sourcezoneid";
     public static final String START_DATE = "startdate";
     public static final String START_ID = "startid";
@@ -834,6 +835,8 @@ public class ApiConstants {
     public static final String TEMPLATETYPE = "templatetype";
     public static final String SOURCETEMPLATEID = "sourcetemplateid";
 
+    public static final String POOL_TYPE ="pooltype";
+
     public enum BootType {
         UEFI, BIOS;
 
diff --git a/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java b/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java
index a830777..e7b46be 100644
--- a/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java
+++ b/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateDiskOfferingCmd.java
@@ -16,8 +16,11 @@
 // under the License.
 package org.apache.cloudstack.api.command.admin.offering;
 
+import java.util.Collection;
+import java.util.HashMap;
 import java.util.LinkedHashSet;
 import java.util.List;
+import java.util.Map;
 import java.util.Set;
 
 import org.apache.cloudstack.api.APICommand;
@@ -31,6 +34,7 @@ import org.apache.cloudstack.api.response.DomainResponse;
 import org.apache.cloudstack.api.response.VsphereStoragePoliciesResponse;
 import org.apache.cloudstack.api.response.ZoneResponse;
 import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
 import org.apache.log4j.Logger;
 
 import com.cloud.offering.DiskOffering;
@@ -155,7 +159,10 @@ public class CreateDiskOfferingCmd extends BaseCmd {
     @Parameter(name = ApiConstants.STORAGE_POLICY, type = CommandType.UUID, entityType = VsphereStoragePoliciesResponse.class,required = false, description = "Name of the storage policy defined at vCenter, this is applicable only for VMware", since = "4.15")
     private Long storagePolicy;
 
-/////////////////////////////////////////////////////
+    @Parameter(name = ApiConstants.DETAILS, type = CommandType.MAP, description = "details to specify disk offering parameters", since = "4.16")
+    private Map details;
+
+    /////////////////////////////////////////////////////
     /////////////////// Accessors ///////////////////////
     /////////////////////////////////////////////////////
 
@@ -277,6 +284,20 @@ public class CreateDiskOfferingCmd extends BaseCmd {
         return cacheMode;
     }
 
+    public Map<String, String> getDetails() {
+        Map<String, String> detailsMap = new HashMap<>();
+        if (MapUtils.isNotEmpty(details)) {
+            Collection<?> props = details.values();
+            for (Object prop : props) {
+                HashMap<String, String> detail = (HashMap<String, String>) prop;
+                for (Map.Entry<String, String> entry: detail.entrySet()) {
+                    detailsMap.put(entry.getKey(),entry.getValue());
+                }
+            }
+        }
+        return detailsMap;
+    }
+
     public Long getStoragePolicy() {
         return storagePolicy;
     }
diff --git a/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java b/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java
index 3219422..d2d6f38 100644
--- a/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java
+++ b/api/src/main/java/org/apache/cloudstack/api/command/admin/offering/CreateServiceOfferingCmd.java
@@ -321,7 +321,15 @@ public class CreateServiceOfferingCmd extends BaseCmd {
             Collection<?> props = details.values();
             for (Object prop : props) {
                 HashMap<String, String> detail = (HashMap<String, String>) prop;
-                detailsMap.put(detail.get("key"), detail.get("value"));
+                // Compatibility with key and value pairs input from API cmd for details map parameter
+                if (!Strings.isNullOrEmpty(detail.get("key")) && !Strings.isNullOrEmpty(detail.get("value"))) {
+                    detailsMap.put(detail.get("key"), detail.get("value"));
+                    continue;
+                }
+
+                for (Map.Entry<String, String> entry: detail.entrySet()) {
+                    detailsMap.put(entry.getKey(),entry.getValue());
+                }
             }
         }
         return detailsMap;
diff --git a/api/src/main/java/org/apache/cloudstack/api/command/admin/router/GetRouterHealthCheckResultsCmd.java b/api/src/main/java/org/apache/cloudstack/api/command/admin/router/GetRouterHealthCheckResultsCmd.java
index 5efc6de..dc1020b 100644
--- a/api/src/main/java/org/apache/cloudstack/api/command/admin/router/GetRouterHealthCheckResultsCmd.java
+++ b/api/src/main/java/org/apache/cloudstack/api/command/admin/router/GetRouterHealthCheckResultsCmd.java
@@ -111,7 +111,7 @@ public class GetRouterHealthCheckResultsCmd extends BaseCmd {
             setResponseObject(routerResponse);
         } catch (CloudRuntimeException ex){
             ex.printStackTrace();
-            throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to execute command due to exception: " + ex.getLocalizedMessage());
+            throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "Failed to get health check results due to: " + ex.getLocalizedMessage());
         }
     }
 }
diff --git a/api/src/main/java/org/apache/cloudstack/api/response/UserVmResponse.java b/api/src/main/java/org/apache/cloudstack/api/response/UserVmResponse.java
index 44eaba7..7204d5a 100644
--- a/api/src/main/java/org/apache/cloudstack/api/response/UserVmResponse.java
+++ b/api/src/main/java/org/apache/cloudstack/api/response/UserVmResponse.java
@@ -310,6 +310,10 @@ public class UserVmResponse extends BaseResponseWithTagInformation implements Co
     @Param(description = "Guest vm Boot Type")
     private String bootType;
 
+    @SerializedName(ApiConstants.POOL_TYPE)
+    @Param(description = "the pool type of the virtual machine", since = "4.16")
+    private String poolType;
+
     public UserVmResponse() {
         securityGroupList = new LinkedHashSet<SecurityGroupResponse>();
         nics = new LinkedHashSet<NicResponse>();
@@ -901,4 +905,8 @@ public class UserVmResponse extends BaseResponseWithTagInformation implements Co
     public String getBootMode() { return bootMode; }
 
     public void setBootMode(String bootMode) { this.bootMode = bootMode; }
+
+    public String getPoolType() { return poolType; }
+
+    public void setPoolType(String poolType) { this.poolType = poolType; }
 }
diff --git a/api/src/main/java/org/apache/cloudstack/api/response/VolumeResponse.java b/api/src/main/java/org/apache/cloudstack/api/response/VolumeResponse.java
index 1cdd696..e9254ef 100644
--- a/api/src/main/java/org/apache/cloudstack/api/response/VolumeResponse.java
+++ b/api/src/main/java/org/apache/cloudstack/api/response/VolumeResponse.java
@@ -248,8 +248,12 @@ public class VolumeResponse extends BaseResponseWithTagInformation implements Co
     @Param(description = "need quiesce vm or not when taking snapshot", since = "4.3")
     private boolean needQuiescevm;
 
+    @SerializedName(ApiConstants.SUPPORTS_STORAGE_SNAPSHOT)
+    @Param(description = "true if storage snapshot is supported for the volume, false otherwise", since = "4.16")
+    private boolean supportsStorageSnapshot;
+
     @SerializedName(ApiConstants.PHYSICAL_SIZE)
-    @Param(description = "the bytes alloaated")
+    @Param(description = "the bytes allocated")
     private Long physicalsize;
 
     @SerializedName(ApiConstants.VIRTUAL_SIZE)
@@ -538,6 +542,14 @@ public class VolumeResponse extends BaseResponseWithTagInformation implements Co
         return this.needQuiescevm;
     }
 
+    public void setSupportsStorageSnapshot(boolean supportsStorageSnapshot) {
+        this.supportsStorageSnapshot = supportsStorageSnapshot;
+    }
+
+    public boolean getSupportsStorageSnapshot() {
+        return this.supportsStorageSnapshot;
+    }
+
     public String getIsoId() {
         return isoId;
     }
diff --git a/api/src/test/java/com/cloud/storage/StorageTest.java b/api/src/test/java/com/cloud/storage/StorageTest.java
index 61909e7..bf45169 100644
--- a/api/src/test/java/com/cloud/storage/StorageTest.java
+++ b/api/src/test/java/com/cloud/storage/StorageTest.java
@@ -16,11 +16,12 @@
 // under the License.
 package com.cloud.storage;
 
-import com.cloud.storage.Storage.StoragePoolType;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 
+import com.cloud.storage.Storage.StoragePoolType;
+
 public class StorageTest {
     @Before
     public void setUp() {
@@ -37,6 +38,7 @@ public class StorageTest {
         Assert.assertFalse(StoragePoolType.LVM.isShared());
         Assert.assertTrue(StoragePoolType.CLVM.isShared());
         Assert.assertTrue(StoragePoolType.RBD.isShared());
+        Assert.assertTrue(StoragePoolType.PowerFlex.isShared());
         Assert.assertTrue(StoragePoolType.SharedMountPoint.isShared());
         Assert.assertTrue(StoragePoolType.VMFS.isShared());
         Assert.assertTrue(StoragePoolType.PreSetup.isShared());
@@ -59,6 +61,7 @@ public class StorageTest {
         Assert.assertFalse(StoragePoolType.LVM.supportsOverProvisioning());
         Assert.assertFalse(StoragePoolType.CLVM.supportsOverProvisioning());
         Assert.assertTrue(StoragePoolType.RBD.supportsOverProvisioning());
+        Assert.assertTrue(StoragePoolType.PowerFlex.supportsOverProvisioning());
         Assert.assertFalse(StoragePoolType.SharedMountPoint.supportsOverProvisioning());
         Assert.assertTrue(StoragePoolType.VMFS.supportsOverProvisioning());
         Assert.assertTrue(StoragePoolType.PreSetup.supportsOverProvisioning());
diff --git a/client/pom.xml b/client/pom.xml
index 4547436..904b98b 100644
--- a/client/pom.xml
+++ b/client/pom.xml
@@ -89,6 +89,11 @@
         </dependency>
         <dependency>
             <groupId>org.apache.cloudstack</groupId>
+            <artifactId>cloud-plugin-storage-volume-scaleio</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.cloudstack</groupId>
             <artifactId>cloud-server</artifactId>
             <version>${project.version}</version>
         </dependency>
diff --git a/core/src/main/java/com/cloud/agent/api/HandleConfigDriveIsoAnswer.java b/core/src/main/java/com/cloud/agent/api/HandleConfigDriveIsoAnswer.java
new file mode 100644
index 0000000..769f886
--- /dev/null
+++ b/core/src/main/java/com/cloud/agent/api/HandleConfigDriveIsoAnswer.java
@@ -0,0 +1,55 @@
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+//
+
+package com.cloud.agent.api;
+
+import com.cloud.network.element.NetworkElement;
+import com.cloud.utils.exception.ExceptionUtil;
+
+public class HandleConfigDriveIsoAnswer extends Answer {
+
+    @LogLevel(LogLevel.Log4jLevel.Off)
+    private NetworkElement.Location location = NetworkElement.Location.SECONDARY;
+
+    public HandleConfigDriveIsoAnswer(final HandleConfigDriveIsoCommand cmd) {
+        super(cmd);
+    }
+
+    public HandleConfigDriveIsoAnswer(final HandleConfigDriveIsoCommand cmd, final NetworkElement.Location location) {
+        super(cmd);
+        this.location = location;
+    }
+
+    public HandleConfigDriveIsoAnswer(final HandleConfigDriveIsoCommand cmd, final NetworkElement.Location location, final String details) {
+        super(cmd, true, details);
+        this.location = location;
+    }
+
+    public HandleConfigDriveIsoAnswer(final HandleConfigDriveIsoCommand cmd, final String details) {
+        super(cmd, false, details);
+    }
+
+    public HandleConfigDriveIsoAnswer(final HandleConfigDriveIsoCommand cmd, final Exception e) {
+        this(cmd, ExceptionUtil.toString(e));
+    }
+
+    public NetworkElement.Location getConfigDriveLocation() {
+        return location;
+    }
+}
diff --git a/core/src/main/java/com/cloud/agent/api/HandleConfigDriveIsoCommand.java b/core/src/main/java/com/cloud/agent/api/HandleConfigDriveIsoCommand.java
index 3d8d8f7..062274f 100644
--- a/core/src/main/java/com/cloud/agent/api/HandleConfigDriveIsoCommand.java
+++ b/core/src/main/java/com/cloud/agent/api/HandleConfigDriveIsoCommand.java
@@ -25,16 +25,19 @@ public class HandleConfigDriveIsoCommand extends Command {
 
     @LogLevel(LogLevel.Log4jLevel.Off)
     private String isoData;
-
     private String isoFile;
     private boolean create = false;
     private DataStoreTO destStore;
+    private boolean useHostCacheOnUnsupportedPool = false;
+    private boolean preferHostCache = false;
 
-    public HandleConfigDriveIsoCommand(String isoFile, String isoData, DataStoreTO destStore, boolean create) {
+    public HandleConfigDriveIsoCommand(String isoFile, String isoData, DataStoreTO destStore, boolean useHostCacheOnUnsupportedPool, boolean preferHostCache, boolean create) {
         this.isoFile = isoFile;
         this.isoData = isoData;
         this.destStore = destStore;
         this.create = create;
+        this.useHostCacheOnUnsupportedPool = useHostCacheOnUnsupportedPool;
+        this.preferHostCache = preferHostCache;
     }
 
     @Override
@@ -57,4 +60,12 @@ public class HandleConfigDriveIsoCommand extends Command {
     public String getIsoFile() {
         return isoFile;
     }
+
+    public boolean isHostCachePreferred() {
+        return preferHostCache;
+    }
+
+    public boolean getUseHostCacheOnUnsupportedPool() {
+        return useHostCacheOnUnsupportedPool;
+    }
 }
diff --git a/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java b/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
index 779a0f4..e32dda3 100644
--- a/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
+++ b/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
@@ -19,12 +19,14 @@ package com.cloud.agent.api.routing;
 
 public class GetRouterMonitorResultsCommand extends NetworkElementCommand {
     private boolean performFreshChecks;
+    private boolean validateBasicTestsOnly;
 
     protected GetRouterMonitorResultsCommand() {
     }
 
-    public GetRouterMonitorResultsCommand(boolean performFreshChecks) {
+    public GetRouterMonitorResultsCommand(boolean performFreshChecks, boolean validateBasicTestsOnly) {
         this.performFreshChecks = performFreshChecks;
+        this.validateBasicTestsOnly = validateBasicTestsOnly;
     }
 
     @Override
@@ -35,4 +37,8 @@ public class GetRouterMonitorResultsCommand extends NetworkElementCommand {
     public boolean shouldPerformFreshChecks() {
         return performFreshChecks;
     }
+
+    public boolean shouldValidateBasicTestsOnly() {
+        return validateBasicTestsOnly;
+    }
 }
\ No newline at end of file
diff --git a/core/src/main/java/com/cloud/agent/resource/virtualnetwork/VRScripts.java b/core/src/main/java/com/cloud/agent/resource/virtualnetwork/VRScripts.java
index f8cf6d4..834a11c 100644
--- a/core/src/main/java/com/cloud/agent/resource/virtualnetwork/VRScripts.java
+++ b/core/src/main/java/com/cloud/agent/resource/virtualnetwork/VRScripts.java
@@ -75,4 +75,6 @@ public class VRScripts {
     public static final String DIAGNOSTICS = "diagnostics.py";
     public static final String RETRIEVE_DIAGNOSTICS = "get_diagnostics_files.py";
     public static final String VR_FILE_CLEANUP = "cleanup.sh";
+
+    public static final String ROUTER_FILESYSTEM_WRITABLE_CHECK = "filesystem_writable_check.py";
 }
\ No newline at end of file
diff --git a/core/src/main/java/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java b/core/src/main/java/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java
index 8f4670d..9a55d3b 100644
--- a/core/src/main/java/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java
+++ b/core/src/main/java/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java
@@ -44,6 +44,7 @@ import org.apache.cloudstack.diagnostics.DiagnosticsCommand;
 import org.apache.cloudstack.diagnostics.PrepareFilesAnswer;
 import org.apache.cloudstack.diagnostics.PrepareFilesCommand;
 import org.apache.cloudstack.utils.security.KeyStoreUtils;
+import org.apache.commons.lang.StringUtils;
 import org.apache.log4j.Logger;
 import org.joda.time.Duration;
 
@@ -65,6 +66,7 @@ import com.cloud.agent.api.routing.NetworkElementCommand;
 import com.cloud.agent.resource.virtualnetwork.facade.AbstractConfigItemFacade;
 import com.cloud.utils.ExecutionResult;
 import com.cloud.utils.NumbersUtil;
+import com.cloud.utils.Pair;
 import com.cloud.utils.exception.CloudRuntimeException;
 
 /**
@@ -310,6 +312,16 @@ public class VirtualRoutingResource {
 
     private GetRouterMonitorResultsAnswer execute(GetRouterMonitorResultsCommand cmd) {
         String routerIp = cmd.getAccessDetail(NetworkElementCommand.ROUTER_IP);
+        Pair<Boolean, String> fileSystemTestResult = checkRouterFileSystem(routerIp);
+        if (!fileSystemTestResult.first()) {
+            return new GetRouterMonitorResultsAnswer(cmd, false, null, fileSystemTestResult.second());
+        }
+
+        if (cmd.shouldValidateBasicTestsOnly()) {
+            // Basic tests (connectivity and file system checks) are already validated
+            return new GetRouterMonitorResultsAnswer(cmd, true, null, "success");
+        }
+
         String args = cmd.shouldPerformFreshChecks() ? "true" : "false";
         s_logger.info("Fetching health check result for " + routerIp + " and executing fresh checks: " + args);
         ExecutionResult result = _vrDeployer.executeInVR(routerIp, VRScripts.ROUTER_MONITOR_RESULTS, args);
@@ -327,6 +339,27 @@ public class VirtualRoutingResource {
         return parseLinesForHealthChecks(cmd, result.getDetails());
     }
 
+    private Pair<Boolean, String> checkRouterFileSystem(String routerIp) {
+        ExecutionResult fileSystemWritableTestResult = _vrDeployer.executeInVR(routerIp, VRScripts.ROUTER_FILESYSTEM_WRITABLE_CHECK, null);
+        if (fileSystemWritableTestResult.isSuccess()) {
+            s_logger.debug("Router connectivity and file system writable check passed");
+            return new Pair<Boolean, String>(true, "success");
+        }
+
+        String resultDetails = fileSystemWritableTestResult.getDetails();
+        s_logger.warn("File system writable check failed with details: " + resultDetails);
+        if (StringUtils.isNotBlank(resultDetails)) {
+            final String readOnlyFileSystemError = "Read-only file system";
+            if (resultDetails.contains(readOnlyFileSystemError)) {
+                resultDetails = "Read-only file system";
+            }
+        } else {
+            resultDetails = "No results available";
+        }
+
+        return new Pair<Boolean, String>(false, resultDetails);
+    }
+
     private GetRouterAlertsAnswer execute(GetRouterAlertsCommand cmd) {
 
         String routerIp = cmd.getAccessDetail(NetworkElementCommand.ROUTER_IP);
diff --git a/core/src/main/java/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java b/core/src/main/java/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java
index 910eb3d..6c5b55a 100644
--- a/core/src/main/java/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java
+++ b/core/src/main/java/com/cloud/storage/resource/StorageSubsystemCommandHandlerBase.java
@@ -99,10 +99,13 @@ public class StorageSubsystemCommandHandlerBase implements StorageSubsystemComma
             //copy volume from image cache to primary
             return processor.copyVolumeFromImageCacheToPrimary(cmd);
         } else if (srcData.getObjectType() == DataObjectType.VOLUME && srcData.getDataStore().getRole() == DataStoreRole.Primary) {
-            if (destData.getObjectType() == DataObjectType.VOLUME && srcData instanceof VolumeObjectTO && ((VolumeObjectTO)srcData).isDirectDownload()) {
-                return processor.copyVolumeFromPrimaryToPrimary(cmd);
-            } else if (destData.getObjectType() == DataObjectType.VOLUME) {
-                return processor.copyVolumeFromPrimaryToSecondary(cmd);
+            if (destData.getObjectType() == DataObjectType.VOLUME) {
+                if ((srcData instanceof VolumeObjectTO && ((VolumeObjectTO)srcData).isDirectDownload()) ||
+                        destData.getDataStore().getRole() == DataStoreRole.Primary) {
+                    return processor.copyVolumeFromPrimaryToPrimary(cmd);
+                } else {
+                    return processor.copyVolumeFromPrimaryToSecondary(cmd);
+                }
             } else if (destData.getObjectType() == DataObjectType.TEMPLATE) {
                 return processor.createTemplateFromVolume(cmd);
             }
diff --git a/core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java b/core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java
index ed49997..e8618d5 100644
--- a/core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java
+++ b/core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java
@@ -23,14 +23,20 @@ import com.cloud.agent.api.Command;
 
 public class CheckUrlCommand extends Command {
 
+    private String format;
     private String url;
 
+    public String getFormat() {
+        return format;
+    }
+
     public String getUrl() {
         return url;
     }
 
-    public CheckUrlCommand(final String url) {
+    public CheckUrlCommand(final String format,final String url) {
         super();
+        this.format = format;
         this.url = url;
     }
 
diff --git a/core/src/main/java/org/apache/cloudstack/agent/directdownload/DirectDownloadCommand.java b/core/src/main/java/org/apache/cloudstack/agent/directdownload/DirectDownloadCommand.java
index aafcb53..7e1ff0b 100644
--- a/core/src/main/java/org/apache/cloudstack/agent/directdownload/DirectDownloadCommand.java
+++ b/core/src/main/java/org/apache/cloudstack/agent/directdownload/DirectDownloadCommand.java
@@ -23,6 +23,9 @@ import java.util.Map;
 
 import org.apache.cloudstack.storage.command.StorageSubSystemCommand;
 import org.apache.cloudstack.storage.to.PrimaryDataStoreTO;
+import org.apache.cloudstack.storage.to.TemplateObjectTO;
+
+import com.cloud.storage.Storage;
 
 public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
 
@@ -32,6 +35,7 @@ public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
 
     private String url;
     private Long templateId;
+    private TemplateObjectTO destData;
     private PrimaryDataStoreTO destPool;
     private String checksum;
     private Map<String, String> headers;
@@ -39,11 +43,12 @@ public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
     private Integer soTimeout;
     private Integer connectionRequestTimeout;
     private Long templateSize;
-    private boolean iso;
+    private Storage.ImageFormat format;
 
     protected DirectDownloadCommand (final String url, final Long templateId, final PrimaryDataStoreTO destPool, final String checksum, final Map<String, String> headers, final Integer connectTimeout, final Integer soTimeout, final Integer connectionRequestTimeout) {
         this.url = url;
         this.templateId = templateId;
+        this.destData = destData;
         this.destPool = destPool;
         this.checksum = checksum;
         this.headers = headers;
@@ -60,6 +65,14 @@ public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
         return templateId;
     }
 
+    public TemplateObjectTO getDestData() {
+        return destData;
+    }
+
+    public void setDestData(TemplateObjectTO destData) {
+         this.destData = destData;
+    }
+
     public PrimaryDataStoreTO getDestPool() {
         return destPool;
     }
@@ -104,12 +117,12 @@ public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
         this.templateSize = templateSize;
     }
 
-    public boolean isIso() {
-        return iso;
+    public Storage.ImageFormat getFormat() {
+        return format;
     }
 
-    public void setIso(boolean iso) {
-        this.iso = iso;
+    public void setFormat(Storage.ImageFormat format) {
+        this.format = format;
     }
 
     @Override
@@ -120,4 +133,8 @@ public abstract class DirectDownloadCommand extends StorageSubSystemCommand {
     public boolean executeInSequence() {
         return false;
     }
+
+    public int getWaitInMillSeconds() {
+        return getWait() * 1000;
+    }
 }
diff --git a/core/src/main/java/org/apache/cloudstack/storage/to/PrimaryDataStoreTO.java b/core/src/main/java/org/apache/cloudstack/storage/to/PrimaryDataStoreTO.java
index 7dab8d9..0bb5b79 100644
--- a/core/src/main/java/org/apache/cloudstack/storage/to/PrimaryDataStoreTO.java
+++ b/core/src/main/java/org/apache/cloudstack/storage/to/PrimaryDataStoreTO.java
@@ -19,12 +19,13 @@
 
 package org.apache.cloudstack.storage.to;
 
+import java.util.Map;
+
+import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStore;
+
 import com.cloud.agent.api.to.DataStoreTO;
 import com.cloud.storage.DataStoreRole;
 import com.cloud.storage.Storage.StoragePoolType;
-import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStore;
-
-import java.util.Map;
 
 public class PrimaryDataStoreTO implements DataStoreTO {
     public static final String MANAGED = PrimaryDataStore.MANAGED;
diff --git a/core/src/main/java/org/apache/cloudstack/storage/to/VolumeObjectTO.java b/core/src/main/java/org/apache/cloudstack/storage/to/VolumeObjectTO.java
index a076b80..36c35e5 100644
--- a/core/src/main/java/org/apache/cloudstack/storage/to/VolumeObjectTO.java
+++ b/core/src/main/java/org/apache/cloudstack/storage/to/VolumeObjectTO.java
@@ -43,6 +43,7 @@ public class VolumeObjectTO implements DataTO {
     private String chainInfo;
     private Storage.ImageFormat format;
     private Storage.ProvisioningType provisioningType;
+    private Long poolId;
     private long id;
 
     private Long deviceId;
@@ -89,6 +90,7 @@ public class VolumeObjectTO implements DataTO {
         setId(volume.getId());
         format = volume.getFormat();
         provisioningType = volume.getProvisioningType();
+        poolId = volume.getPoolId();
         bytesReadRate = volume.getBytesReadRate();
         bytesReadRateMax = volume.getBytesReadRateMax();
         bytesReadRateMaxLength = volume.getBytesReadRateMaxLength();
@@ -227,6 +229,14 @@ public class VolumeObjectTO implements DataTO {
         this.provisioningType = provisioningType;
     }
 
+    public Long getPoolId(){
+        return poolId;
+    }
+
+    public void setPoolId(Long poolId){
+        this.poolId = poolId;
+    }
+
     @Override
     public String toString() {
         return new StringBuilder("volumeTO[uuid=").append(uuid).append("|path=").append(path).append("|datastore=").append(dataStore).append("]").toString();
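    Note: VolumeObjectTO now carries the volume's pool id across the wire. A small consumer sketch;
    the class and its use are illustrative only:

        import org.apache.cloudstack.storage.to.VolumeObjectTO;

        class VolumePoolIdSketch {
            static String describe(VolumeObjectTO volume) {
                Long poolId = volume.getPoolId();   // may be null while the volume is only Allocated
                return volume.getUuid() + " on pool " + (poolId != null ? poolId : "<none>");
            }
        }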
diff --git a/engine/api/src/main/java/com/cloud/vm/VirtualMachineManager.java b/engine/api/src/main/java/com/cloud/vm/VirtualMachineManager.java
index 463d3a7..3ca3008 100644
--- a/engine/api/src/main/java/com/cloud/vm/VirtualMachineManager.java
+++ b/engine/api/src/main/java/com/cloud/vm/VirtualMachineManager.java
@@ -58,7 +58,13 @@ public interface VirtualMachineManager extends Manager {
             "The default label name for the config drive", false);
 
     ConfigKey<Boolean> VmConfigDriveOnPrimaryPool = new ConfigKey<>("Advanced", Boolean.class, "vm.configdrive.primarypool.enabled", "false",
-            "If config drive need to be created and hosted on primary storage pool. Currently only supported for KVM.", true);
+            "If config drive need to be created and hosted on primary storage pool. Currently only supported for KVM.", true, ConfigKey.Scope.Zone);
+
+    ConfigKey<Boolean> VmConfigDriveUseHostCacheOnUnsupportedPool = new ConfigKey<>("Advanced", Boolean.class, "vm.configdrive.use.host.cache.on.unsupported.pool", "true",
+            "If true, config drive is created on the host cache storage when vm.configdrive.primarypool.enabled is true and the primary pool type doesn't support config drive.", true, ConfigKey.Scope.Zone);
+
+    ConfigKey<Boolean> VmConfigDriveForceHostCacheUse = new ConfigKey<>("Advanced", Boolean.class, "vm.configdrive.force.host.cache.use", "false",
+            "If true, config drive is forced to create on the host cache storage. Currently only supported for KVM.", true, ConfigKey.Scope.Zone);
 
     ConfigKey<Boolean> ResoureCountRunningVMsonly = new ConfigKey<Boolean>("Advanced", Boolean.class, "resource.count.running.vms.only", "false",
             "Count the resources of only running VMs in resource limitation.", true);
diff --git a/engine/api/src/main/java/org/apache/cloudstack/engine/orchestration/service/VolumeOrchestrationService.java b/engine/api/src/main/java/org/apache/cloudstack/engine/orchestration/service/VolumeOrchestrationService.java
index ee264ac..c6b96bc 100644
--- a/engine/api/src/main/java/org/apache/cloudstack/engine/orchestration/service/VolumeOrchestrationService.java
+++ b/engine/api/src/main/java/org/apache/cloudstack/engine/orchestration/service/VolumeOrchestrationService.java
@@ -33,6 +33,7 @@ import com.cloud.dc.Pod;
 import com.cloud.deploy.DeployDestination;
 import com.cloud.exception.ConcurrentOperationException;
 import com.cloud.exception.InsufficientStorageCapacityException;
+import com.cloud.exception.StorageAccessException;
 import com.cloud.exception.StorageUnavailableException;
 import com.cloud.host.Host;
 import com.cloud.hypervisor.Hypervisor.HypervisorType;
@@ -104,6 +105,8 @@ public interface VolumeOrchestrationService {
 
     void release(VirtualMachineProfile profile);
 
+    void release(long vmId, long hostId);
+
     void cleanupVolumes(long vmId) throws ConcurrentOperationException;
 
     void revokeAccess(DataObject dataObject, Host host, DataStore dataStore);
@@ -116,7 +119,7 @@ public interface VolumeOrchestrationService {
 
     void prepareForMigration(VirtualMachineProfile vm, DeployDestination dest);
 
-    void prepare(VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, InsufficientStorageCapacityException, ConcurrentOperationException;
+    void prepare(VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, InsufficientStorageCapacityException, ConcurrentOperationException, StorageAccessException;
 
     boolean canVmRestartOnAnotherServer(long vmId);
 
diff --git a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/DataStoreDriver.java b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/DataStoreDriver.java
index 3d73721..b197afa 100644
--- a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/DataStoreDriver.java
+++ b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/DataStoreDriver.java
@@ -25,6 +25,7 @@ import org.apache.cloudstack.storage.command.CommandResult;
 
 import com.cloud.agent.api.to.DataStoreTO;
 import com.cloud.agent.api.to.DataTO;
+import com.cloud.host.Host;
 
 public interface DataStoreDriver {
     Map<String, String> getCapabilities();
@@ -37,7 +38,9 @@ public interface DataStoreDriver {
 
     void deleteAsync(DataStore store, DataObject data, AsyncCompletionCallback<CommandResult> callback);
 
-    void copyAsync(DataObject srcdata, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback);
+    void copyAsync(DataObject srcData, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback);
+
+    void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback);
 
     boolean canCopy(DataObject srcData, DataObject destData);
 
diff --git a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/PrimaryDataStoreDriver.java b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/PrimaryDataStoreDriver.java
index 6021a43..622dda3 100644
--- a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/PrimaryDataStoreDriver.java
+++ b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/PrimaryDataStoreDriver.java
@@ -23,6 +23,7 @@ import org.apache.cloudstack.storage.command.CommandResult;
 
 import com.cloud.host.Host;
 import com.cloud.storage.StoragePool;
+import com.cloud.utils.Pair;
 
 public interface PrimaryDataStoreDriver extends DataStoreDriver {
     enum QualityOfServiceState { MIGRATION, NO_MIGRATION }
@@ -72,4 +73,34 @@ public interface PrimaryDataStoreDriver extends DataStoreDriver {
     void revertSnapshot(SnapshotInfo snapshotOnImageStore, SnapshotInfo snapshotOnPrimaryStore, AsyncCompletionCallback<CommandResult> callback);
 
     void handleQualityOfServiceForVolumeMigration(VolumeInfo volumeInfo, QualityOfServiceState qualityOfServiceState);
+
+    /**
+     * intended for managed storage
+     * returns true if the storage can provide the stats (capacity and used bytes)
+     */
+    boolean canProvideStorageStats();
+
+    /**
+     * intended for managed storage
+     * returns the total capacity and used size in bytes
+     */
+    Pair<Long, Long> getStorageStats(StoragePool storagePool);
+
+    /**
+     * intended for managed storage
+     * returns true if the storage can provide the volume stats (physical and virtual size)
+     */
+    boolean canProvideVolumeStats();
+
+    /**
+     * intended for managed storage
+     * returns the volume's physical and virtual size in bytes
+     */
+    Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId);
+
+    /**
+     * intended for managed storage
+     * returns true if the host can access the storage pool
+     */
+    boolean canHostAccessStoragePool(Host host, StoragePool pool);
 }
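    Note: the new hooks let a managed-storage driver report capacity, volume stats and host access
    itself instead of relying on the hypervisor. A sketch of the stats side, with placeholder queries
    standing in for the array's API:

        import com.cloud.storage.StoragePool;
        import com.cloud.utils.Pair;

        abstract class ManagedDriverStatsSketch {
            public boolean canProvideStorageStats() {
                return true;                                    // this driver can query the array API
            }

            public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
                long capacityBytes = queryCapacityBytes(storagePool);
                long usedBytes = queryUsedBytes(storagePool);
                return new Pair<>(capacityBytes, usedBytes);    // (total, used) as documented above
            }

            abstract long queryCapacityBytes(StoragePool pool); // e.g. a PowerFlex REST call
            abstract long queryUsedBytes(StoragePool pool);
        }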
diff --git a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/TemplateDataFactory.java b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/TemplateDataFactory.java
index 4d258f3..9584d7c 100644
--- a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/TemplateDataFactory.java
+++ b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/TemplateDataFactory.java
@@ -23,6 +23,8 @@ import java.util.List;
 import com.cloud.storage.DataStoreRole;
 
 public interface TemplateDataFactory {
+    TemplateInfo getTemplate(long templateId);
+
     TemplateInfo getTemplate(long templateId, DataStore store);
 
     TemplateInfo getReadyTemplateOnImageStore(long templateId, Long zoneId);
@@ -39,6 +41,8 @@ public interface TemplateDataFactory {
 
     TemplateInfo getReadyBypassedTemplateOnPrimaryStore(long templateId, Long poolId, Long hostId);
 
+    TemplateInfo getReadyBypassedTemplateOnManagedStorage(long templateId, TemplateInfo templateOnPrimary, Long poolId, Long hostId);
+
     boolean isTemplateMarkedForDirectDownload(long templateId);
 
     TemplateInfo getTemplateOnPrimaryStorage(long templateId, DataStore store, String configuration);
diff --git a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/TemplateInfo.java b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/TemplateInfo.java
index 1e4a1b7..cc8e111 100644
--- a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/TemplateInfo.java
+++ b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/TemplateInfo.java
@@ -28,6 +28,8 @@ public interface TemplateInfo extends DataObject, VirtualMachineTemplate {
 
     boolean isDirectDownload();
 
+    boolean canBeDeletedFromDataStore();
+
     boolean isDeployAsIs();
 
     String getDeployAsIsConfiguration();
diff --git a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeInfo.java b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeInfo.java
index b138122..eafc3b7 100644
--- a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeInfo.java
+++ b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeInfo.java
@@ -22,6 +22,7 @@ import com.cloud.agent.api.Answer;
 import com.cloud.hypervisor.Hypervisor.HypervisorType;
 import com.cloud.offering.DiskOffering.DiskCacheMode;
 import com.cloud.storage.MigrationOptions;
+import com.cloud.storage.Storage;
 import com.cloud.storage.Volume;
 import com.cloud.vm.VirtualMachine;
 
@@ -35,6 +36,8 @@ public interface VolumeInfo extends DataObject, Volume {
 
     HypervisorType getHypervisorType();
 
+    Storage.StoragePoolType getStoragePoolType();
+
     Long getLastPoolId();
 
     String getAttachedVmName();
diff --git a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeService.java b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeService.java
index e8b533d..d194bbb 100644
--- a/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeService.java
+++ b/engine/api/src/main/java/org/apache/cloudstack/engine/subsystem/api/storage/VolumeService.java
@@ -25,6 +25,7 @@ import org.apache.cloudstack.framework.async.AsyncCallFuture;
 import org.apache.cloudstack.storage.command.CommandResult;
 
 import com.cloud.agent.api.to.VirtualMachineTO;
+import com.cloud.exception.StorageAccessException;
 import com.cloud.host.Host;
 import com.cloud.hypervisor.Hypervisor.HypervisorType;
 import com.cloud.offering.DiskOffering;
@@ -62,13 +63,17 @@ public interface VolumeService {
      */
     AsyncCallFuture<VolumeApiResult> expungeVolumeAsync(VolumeInfo volume);
 
+    void ensureVolumeIsExpungeReady(long volumeId);
+
     boolean cloneVolume(long volumeId, long baseVolId);
 
     AsyncCallFuture<VolumeApiResult> createVolumeFromSnapshot(VolumeInfo volume, DataStore store, SnapshotInfo snapshot);
 
     VolumeEntity getVolumeEntity(long volumeId);
 
-    AsyncCallFuture<VolumeApiResult> createManagedStorageVolumeFromTemplateAsync(VolumeInfo volumeInfo, long destDataStoreId, TemplateInfo srcTemplateInfo, long destHostId);
+    TemplateInfo createManagedStorageTemplate(long srcTemplateId, long destDataStoreId, long destHostId) throws StorageAccessException;
+
+    AsyncCallFuture<VolumeApiResult> createManagedStorageVolumeFromTemplateAsync(VolumeInfo volumeInfo, long destDataStoreId, TemplateInfo srcTemplateInfo, long destHostId) throws StorageAccessException;
 
     AsyncCallFuture<VolumeApiResult> createVolumeFromTemplateAsync(VolumeInfo volume, long dataStoreId, TemplateInfo template);
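    Note: createManagedStorageTemplate() spools a direct-download template onto the managed pool once,
    after which volumes are created from that copy. A sketch of the intended call order; the surrounding
    class is illustrative and both calls may raise StorageAccessException:

        import org.apache.cloudstack.engine.subsystem.api.storage.TemplateInfo;
        import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
        import org.apache.cloudstack.engine.subsystem.api.storage.VolumeService;
        import com.cloud.exception.StorageAccessException;

        class ManagedTemplateSpoolSketch {
            void provision(VolumeService volService, VolumeInfo volume, long srcTemplateId,
                           long poolId, long hostId) throws StorageAccessException {
                TemplateInfo templateOnPool = volService.createManagedStorageTemplate(srcTemplateId, poolId, hostId);
                volService.createManagedStorageVolumeFromTemplateAsync(volume, poolId, templateOnPool, hostId);
            }
        }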
 
diff --git a/engine/components-api/src/main/java/com/cloud/resource/ResourceManager.java b/engine/components-api/src/main/java/com/cloud/resource/ResourceManager.java
index db7a27f..ade2eeb 100755
--- a/engine/components-api/src/main/java/com/cloud/resource/ResourceManager.java
+++ b/engine/components-api/src/main/java/com/cloud/resource/ResourceManager.java
@@ -21,6 +21,9 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 
+import org.apache.cloudstack.framework.config.ConfigKey;
+import org.apache.cloudstack.framework.config.Configurable;
+
 import com.cloud.agent.api.StartupCommand;
 import com.cloud.agent.api.StartupRoutingCommand;
 import com.cloud.agent.api.VgpuTypesInfo;
@@ -38,8 +41,6 @@ import com.cloud.host.Status;
 import com.cloud.hypervisor.Hypervisor.HypervisorType;
 import com.cloud.resource.ResourceState.Event;
 import com.cloud.utils.fsm.NoTransitionException;
-import org.apache.cloudstack.framework.config.ConfigKey;
-import org.apache.cloudstack.framework.config.Configurable;
 
 /**
  * ResourceManager manages how physical resources are organized within the
@@ -204,7 +205,7 @@ public interface ResourceManager extends ResourceService, Configurable {
      */
     HashMap<String, HashMap<String, VgpuTypesInfo>> getGPUStatistics(HostVO host);
 
-    HostVO findOneRandomRunningHostByHypervisor(HypervisorType type);
+    HostVO findOneRandomRunningHostByHypervisor(HypervisorType type, Long dcId);
 
     boolean cancelMaintenance(final long hostId);
 }
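    Note: findOneRandomRunningHostByHypervisor() now takes a zone id so template size discovery can be
    limited to the zones given at registration. A usage sketch; passing null to mean "any zone" is an
    assumption based on the change description:

        import com.cloud.host.HostVO;
        import com.cloud.hypervisor.Hypervisor.HypervisorType;
        import com.cloud.resource.ResourceManager;

        class SizeDiscoveryHostSketch {
            HostVO pickHost(ResourceManager resourceManager, Long zoneId) {
                // zoneId == null: template registered without zones, any running KVM host will do
                return resourceManager.findOneRandomRunningHostByHypervisor(HypervisorType.KVM, zoneId);
            }
        }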
diff --git a/engine/components-api/src/main/java/com/cloud/storage/StorageManager.java b/engine/components-api/src/main/java/com/cloud/storage/StorageManager.java
index 7455f22..b20db8d 100644
--- a/engine/components-api/src/main/java/com/cloud/storage/StorageManager.java
+++ b/engine/components-api/src/main/java/com/cloud/storage/StorageManager.java
@@ -109,6 +109,24 @@ public interface StorageManager extends StorageService {
             ConfigKey.Scope.Cluster,
             null);
 
+    ConfigKey<Integer> STORAGE_POOL_DISK_WAIT = new ConfigKey<>(Integer.class,
+            "storage.pool.disk.wait",
+            "Storage",
+            "60",
+            "Timeout (in secs) for the storage pool disk (of managed pool) to become available in the host. Currently only supported for PowerFlex.",
+            true,
+            ConfigKey.Scope.StoragePool,
+            null);
+
+    ConfigKey<Integer> STORAGE_POOL_CLIENT_TIMEOUT = new ConfigKey<>(Integer.class,
+            "storage.pool.client.timeout",
+            "Storage",
+            "60",
+            "Timeout (in secs) for the storage pool client connection timeout (for managed pools). Currently only supported for PowerFlex.",
+            true,
+            ConfigKey.Scope.StoragePool,
+            null);
+
     ConfigKey<Integer> PRIMARY_STORAGE_DOWNLOAD_WAIT = new ConfigKey<Integer>("Storage", Integer.class, "primary.storage.download.wait", "10800",
             "In second, timeout for download template to primary storage", false);
 
@@ -144,6 +162,8 @@ public interface StorageManager extends StorageService {
 
     Pair<Long, Answer> sendToPool(StoragePool pool, long[] hostIdsToTryFirst, List<Long> hostIdsToAvoid, Command cmd) throws StorageUnavailableException;
 
+    public Answer getVolumeStats(StoragePool pool, Command cmd);
+
     /**
      * Checks if a host has running VMs that are using its local storage pool.
      * @return true if local storage is active on the host
@@ -172,6 +192,14 @@ public interface StorageManager extends StorageService {
 
     StoragePoolVO findLocalStorageOnHost(long hostId);
 
+    Host findUpAndEnabledHostWithAccessToStoragePools(List<Long> poolIds);
+
+    List<StoragePoolHostVO> findStoragePoolsConnectedToHost(long hostId);
+
+    boolean canHostAccessStoragePool(Host host, StoragePool pool);
+
+    Host getHost(long hostId);
+
     Host updateSecondaryStorage(long secStorageId, String newUrl);
 
     void removeStoragePoolFromCluster(long hostId, String iScsiName, StoragePool storagePool);
@@ -210,7 +238,9 @@ public interface StorageManager extends StorageService {
      */
     boolean storagePoolHasEnoughSpace(List<Volume> volume, StoragePool pool, Long clusterId);
 
-    boolean storagePoolHasEnoughSpaceForResize(StoragePool pool, long currentSize, long newSiz);
+    boolean storagePoolHasEnoughSpaceForResize(StoragePool pool, long currentSize, long newSize);
+
+    boolean storagePoolCompatibleWithVolumePool(StoragePool pool, Volume volume);
 
     boolean isStoragePoolComplaintWithStoragePolicy(List<Volume> volumes, StoragePool pool) throws StorageUnavailableException;
 
@@ -218,6 +248,8 @@ public interface StorageManager extends StorageService {
 
     void connectHostToSharedPool(long hostId, long poolId) throws StorageUnavailableException, StorageConflictException;
 
+    void disconnectHostFromSharedPool(long hostId, long poolId) throws StorageUnavailableException, StorageConflictException;
+
     void createCapacityEntry(long poolId);
 
     DataStore createLocalStorage(Host host, StoragePoolInfo poolInfo) throws ConnectionException;
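    Note: both new keys are StoragePool scoped; the disk wait value is later copied into the disk
    details sent to the KVM agent (see the VolumeOrchestrator hunk further down). A sketch of reading
    the per-pool overrides, with the declared defaults as fallback:

        import com.cloud.storage.StorageManager;

        class PowerFlexTimeoutsSketch {
            static int diskWaitSeconds(long poolId) {
                Integer wait = StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(poolId);
                return wait != null ? wait : 60;        // 60s is the declared default
            }

            static int clientTimeoutSeconds(long poolId) {
                Integer timeout = StorageManager.STORAGE_POOL_CLIENT_TIMEOUT.valueIn(poolId);
                return timeout != null ? timeout : 60;
            }
        }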
diff --git a/engine/components-api/src/main/java/com/cloud/storage/StorageUtil.java b/engine/components-api/src/main/java/com/cloud/storage/StorageUtil.java
index 97354e2..044ae3c 100644
--- a/engine/components-api/src/main/java/com/cloud/storage/StorageUtil.java
+++ b/engine/components-api/src/main/java/com/cloud/storage/StorageUtil.java
@@ -16,6 +16,14 @@
 // under the License.
 package com.cloud.storage;
 
+import java.util.List;
+
+import javax.inject.Inject;
+
+import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
+import org.apache.commons.collections.CollectionUtils;
+
 import com.cloud.dc.ClusterVO;
 import com.cloud.dc.dao.ClusterDao;
 import com.cloud.host.HostVO;
@@ -25,13 +33,6 @@ import com.cloud.storage.dao.VolumeDao;
 import com.cloud.vm.VMInstanceVO;
 import com.cloud.vm.dao.VMInstanceDao;
 
-import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
-import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
-import org.apache.commons.collections.CollectionUtils;
-
-import java.util.List;
-import javax.inject.Inject;
-
 public class StorageUtil {
     @Inject private ClusterDao clusterDao;
     @Inject private HostDao hostDao;
diff --git a/engine/components-api/src/main/java/com/cloud/vm/VirtualMachineProfileImpl.java b/engine/components-api/src/main/java/com/cloud/vm/VirtualMachineProfileImpl.java
index 4d03396..efe4e2e 100644
--- a/engine/components-api/src/main/java/com/cloud/vm/VirtualMachineProfileImpl.java
+++ b/engine/components-api/src/main/java/com/cloud/vm/VirtualMachineProfileImpl.java
@@ -22,7 +22,9 @@ import java.util.List;
 import java.util.Map;
 
 import com.cloud.agent.api.to.DiskTO;
+import com.cloud.host.Host;
 import com.cloud.hypervisor.Hypervisor.HypervisorType;
+import com.cloud.network.element.NetworkElement;
 import com.cloud.offering.ServiceOffering;
 import com.cloud.service.ServiceOfferingVO;
 import com.cloud.template.VirtualMachineTemplate;
@@ -49,6 +51,8 @@ public class VirtualMachineProfileImpl implements VirtualMachineProfile {
     Float cpuOvercommitRatio = 1.0f;
     Float memoryOvercommitRatio = 1.0f;
 
+    Host _host = null;
+
     VirtualMachine.Type _type;
 
     List<String[]> vmData = null;
@@ -57,6 +61,7 @@ public class VirtualMachineProfileImpl implements VirtualMachineProfile {
     String configDriveIsoBaseLocation = "/tmp/";
     String configDriveIsoRootFolder = null;
     String configDriveIsoFile = null;
+    NetworkElement.Location configDriveLocation = NetworkElement.Location.SECONDARY;
 
     public VirtualMachineProfileImpl(VirtualMachine vm, VirtualMachineTemplate template, ServiceOffering offering, Account owner, Map<Param, Object> params) {
         _vm = vm;
@@ -220,6 +225,19 @@ public class VirtualMachineProfileImpl implements VirtualMachineProfile {
     }
 
     @Override
+    public Long getHostId() {
+        if (_host != null) {
+            return _host.getId();
+        }
+        return _vm.getHostId();
+    }
+
+    @Override
+    public void setHost(Host host) {
+        this._host = host;
+    }
+
+    @Override
     public String getHostName() {
         return _vm.getHostName();
     }
@@ -311,4 +329,14 @@ public class VirtualMachineProfileImpl implements VirtualMachineProfile {
     public void setConfigDriveIsoFile(String isoFile) {
         this.configDriveIsoFile = isoFile;
     }
+
+    @Override
+    public NetworkElement.Location getConfigDriveLocation() {
+        return configDriveLocation;
+    }
+
+    @Override
+    public void setConfigDriveLocation(NetworkElement.Location location) {
+        this.configDriveLocation = location;
+    }
 }
diff --git a/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java b/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java
index de1ef20..dfec0b1 100755
--- a/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java
+++ b/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java
@@ -156,6 +156,7 @@ import com.cloud.exception.InsufficientServerCapacityException;
 import com.cloud.exception.InvalidParameterValueException;
 import com.cloud.exception.OperationTimedoutException;
 import com.cloud.exception.ResourceUnavailableException;
+import com.cloud.exception.StorageAccessException;
 import com.cloud.exception.StorageUnavailableException;
 import com.cloud.ha.HighAvailabilityManager;
 import com.cloud.ha.HighAvailabilityManager.WorkType;
@@ -743,12 +744,11 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
         } catch (final InsufficientCapacityException e) {
             throw new CloudRuntimeException("Unable to start a VM due to insufficient capacity", e).add(VirtualMachine.class, vmUuid);
         } catch (final ResourceUnavailableException e) {
-            if(e.getScope() != null && e.getScope().equals(VirtualRouter.class)){
+            if (e.getScope() != null && e.getScope().equals(VirtualRouter.class)){
                 throw new CloudRuntimeException("Network is unavailable. Please contact administrator", e).add(VirtualMachine.class, vmUuid);
             }
             throw new CloudRuntimeException("Unable to start a VM due to unavailable resources", e).add(VirtualMachine.class, vmUuid);
         }
-
     }
 
     protected boolean checkWorkItems(final VMInstanceVO vm, final State state) throws ConcurrentOperationException {
@@ -1036,6 +1036,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
 
             int retry = StartRetry.value();
             while (retry-- != 0) { // It's != so that it can match -1.
+                s_logger.debug("VM start attempt #" + (StartRetry.value() - retry));
 
                 if (reuseVolume) {
                     // edit plan if this vm's ROOT volume is in READY state already
@@ -1115,7 +1116,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
                 if (dest != null) {
                     avoids.addHost(dest.getHost().getId());
                     if (!template.isDeployAsIs()) {
-                        journal.record("Deployment found ", vmProfile, dest);
+                        journal.record("Deployment found - Attempt #" + (StartRetry.value() - retry), vmProfile, dest);
                     }
                 }
 
@@ -1148,7 +1149,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
 
                 try {
                     resetVmNicsDeviceId(vm.getId());
-                    _networkMgr.prepare(vmProfile, new DeployDestination(dest.getDataCenter(), dest.getPod(), null, null, dest.getStorageForDisks(), dest.isDisplayStorage()), ctx);
+                    _networkMgr.prepare(vmProfile, dest, ctx);
                     if (vm.getHypervisorType() != HypervisorType.BareMetal) {
                         volumeMgr.prepare(vmProfile, dest);
                     }
@@ -1305,6 +1306,8 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
                 } catch (final NoTransitionException e) {
                     s_logger.error("Failed to start instance " + vm, e);
                     throw new AgentUnavailableException("Unable to start instance due to " + e.getMessage(), destHostId, e);
+                } catch (final StorageAccessException e) {
+                    s_logger.warn("Unable to access storage on host", e);
                 } finally {
                     if (startedVm == null && canRetry) {
                         final Step prevStep = work.getStep();
@@ -1632,6 +1635,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
                 info.put(DiskTO.STORAGE_HOST, storagePool.getHostAddress());
                 info.put(DiskTO.STORAGE_PORT, String.valueOf(storagePool.getPort()));
                 info.put(DiskTO.IQN, volume.get_iScsiName());
+                info.put(DiskTO.PROTOCOL_TYPE, (volume.getPoolType() != null) ? volume.getPoolType().toString() : null);
 
                 volumesToDisconnect.add(info);
             }
@@ -1762,20 +1766,34 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
                 }
             }
         } finally {
-            try {
-                _networkMgr.release(profile, cleanUpEvenIfUnableToStop);
-                s_logger.debug("Successfully released network resources for the vm " + vm);
-            } catch (final Exception e) {
-                s_logger.warn("Unable to release some network resources.", e);
-            }
-
-            volumeMgr.release(profile);
-            s_logger.debug(String.format("Successfully cleaned up resources for the VM %s in %s state", vm, state));
+            releaseVmResources(profile, cleanUpEvenIfUnableToStop);
         }
 
         return true;
     }
 
+    protected void releaseVmResources(final VirtualMachineProfile profile, final boolean forced) {
+        final VirtualMachine vm = profile.getVirtualMachine();
+        final State state = vm.getState();
+        try {
+            _networkMgr.release(profile, forced);
+            s_logger.debug(String.format("Successfully released network resources for the VM %s in %s state", vm, state));
+        } catch (final Exception e) {
+            s_logger.warn(String.format("Unable to release some network resources for the VM %s in %s state", vm, state), e);
+        }
+
+        try {
+            if (vm.getHypervisorType() != HypervisorType.BareMetal) {
+                volumeMgr.release(profile);
+                s_logger.debug(String.format("Successfully released storage resources for the VM %s in %s state", vm, state));
+            }
+        } catch (final Exception e) {
+            s_logger.warn(String.format("Unable to release storage resources for the VM %s in %s state", vm, state), e);
+        }
+
+        s_logger.debug(String.format("Successfully cleaned up resources for the VM %s in %s state", vm, state));
+    }
+
     @Override
     public void advanceStop(final String vmUuid, final boolean cleanUpEvenIfUnableToStop)
             throws AgentUnavailableException, OperationTimedoutException, ConcurrentOperationException {
@@ -1985,21 +2003,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
             s_logger.debug(vm + " is stopped on the host.  Proceeding to release resource held.");
         }
 
-        try {
-            _networkMgr.release(profile, cleanUpEvenIfUnableToStop);
-            s_logger.debug("Successfully released network resources for the vm " + vm);
-        } catch (final Exception e) {
-            s_logger.warn("Unable to release some network resources.", e);
-        }
-
-        try {
-            if (vm.getHypervisorType() != HypervisorType.BareMetal) {
-                volumeMgr.release(profile);
-                s_logger.debug("Successfully released storage resources for the vm " + vm);
-            }
-        } catch (final Exception e) {
-            s_logger.warn("Unable to release storage resources.", e);
-        }
+        releaseVmResources(profile, cleanUpEvenIfUnableToStop);
 
         try {
             if (work != null) {
@@ -2603,11 +2607,14 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
         }
 
         final VirtualMachineProfile vmSrc = new VirtualMachineProfileImpl(vm);
+        vmSrc.setHost(fromHost);
         for (final NicProfile nic : _networkMgr.getNicProfiles(vm)) {
             vmSrc.addNic(nic);
         }
 
         final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm, null, _offeringDao.findById(vm.getId(), vm.getServiceOfferingId()), null, null);
+        profile.setHost(dest.getHost());
+
         _networkMgr.prepareNicForMigration(profile, dest);
         volumeMgr.prepareForMigration(profile, dest);
         profile.setConfigDriveLabel(VmConfigDriveLabel.value());
@@ -2635,6 +2642,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
         } finally {
             if (pfma == null) {
                 _networkMgr.rollbackNicForMigration(vmSrc, profile);
+                volumeMgr.release(vm.getId(), dstHostId);
                 work.setStep(Step.Done);
                 _workDao.update(work.getId(), work);
             }
@@ -2644,15 +2652,21 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
         try {
             if (vm == null || vm.getHostId() == null || vm.getHostId() != srcHostId || !changeState(vm, Event.MigrationRequested, dstHostId, work, Step.Migrating)) {
                 _networkMgr.rollbackNicForMigration(vmSrc, profile);
+                if (vm != null) {
+                    volumeMgr.release(vm.getId(), dstHostId);
+                }
+
                 s_logger.info("Migration cancelled because state has changed: " + vm);
                 throw new ConcurrentOperationException("Migration cancelled because state has changed: " + vm);
             }
         } catch (final NoTransitionException e1) {
             _networkMgr.rollbackNicForMigration(vmSrc, profile);
+            volumeMgr.release(vm.getId(), dstHostId);
             s_logger.info("Migration cancelled because " + e1.getMessage());
             throw new ConcurrentOperationException("Migration cancelled because " + e1.getMessage());
         } catch (final CloudRuntimeException e2) {
             _networkMgr.rollbackNicForMigration(vmSrc, profile);
+            volumeMgr.release(vm.getId(), dstHostId);
             s_logger.info("Migration cancelled because " + e2.getMessage());
             work.setStep(Step.Done);
             _workDao.update(work.getId(), work);
@@ -2720,6 +2734,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
             if (!migrated) {
                 s_logger.info("Migration was unsuccessful.  Cleaning up: " + vm);
                 _networkMgr.rollbackNicForMigration(vmSrc, profile);
+                volumeMgr.release(vm.getId(), dstHostId);
 
                 _alertMgr.sendAlert(alertType, fromHost.getDataCenterId(), fromHost.getPodId(),
                         "Unable to migrate vm " + vm.getInstanceName() + " from host " + fromHost.getName() + " in zone " + dest.getDataCenter().getName() + " and pod " +
@@ -2737,6 +2752,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
                 }
             } else {
                 _networkMgr.commitNicForMigration(vmSrc, profile);
+                volumeMgr.release(vm.getId(), srcHostId);
                 _networkMgr.setHypervisorHostname(profile, dest, true);
             }
 
@@ -3026,8 +3042,16 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
         final Cluster cluster = _clusterDao.findById(destHost.getClusterId());
         final DeployDestination destination = new DeployDestination(dc, pod, cluster, destHost);
 
+        final VirtualMachineProfile vmSrc = new VirtualMachineProfileImpl(vm);
+        vmSrc.setHost(srcHost);
+        for (final NicProfile nic : _networkMgr.getNicProfiles(vm)) {
+            vmSrc.addNic(nic);
+        }
+
+        final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm, null, _offeringDao.findById(vm.getId(), vm.getServiceOfferingId()), null, null);
+        profile.setHost(destHost);
+
         // Create a map of which volume should go in which storage pool.
-        final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
         final Map<Volume, StoragePool> volumeToPoolMap = createMappingVolumeAndStoragePool(profile, destHost, volumeToPool);
 
         // If none of the volumes have to be migrated, fail the call. Administrator needs to make a call for migrating
@@ -3055,7 +3079,6 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
         work.setResourceId(destHostId);
         work = _workDao.persist(work);
 
-
         // Put the vm in migrating state.
         vm.setLastHostId(srcHostId);
         vm.setPodIdToDeployIn(destHost.getPodId());
@@ -3127,6 +3150,9 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
         } finally {
             if (!migrated) {
                 s_logger.info("Migration was unsuccessful.  Cleaning up: " + vm);
+                _networkMgr.rollbackNicForMigration(vmSrc, profile);
+                volumeMgr.release(vm.getId(), destHostId);
+
                 _alertMgr.sendAlert(alertType, srcHost.getDataCenterId(), srcHost.getPodId(),
                         "Unable to migrate vm " + vm.getInstanceName() + " from host " + srcHost.getName() + " in zone " + dc.getName() + " and pod " + dc.getName(),
                         "Migrate Command failed.  Please check logs.");
@@ -3141,6 +3167,8 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
                 }
                 _networkMgr.setHypervisorHostname(profile, destination, false);
             } else {
+                _networkMgr.commitNicForMigration(vmSrc, profile);
+                volumeMgr.release(vm.getId(), srcHostId);
                 _networkMgr.setHypervisorHostname(profile, destination, true);
             }
 
@@ -3415,7 +3443,7 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
     ResourceUnavailableException {
         final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
         // if there are active vm snapshots task, state change is not allowed
-        if(_vmSnapshotMgr.hasActiveVMSnapshotTasks(vm.getId())){
+        if (_vmSnapshotMgr.hasActiveVMSnapshotTasks(vm.getId())) {
             s_logger.error("Unable to reboot VM " + vm + " due to: " + vm.getInstanceName() + " has active VM snapshots tasks");
             throw new CloudRuntimeException("Unable to reboot VM " + vm + " due to: " + vm.getInstanceName() + " has active VM snapshots tasks");
         }
@@ -4623,11 +4651,11 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
 
     @Override
     public ConfigKey<?>[] getConfigKeys() {
-        return new ConfigKey<?>[] {ClusterDeltaSyncInterval, StartRetry, VmDestroyForcestop, VmOpCancelInterval, VmOpCleanupInterval, VmOpCleanupWait,
-            VmOpLockStateRetry,
-            VmOpWaitInterval, ExecuteInSequence, VmJobCheckInterval, VmJobTimeout, VmJobStateReportInterval, VmConfigDriveLabel, VmConfigDriveOnPrimaryPool, HaVmRestartHostUp,
-            ResoureCountRunningVMsonly, AllowExposeHypervisorHostname, AllowExposeHypervisorHostnameAccountLevel,
-            VmServiceOfferingMaxCPUCores, VmServiceOfferingMaxRAMSize };
+        return new ConfigKey<?>[] { ClusterDeltaSyncInterval, StartRetry, VmDestroyForcestop, VmOpCancelInterval, VmOpCleanupInterval, VmOpCleanupWait,
+                VmOpLockStateRetry, VmOpWaitInterval, ExecuteInSequence, VmJobCheckInterval, VmJobTimeout, VmJobStateReportInterval,
+                VmConfigDriveLabel, VmConfigDriveOnPrimaryPool, VmConfigDriveForceHostCacheUse, VmConfigDriveUseHostCacheOnUnsupportedPool,
+                HaVmRestartHostUp, ResoureCountRunningVMsonly, AllowExposeHypervisorHostname, AllowExposeHypervisorHostnameAccountLevel,
+                VmServiceOfferingMaxCPUCores, VmServiceOfferingMaxRAMSize };
     }
 
     public List<StoragePoolAllocator> getStoragePoolAllocators() {
@@ -4777,12 +4805,12 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
                         String.format("VM %s is at %s and we received a %s report while there is no pending jobs on it"
                                 , vm.getInstanceName(), vm.getState(), vm.getPowerState()));
             }
-            if(vm.isHaEnabled() && vm.getState() == State.Running
+            if (vm.isHaEnabled() && vm.getState() == State.Running
                     && HaVmRestartHostUp.value()
                     && vm.getHypervisorType() != HypervisorType.VMware
                     && vm.getHypervisorType() != HypervisorType.Hyperv) {
                 s_logger.info("Detected out-of-band stop of a HA enabled VM " + vm.getInstanceName() + ", will schedule restart");
-                if(!_haMgr.hasPendingHaWork(vm.getId())) {
+                if (!_haMgr.hasPendingHaWork(vm.getId())) {
                     _haMgr.scheduleRestart(vm, true);
                 } else {
                     s_logger.info("VM " + vm.getInstanceName() + " already has an pending HA task working on it");
@@ -4791,13 +4819,20 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
             }
 
             // not when report is missing
-            if(PowerState.PowerOff.equals(vm.getPowerState())) {
+            if (PowerState.PowerOff.equals(vm.getPowerState())) {
                 final VirtualMachineGuru vmGuru = getVmGuru(vm);
                 final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
                 if (!sendStop(vmGuru, profile, true, true)) {
                     // In case StopCommand fails, don't proceed further
                     return;
+                } else {
+                    // Release resources on StopCommand success
+                    releaseVmResources(profile, true);
                 }
+            } else if (PowerState.PowerReportMissing.equals(vm.getPowerState())) {
+                final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
+                // VM will be sync-ed to Stopped state, release the resources
+                releaseVmResources(profile, true);
             }
 
             try {
@@ -5574,10 +5609,9 @@ public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMac
             s_logger.trace(String.format("orchestrating VM start for '%s' %s set to %s", vm.getInstanceName(), VirtualMachineProfile.Param.BootIntoSetup, enterSetup));
         }
 
-        try{
+        try {
             orchestrateStart(vm.getUuid(), work.getParams(), work.getPlan(), _dpMgr.getDeploymentPlannerByName(work.getDeploymentPlanner()));
-        }
-        catch (CloudRuntimeException e){
+        } catch (CloudRuntimeException e) {
             e.printStackTrace();
             s_logger.info("Caught CloudRuntimeException, returning job failed " + e);
             CloudRuntimeException ex = new CloudRuntimeException("Unable to start VM instance");
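    Note: resource cleanup is consolidated in releaseVmResources() and is now also triggered by the
    power-state sync. A condensed sketch of that sync behaviour; the interface below is illustrative,
    the real code calls releaseVmResources(profile, true) in both branches:

        import com.cloud.vm.VirtualMachine.PowerState;

        class PowerSyncSketch {
            interface Actions { boolean sendStop(); void releaseVmResources(); }

            static void onPowerReport(PowerState reported, Actions actions) {
                if (PowerState.PowerOff.equals(reported)) {
                    if (actions.sendStop()) {
                        actions.releaseVmResources();   // stop succeeded, free network + storage
                    }
                } else if (PowerState.PowerReportMissing.equals(reported)) {
                    actions.releaseVmResources();       // VM will be synced to Stopped after the grace period
                }
            }
        }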
diff --git a/engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java b/engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java
index 8c97b47..e6260b8 100644
--- a/engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java
+++ b/engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java
@@ -35,8 +35,6 @@ import javax.inject.Inject;
 import javax.naming.ConfigurationException;
 
 import com.cloud.agent.api.to.DatadiskTO;
-import com.cloud.storage.VolumeDetailVO;
-import com.cloud.storage.dao.VMTemplateDetailsDao;
 import com.cloud.utils.StringUtils;
 import com.cloud.vm.SecondaryStorageVmVO;
 import com.cloud.vm.UserVmDetailVO;
@@ -75,6 +73,8 @@ import org.apache.cloudstack.framework.config.ConfigKey;
 import org.apache.cloudstack.framework.config.Configurable;
 import org.apache.cloudstack.framework.jobs.AsyncJobManager;
 import org.apache.cloudstack.framework.jobs.impl.AsyncJobVO;
+import org.apache.cloudstack.resourcedetail.DiskOfferingDetailVO;
+import org.apache.cloudstack.resourcedetail.dao.DiskOfferingDetailsDao;
 import org.apache.cloudstack.storage.command.CommandResult;
 import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
 import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
@@ -103,6 +103,7 @@ import com.cloud.event.UsageEventUtils;
 import com.cloud.exception.ConcurrentOperationException;
 import com.cloud.exception.InsufficientStorageCapacityException;
 import com.cloud.exception.InvalidParameterValueException;
+import com.cloud.exception.StorageAccessException;
 import com.cloud.exception.StorageUnavailableException;
 import com.cloud.host.Host;
 import com.cloud.host.HostVO;
@@ -122,8 +123,10 @@ import com.cloud.storage.VMTemplateStorageResourceAssoc;
 import com.cloud.storage.Volume;
 import com.cloud.storage.Volume.Type;
 import com.cloud.storage.VolumeApiService;
+import com.cloud.storage.VolumeDetailVO;
 import com.cloud.storage.VolumeVO;
 import com.cloud.storage.dao.SnapshotDao;
+import com.cloud.storage.dao.VMTemplateDetailsDao;
 import com.cloud.storage.dao.VolumeDao;
 import com.cloud.storage.dao.VolumeDetailsDao;
 import com.cloud.template.TemplateManager;
@@ -185,6 +188,8 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
     @Inject
     protected ResourceLimitService _resourceLimitMgr;
     @Inject
+    DiskOfferingDetailsDao _diskOfferingDetailDao;
+    @Inject
     VolumeDetailsDao _volDetailDao;
     @Inject
     DataStoreManager dataStoreMgr;
@@ -748,6 +753,19 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
         vol.setFormat(getSupportedImageFormatForCluster(vm.getHypervisorType()));
         vol = _volsDao.persist(vol);
 
+        List<VolumeDetailVO> volumeDetailsVO = new ArrayList<VolumeDetailVO>();
+        DiskOfferingDetailVO bandwidthLimitDetail = _diskOfferingDetailDao.findDetail(offering.getId(), Volume.BANDWIDTH_LIMIT_IN_MBPS);
+        if (bandwidthLimitDetail != null) {
+            volumeDetailsVO.add(new VolumeDetailVO(vol.getId(), Volume.BANDWIDTH_LIMIT_IN_MBPS, bandwidthLimitDetail.getValue(), false));
+        }
+        DiskOfferingDetailVO iopsLimitDetail = _diskOfferingDetailDao.findDetail(offering.getId(), Volume.IOPS_LIMIT);
+        if (iopsLimitDetail != null) {
+            volumeDetailsVO.add(new VolumeDetailVO(vol.getId(), Volume.IOPS_LIMIT, iopsLimitDetail.getValue(), false));
+        }
+        if (!volumeDetailsVO.isEmpty()) {
+            _volDetailDao.saveDetails(volumeDetailsVO);
+        }
+
         // Save usage event and update resource count for user vm volumes
         if (vm.getType() == VirtualMachine.Type.User) {
             UsageEventUtils.publishUsageEvent(EventTypes.EVENT_VOLUME_CREATE, vol.getAccountId(), vol.getDataCenterId(), vol.getId(), vol.getName(), offering.getId(), null, size,
@@ -801,6 +819,19 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
 
         vol = _volsDao.persist(vol);
 
+        List<VolumeDetailVO> volumeDetailsVO = new ArrayList<VolumeDetailVO>();
+        DiskOfferingDetailVO bandwidthLimitDetail = _diskOfferingDetailDao.findDetail(offering.getId(), Volume.BANDWIDTH_LIMIT_IN_MBPS);
+        if (bandwidthLimitDetail != null) {
+            volumeDetailsVO.add(new VolumeDetailVO(vol.getId(), Volume.BANDWIDTH_LIMIT_IN_MBPS, bandwidthLimitDetail.getValue(), false));
+        }
+        DiskOfferingDetailVO iopsLimitDetail = _diskOfferingDetailDao.findDetail(offering.getId(), Volume.IOPS_LIMIT);
+        if (iopsLimitDetail != null) {
+            volumeDetailsVO.add(new VolumeDetailVO(vol.getId(), Volume.IOPS_LIMIT, iopsLimitDetail.getValue(), false));
+        }
+        if (!volumeDetailsVO.isEmpty()) {
+            _volDetailDao.saveDetails(volumeDetailsVO);
+        }
+
         if (StringUtils.isNotBlank(configurationId)) {
             VolumeDetailVO deployConfigurationDetail = new VolumeDetailVO(vol.getId(), VmDetailConstants.DEPLOY_AS_IS_CONFIGURATION, configurationId, false);
             _volDetailDao.persist(deployConfigurationDetail);
@@ -1010,8 +1041,39 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
     }
 
     @Override
-    public void release(VirtualMachineProfile profile) {
-        // add code here
+    public void release(VirtualMachineProfile vmProfile) {
+        Long hostId = vmProfile.getVirtualMachine().getHostId();
+        if (hostId != null) {
+            revokeAccess(vmProfile.getId(), hostId);
+        }
+    }
+
+    @Override
+    public void release(long vmId, long hostId) {
+        List<VolumeVO> volumesForVm = _volsDao.findUsableVolumesForInstance(vmId);
+        if (volumesForVm == null || volumesForVm.isEmpty()) {
+            return;
+        }
+
+        if (s_logger.isDebugEnabled()) {
+            s_logger.debug("Releasing " + volumesForVm.size() + " volumes for VM: " + vmId + " from host: " + hostId);
+        }
+
+        for (VolumeVO volumeForVm : volumesForVm) {
+            VolumeInfo volumeInfo = volFactory.getVolume(volumeForVm.getId());
+
+            // pool id can be null for the VM's volumes in Allocated state
+            if (volumeForVm.getPoolId() != null) {
+                DataStore dataStore = dataStoreMgr.getDataStore(volumeForVm.getPoolId(), DataStoreRole.Primary);
+                PrimaryDataStore primaryDataStore = (PrimaryDataStore)dataStore;
+                HostVO host = _hostDao.findById(hostId);
+
+                // This might impact other managed storages, grant access for PowerFlex storage pool only
+                if (primaryDataStore.isManaged() && primaryDataStore.getPoolType() == Storage.StoragePoolType.PowerFlex) {
+                    volService.revokeAccess(volumeInfo, host, dataStore);
+                }
+            }
+        }
     }
 
     @Override
@@ -1118,6 +1180,9 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
             VolumeApiResult result = future.get();
             if (result.isFailed()) {
                 s_logger.error("Migrate volume failed:" + result.getResult());
+                if (result.getResult() != null && result.getResult().contains("[UNSUPPORTED]")) {
+                    throw new CloudRuntimeException("Migrate volume failed: " + result.getResult());
+                }
                 throw new StorageUnavailableException("Migrate volume failed: " + result.getResult(), destPool.getId());
             } else {
                 // update the volumeId for snapshots on secondary
@@ -1243,6 +1308,12 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
 
             disk.setDetails(getDetails(volumeInfo, dataStore));
 
+            PrimaryDataStore primaryDataStore = (PrimaryDataStore)dataStore;
+            // This might impact other managed storages, grant access for PowerFlex storage pool only
+            if (primaryDataStore.isManaged() && primaryDataStore.getPoolType() == Storage.StoragePoolType.PowerFlex) {
+                volService.grantAccess(volFactory.getVolume(vol.getId()), dest.getHost(), dataStore);
+            }
+
             vm.addDisk(disk);
         }
 
@@ -1269,6 +1340,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
 
         VolumeVO volume = _volumeDao.findById(volumeInfo.getId());
         details.put(DiskTO.PROTOCOL_TYPE, (volume.getPoolType() != null) ? volume.getPoolType().toString() : null);
+        details.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(storagePool.getId())));
 
          if (volume.getPoolId() != null) {
             StoragePoolVO poolVO = _storagePoolDao.findById(volume.getPoolId());
@@ -1386,7 +1458,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
         return tasks;
     }
 
-    private Pair<VolumeVO, DataStore> recreateVolume(VolumeVO vol, VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException {
+    private Pair<VolumeVO, DataStore> recreateVolume(VolumeVO vol, VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, StorageAccessException {
         VolumeVO newVol;
         boolean recreate = RecreatableSystemVmEnabled.value();
         DataStore destPool = null;
@@ -1430,19 +1502,28 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
                 future = volService.createVolumeAsync(volume, destPool);
             } else {
                 TemplateInfo templ = tmplFactory.getReadyTemplateOnImageStore(templateId, dest.getDataCenter().getId());
+                PrimaryDataStore primaryDataStore = (PrimaryDataStore)destPool;
 
                 if (templ == null) {
                     if (tmplFactory.isTemplateMarkedForDirectDownload(templateId)) {
                         // Template is marked for direct download bypassing Secondary Storage
-                        templ = tmplFactory.getReadyBypassedTemplateOnPrimaryStore(templateId, destPool.getId(), dest.getHost().getId());
+                        if (!primaryDataStore.isManaged()) {
+                            templ = tmplFactory.getReadyBypassedTemplateOnPrimaryStore(templateId, destPool.getId(), dest.getHost().getId());
+                        } else {
+                            s_logger.debug("Direct download template: " + templateId + " on host: " + dest.getHost().getId() + " and copy to the managed storage pool: " + destPool.getId());
+                            templ = volService.createManagedStorageTemplate(templateId, destPool.getId(), dest.getHost().getId());
+                        }
+
+                        if (templ == null) {
+                            s_logger.debug("Failed to spool direct download template: " + templateId + " for data center " + dest.getDataCenter().getId());
+                            throw new CloudRuntimeException("Failed to spool direct download template: " + templateId + " for data center " + dest.getDataCenter().getId());
+                        }
                     } else {
                         s_logger.debug("can't find ready template: " + templateId + " for data center " + dest.getDataCenter().getId());
                         throw new CloudRuntimeException("can't find ready template: " + templateId + " for data center " + dest.getDataCenter().getId());
                     }
                 }
 
-                PrimaryDataStore primaryDataStore = (PrimaryDataStore)destPool;
-
                 if (primaryDataStore.isManaged()) {
                     DiskOffering diskOffering = _entityMgr.findById(DiskOffering.class, volume.getDiskOfferingId());
                     HypervisorType hyperType = vm.getVirtualMachine().getHypervisorType();
@@ -1476,11 +1557,17 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
                     long hostId = vm.getVirtualMachine().getHostId();
                     Host host = _hostDao.findById(hostId);
 
-                    volService.grantAccess(volFactory.getVolume(newVol.getId()), host, destPool);
+                    try {
+                        volService.grantAccess(volFactory.getVolume(newVol.getId()), host, destPool);
+                    } catch (Exception e) {
+                        throw new StorageAccessException("Unable to grant access to volume: " + newVol.getId() + " on host: " + host.getId());
+                    }
                 }
 
                 newVol = _volsDao.findById(newVol.getId());
                 break; //break out of template-redeploy retry loop
+            } catch (StorageAccessException e) {
+                throw e;
             } catch (InterruptedException | ExecutionException e) {
                 s_logger.error("Unable to create " + newVol, e);
                 throw new StorageUnavailableException("Unable to create " + newVol + ":" + e.toString(), destPool.getId());
@@ -1491,7 +1578,7 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
     }
 
     @Override
-    public void prepare(VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, InsufficientStorageCapacityException, ConcurrentOperationException {
+    public void prepare(VirtualMachineProfile vm, DeployDestination dest) throws StorageUnavailableException, InsufficientStorageCapacityException, ConcurrentOperationException, StorageAccessException {
 
         if (dest == null) {
             if (s_logger.isDebugEnabled()) {
@@ -1534,7 +1621,20 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
                             volService.revokeAccess(volFactory.getVolume(vol.getId()), lastHost, storagePool);
                         }
 
-                        volService.grantAccess(volFactory.getVolume(vol.getId()), host, (DataStore)pool);
+                        try {
+                            volService.grantAccess(volFactory.getVolume(vol.getId()), host, (DataStore)pool);
+                        } catch (Exception e) {
+                            throw new StorageAccessException("Unable to grant access to volume: " + vol.getId() + " on host: " + host.getId());
+                        }
+                    } else {
+                        // This might impact other managed storages, grant access for PowerFlex storage pool only
+                        if (pool.getPoolType() == Storage.StoragePoolType.PowerFlex) {
+                            try {
+                                volService.grantAccess(volFactory.getVolume(vol.getId()), host, (DataStore)pool);
+                            } catch (Exception e) {
+                                throw new StorageAccessException("Unable to grant access to volume: " + vol.getId() + " on host: " + host.getId());
+                            }
+                        }
                     }
                 }
             } else if (task.type == VolumeTaskType.MIGRATE) {
@@ -1847,4 +1947,4 @@ public class VolumeOrchestrator extends ManagerBase implements VolumeOrchestrati
             }
         });
     }
-}
\ No newline at end of file
+}
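    Note: getDetails() now adds the protocol type and the per-pool disk wait to the disk details handed
    to the agent. A sketch of the consuming side; the reader class is hypothetical and the "PowerFlex"
    comparison assumes StoragePoolType.PowerFlex.toString() is the stored value:

        import java.util.Map;
        import com.cloud.agent.api.to.DiskTO;
        import com.cloud.storage.StorageManager;

        class DiskDetailsReaderSketch {
            static int diskWaitSeconds(Map<String, String> details) {
                String wait = details.get(StorageManager.STORAGE_POOL_DISK_WAIT.toString());
                return wait != null ? Integer.parseInt(wait) : 60;
            }

            static boolean isPowerFlex(Map<String, String> details) {
                return "PowerFlex".equalsIgnoreCase(details.get(DiskTO.PROTOCOL_TYPE));
            }
        }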
diff --git a/engine/schema/src/main/java/com/cloud/storage/dao/StoragePoolHostDao.java b/engine/schema/src/main/java/com/cloud/storage/dao/StoragePoolHostDao.java
index 8dd10a7..b099a6d 100644
--- a/engine/schema/src/main/java/com/cloud/storage/dao/StoragePoolHostDao.java
+++ b/engine/schema/src/main/java/com/cloud/storage/dao/StoragePoolHostDao.java
@@ -32,6 +32,8 @@ public interface StoragePoolHostDao extends GenericDao<StoragePoolHostVO, Long>
 
     List<StoragePoolHostVO> listByHostStatus(long poolId, Status hostStatus);
 
+    List<Long> findHostsConnectedToPools(List<Long> poolIds);
+
     List<Pair<Long, Integer>> getDatacenterStoragePoolHostInfo(long dcId, boolean sharedOnly);
 
     public void deletePrimaryRecordsForHost(long hostId);
diff --git a/engine/schema/src/main/java/com/cloud/storage/dao/StoragePoolHostDaoImpl.java b/engine/schema/src/main/java/com/cloud/storage/dao/StoragePoolHostDaoImpl.java
index 2b7b0f7..349baf0 100644
--- a/engine/schema/src/main/java/com/cloud/storage/dao/StoragePoolHostDaoImpl.java
+++ b/engine/schema/src/main/java/com/cloud/storage/dao/StoragePoolHostDaoImpl.java
@@ -21,6 +21,7 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.List;
+import java.util.stream.Collectors;
 
 
 import org.apache.log4j.Logger;
@@ -44,6 +45,8 @@ public class StoragePoolHostDaoImpl extends GenericDaoBase<StoragePoolHostVO, Lo
 
     protected static final String HOST_FOR_POOL_SEARCH = "SELECT * FROM storage_pool_host_ref ph,  host h where  ph.host_id = h.id and ph.pool_id=? and h.status=? ";
 
+    protected static final String HOSTS_FOR_POOLS_SEARCH = "SELECT DISTINCT(ph.host_id) FROM storage_pool_host_ref ph, host h WHERE ph.host_id = h.id AND h.status = 'Up' AND resource_state = 'Enabled' AND ph.pool_id IN (?)";
+
     protected static final String STORAGE_POOL_HOST_INFO = "SELECT p.data_center_id,  count(ph.host_id) " + " FROM storage_pool p, storage_pool_host_ref ph "
         + " WHERE p.id = ph.pool_id AND p.data_center_id = ? " + " GROUP by p.data_center_id";
 
@@ -122,6 +125,33 @@ public class StoragePoolHostDaoImpl extends GenericDaoBase<StoragePoolHostVO, Lo
     }
 
     @Override
+    public List<Long> findHostsConnectedToPools(List<Long> poolIds) {
+        List<Long> hosts = new ArrayList<Long>();
+        if (poolIds == null || poolIds.isEmpty()) {
+            return hosts;
+        }
+
+        String poolIdsInStr = poolIds.stream().map(poolId -> String.valueOf(poolId)).collect(Collectors.joining(",", "(", ")"));
+        String sql = HOSTS_FOR_POOLS_SEARCH.replace("(?)", poolIdsInStr);
+
+        TransactionLegacy txn = TransactionLegacy.currentTxn();
+        try(PreparedStatement pstmt = txn.prepareStatement(sql);) {
+            try(ResultSet rs = pstmt.executeQuery();) {
+                while (rs.next()) {
+                    long hostId = rs.getLong(1); // host_id column
+                    hosts.add(hostId);
+                }
+            } catch (SQLException e) {
+                s_logger.warn("findHostsConnectedToPools:Exception: ", e);
+            }
+        } catch (Exception e) {
+            s_logger.warn("findHostsConnectedToPools:Exception: ", e);
+        }
+
+        return hosts;
+    }
+
+    @Override
     public List<Pair<Long, Integer>> getDatacenterStoragePoolHostInfo(long dcId, boolean sharedOnly) {
         ArrayList<Pair<Long, Integer>> l = new ArrayList<Pair<Long, Integer>>();
         String sql = sharedOnly ? SHARED_STORAGE_POOL_HOST_INFO : STORAGE_POOL_HOST_INFO;
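
Side note on HOSTS_FOR_POOLS_SEARCH: a JDBC "IN (?)" placeholder cannot be bound to a variable-length list, so the implementation splices the numeric pool ids directly into the SQL text (safe here because the ids come from the database, not from user input). A minimal, standalone sketch of that string construction:

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    public class InClauseSketch {
        public static void main(String[] args) {
            List<Long> poolIds = Arrays.asList(1L, 2L, 3L);   // made-up ids, for illustration only
            String inClause = poolIds.stream()
                    .map(String::valueOf)
                    .collect(Collectors.joining(",", "(", ")"));
            // Prints: SELECT DISTINCT(ph.host_id) FROM storage_pool_host_ref ph WHERE ph.pool_id IN (1,2,3)
            System.out.println("SELECT DISTINCT(ph.host_id) FROM storage_pool_host_ref ph WHERE ph.pool_id IN " + inClause);
        }
    }
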
diff --git a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/DataMotionServiceImpl.java b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/DataMotionServiceImpl.java
index ac6c855..71c1dce 100644
--- a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/DataMotionServiceImpl.java
+++ b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/DataMotionServiceImpl.java
@@ -61,10 +61,10 @@ public class DataMotionServiceImpl implements DataMotionService {
         }
 
         if (srcData.getDataStore().getDriver().canCopy(srcData, destData)) {
-            srcData.getDataStore().getDriver().copyAsync(srcData, destData, callback);
+            srcData.getDataStore().getDriver().copyAsync(srcData, destData, destHost, callback);
             return;
         } else if (destData.getDataStore().getDriver().canCopy(srcData, destData)) {
-            destData.getDataStore().getDriver().copyAsync(srcData, destData, callback);
+            destData.getDataStore().getDriver().copyAsync(srcData, destData, destHost, callback);
             return;
         }
 
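
For context, a sketch of what the extra destHost argument enables on the driver side (the override below is an assumption drawn from the call sites above, not code from this patch): drivers that need host-scoped access, such as the PowerFlex driver mapping an SDC, can use the destination host before copying, while other drivers can simply delegate to the original two-object path.

    // Hedged sketch of a driver implementation; parameter types follow the calls above.
    @Override
    public void copyAsync(DataObject srcData, DataObject destData, Host destHost,
            AsyncCompletionCallback<CopyCommandResult> callback) {
        // This driver does not care about the destination host, so delegate.
        copyAsync(srcData, destData, callback);
    }
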
diff --git a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/KvmNonManagedStorageDataMotionStrategy.java b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/KvmNonManagedStorageDataMotionStrategy.java
index 9718596..bf8761e 100644
--- a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/KvmNonManagedStorageDataMotionStrategy.java
+++ b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/KvmNonManagedStorageDataMotionStrategy.java
@@ -53,6 +53,7 @@ import com.cloud.storage.StorageManager;
 import com.cloud.storage.StoragePool;
 import com.cloud.storage.VMTemplateStoragePoolVO;
 import com.cloud.storage.VMTemplateStorageResourceAssoc;
+import com.cloud.storage.Volume;
 import com.cloud.storage.VolumeVO;
 import com.cloud.storage.dao.VMTemplatePoolDao;
 import com.cloud.utils.exception.CloudRuntimeException;
@@ -195,6 +196,10 @@ public class KvmNonManagedStorageDataMotionStrategy extends StorageSystemDataMot
     @Override
     protected void copyTemplateToTargetFilesystemStorageIfNeeded(VolumeInfo srcVolumeInfo, StoragePool srcStoragePool, DataStore destDataStore, StoragePool destStoragePool,
             Host destHost) {
+        if (srcVolumeInfo.getVolumeType() != Volume.Type.ROOT || srcVolumeInfo.getTemplateId() == null) {
+            return;
+        }
+
         VMTemplateStoragePoolVO sourceVolumeTemplateStoragePoolVO = vmTemplatePoolDao.findByPoolTemplate(destStoragePool.getId(), srcVolumeInfo.getTemplateId(), null);
         if (sourceVolumeTemplateStoragePoolVO == null && destStoragePool.getPoolType() == StoragePoolType.Filesystem) {
             DataStore sourceTemplateDataStore = dataStoreManagerImpl.getRandomImageStore(srcVolumeInfo.getDataCenterId());
diff --git a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java
index 936f062..952dbb2 100644
--- a/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java
+++ b/engine/storage/datamotion/src/main/java/org/apache/cloudstack/storage/motion/StorageSystemDataMotionStrategy.java
@@ -574,6 +574,14 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
         }
     }
 
+    private void verifyFormatWithPoolType(ImageFormat imageFormat, StoragePoolType poolType) {
+        if (imageFormat != ImageFormat.VHD && imageFormat != ImageFormat.OVA && imageFormat != ImageFormat.QCOW2 &&
+                !(imageFormat == ImageFormat.RAW && StoragePoolType.PowerFlex == poolType)) {
+            throw new CloudRuntimeException("Only the following image types are currently supported: " +
+                    ImageFormat.VHD.toString() + ", " + ImageFormat.OVA.toString() + ", " + ImageFormat.QCOW2.toString() + ", and " + ImageFormat.RAW.toString() + " (for PowerFlex)");
+        }
+    }
+
     private void verifyFormat(ImageFormat imageFormat) {
         if (imageFormat != ImageFormat.VHD && imageFormat != ImageFormat.OVA && imageFormat != ImageFormat.QCOW2) {
             throw new CloudRuntimeException("Only the following image types are currently supported: " +
@@ -585,8 +593,9 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
         long volumeId = snapshotInfo.getVolumeId();
 
         VolumeVO volumeVO = _volumeDao.findByIdIncludingRemoved(volumeId);
+        StoragePoolVO storagePoolVO = _storagePoolDao.findById(volumeVO.getPoolId());
 
-        verifyFormat(volumeVO.getFormat());
+        verifyFormatWithPoolType(volumeVO.getFormat(), storagePoolVO.getPoolType());
     }
 
     private boolean usingBackendSnapshotFor(SnapshotInfo snapshotInfo) {
@@ -735,6 +744,7 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
         details.put(DiskTO.MANAGED, Boolean.TRUE.toString());
         details.put(DiskTO.IQN, destVolumeInfo.get_iScsiName());
         details.put(DiskTO.STORAGE_HOST, destPool.getHostAddress());
+        details.put(DiskTO.PROTOCOL_TYPE, (destPool.getPoolType() != null) ? destPool.getPoolType().toString() : null);
 
         command.setDestDetails(details);
 
@@ -916,6 +926,11 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
             boolean keepGrantedAccess = false;
 
             DataStore srcDataStore = snapshotInfo.getDataStore();
+            StoragePoolVO storagePoolVO = _storagePoolDao.findById(srcDataStore.getId());
+
+            if (HypervisorType.KVM.equals(snapshotInfo.getHypervisorType()) && storagePoolVO.getPoolType() == StoragePoolType.PowerFlex) {
+                usingBackendSnapshot = false;
+            }
 
             if (usingBackendSnapshot) {
                 createVolumeFromSnapshot(snapshotInfo);
@@ -1309,7 +1324,13 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
             Preconditions.checkArgument(volumeInfo != null, "Passing 'null' to volumeInfo of " +
                             "handleCreateVolumeFromTemplateBothOnStorageSystem is not supported.");
 
-            verifyFormat(templateInfo.getFormat());
+            DataStore dataStore = volumeInfo.getDataStore();
+            if (dataStore.getRole() == DataStoreRole.Primary) {
+                StoragePoolVO storagePoolVO = _storagePoolDao.findById(dataStore.getId());
+                verifyFormatWithPoolType(templateInfo.getFormat(), storagePoolVO.getPoolType());
+            } else {
+                verifyFormat(templateInfo.getFormat());
+            }
 
             HostVO hostVO = null;
 
@@ -1786,6 +1807,11 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
                 StoragePoolVO destStoragePool = _storagePoolDao.findById(destDataStore.getId());
                 StoragePoolVO sourceStoragePool = _storagePoolDao.findById(srcVolumeInfo.getPoolId());
 
+                // do not initiate migration for the same PowerFlex/ScaleIO pool
+                if (sourceStoragePool.getId() == destStoragePool.getId() && sourceStoragePool.getPoolType() == Storage.StoragePoolType.PowerFlex) {
+                    continue;
+                }
+
                 if (!shouldMigrateVolume(sourceStoragePool, destHost, destStoragePool)) {
                     continue;
                 }
@@ -1894,13 +1920,11 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
 
                 throw new CloudRuntimeException(errMsg);
             }
-        }
-        catch (Exception ex) {
+        } catch (Exception ex) {
             errMsg = "Copy operation failed in 'StorageSystemDataMotionStrategy.copyAsync': " + ex.getMessage();
-
+            LOGGER.error(errMsg, ex);
             throw new CloudRuntimeException(errMsg);
-        }
-        finally {
+        } finally {
             CopyCmdAnswer copyCmdAnswer = new CopyCmdAnswer(errMsg);
 
             CopyCommandResult result = new CopyCommandResult(null, copyCmdAnswer);
@@ -2197,10 +2221,6 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
                 throw new CloudRuntimeException("Volume with ID " + volumeInfo.getId() + " is not associated with a storage pool.");
             }
 
-            if (srcStoragePoolVO.isManaged()) {
-                throw new CloudRuntimeException("Migrating a volume online with KVM from managed storage is not currently supported.");
-            }
-
             DataStore dataStore = entry.getValue();
             StoragePoolVO destStoragePoolVO = _storagePoolDao.findById(dataStore.getId());
 
@@ -2208,6 +2228,10 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
                 throw new CloudRuntimeException("Destination storage pool with ID " + dataStore.getId() + " was not located.");
             }
 
+            if (srcStoragePoolVO.isManaged() && srcStoragePoolVO.getId() != destStoragePoolVO.getId()) {
+                throw new CloudRuntimeException("Migrating a volume online with KVM from managed storage is not currently supported.");
+            }
+
             if (storageTypeConsistency == null) {
                 storageTypeConsistency = destStoragePoolVO.isManaged();
             } else if (storageTypeConsistency != destStoragePoolVO.isManaged()) {
@@ -2301,7 +2325,9 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
         CopyCmdAnswer copyCmdAnswer = null;
 
         try {
-            if (!ImageFormat.QCOW2.equals(volumeInfo.getFormat())) {
+            StoragePoolVO storagePoolVO = _storagePoolDao.findById(volumeInfo.getPoolId());
+
+            if (!ImageFormat.QCOW2.equals(volumeInfo.getFormat()) && !(ImageFormat.RAW.equals(volumeInfo.getFormat()) && StoragePoolType.PowerFlex == storagePoolVO.getPoolType())) {
                 throw new CloudRuntimeException("When using managed storage, you can only create a template from a volume on KVM currently.");
             }
 
@@ -2317,7 +2343,7 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
             try {
                 handleQualityOfServiceForVolumeMigration(volumeInfo, PrimaryDataStoreDriver.QualityOfServiceState.MIGRATION);
 
-                if (srcVolumeDetached) {
+                if (srcVolumeDetached || StoragePoolType.PowerFlex == storagePoolVO.getPoolType()) {
                     _volumeService.grantAccess(volumeInfo, hostVO, srcDataStore);
                 }
 
@@ -2349,7 +2375,7 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
                 throw new CloudRuntimeException(msg + ex.getMessage(), ex);
             }
             finally {
-                if (srcVolumeDetached) {
+                if (srcVolumeDetached || StoragePoolType.PowerFlex == storagePoolVO.getPoolType()) {
                     try {
                         _volumeService.revokeAccess(volumeInfo, hostVO, srcDataStore);
                     }
@@ -2415,6 +2441,8 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
         volumeDetails.put(DiskTO.STORAGE_HOST, storagePoolVO.getHostAddress());
         volumeDetails.put(DiskTO.STORAGE_PORT, String.valueOf(storagePoolVO.getPort()));
         volumeDetails.put(DiskTO.IQN, volumeVO.get_iScsiName());
+        volumeDetails.put(DiskTO.PROTOCOL_TYPE, (volumeVO.getPoolType() != null) ? volumeVO.getPoolType().toString() : null);
+        volumeDetails.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(storagePoolVO.getId())));
 
         volumeDetails.put(DiskTO.VOLUME_SIZE, String.valueOf(volumeVO.getSize()));
         volumeDetails.put(DiskTO.SCSI_NAA_DEVICE_ID, getVolumeProperty(volumeInfo.getId(), DiskTO.SCSI_NAA_DEVICE_ID));
@@ -2442,7 +2470,12 @@ public class StorageSystemDataMotionStrategy implements DataMotionStrategy {
 
         long snapshotId = snapshotInfo.getId();
 
-        snapshotDetails.put(DiskTO.IQN, getSnapshotProperty(snapshotId, DiskTO.IQN));
+        if (storagePoolVO.getPoolType() == StoragePoolType.PowerFlex) {
+            snapshotDetails.put(DiskTO.IQN, snapshotInfo.getPath());
+        } else {
+            snapshotDetails.put(DiskTO.IQN, getSnapshotProperty(snapshotId, DiskTO.IQN));
+        }
+
         snapshotDetails.put(DiskTO.VOLUME_SIZE, String.valueOf(snapshotInfo.getSize()));
         snapshotDetails.put(DiskTO.SCSI_NAA_DEVICE_ID, getSnapshotProperty(snapshotId, DiskTO.SCSI_NAA_DEVICE_ID));
 
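
In summary, the relaxed format check accepts RAW only when the volume or template sits on a PowerFlex pool; everything else keeps the original VHD/OVA/QCOW2 rule. A compact sketch of the predicate (the method name is illustrative; the enums are the CloudStack types used above):

    private static boolean isSupportedFormat(ImageFormat format, StoragePoolType poolType) {
        return format == ImageFormat.VHD
                || format == ImageFormat.OVA
                || format == ImageFormat.QCOW2
                || (format == ImageFormat.RAW && StoragePoolType.PowerFlex == poolType);
    }
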
diff --git a/engine/storage/datamotion/src/test/java/org/apache/cloudstack/storage/motion/KvmNonManagedStorageSystemDataMotionTest.java b/engine/storage/datamotion/src/test/java/org/apache/cloudstack/storage/motion/KvmNonManagedStorageSystemDataMotionTest.java
index ba7fb74..609742b 100644
--- a/engine/storage/datamotion/src/test/java/org/apache/cloudstack/storage/motion/KvmNonManagedStorageSystemDataMotionTest.java
+++ b/engine/storage/datamotion/src/test/java/org/apache/cloudstack/storage/motion/KvmNonManagedStorageSystemDataMotionTest.java
@@ -70,6 +70,7 @@ import com.cloud.storage.Storage.ImageFormat;
 import com.cloud.storage.Storage.StoragePoolType;
 import com.cloud.storage.StoragePool;
 import com.cloud.storage.VMTemplateStoragePoolVO;
+import com.cloud.storage.Volume;
 import com.cloud.storage.dao.DiskOfferingDao;
 import com.cloud.storage.dao.VMTemplatePoolDao;
 import com.cloud.utils.exception.CloudRuntimeException;
@@ -327,6 +328,7 @@ public class KvmNonManagedStorageSystemDataMotionTest {
 
         VolumeInfo srcVolumeInfo = Mockito.mock(VolumeInfo.class);
         Mockito.when(srcVolumeInfo.getTemplateId()).thenReturn(0l);
+        Mockito.when(srcVolumeInfo.getVolumeType()).thenReturn(Volume.Type.ROOT);
 
         StoragePool srcStoragePool = Mockito.mock(StoragePool.class);
 
@@ -465,6 +467,8 @@ public class KvmNonManagedStorageSystemDataMotionTest {
     @Test(expected = CloudRuntimeException.class)
     public void testVerifyLiveMigrationMapForKVMMixedManagedUnmagedStorage() {
         when(pool1.isManaged()).thenReturn(true);
+        when(pool1.getId()).thenReturn(POOL_1_ID);
+        when(pool2.getId()).thenReturn(POOL_2_ID);
         lenient().when(pool2.isManaged()).thenReturn(false);
         kvmNonManagedStorageDataMotionStrategy.verifyLiveMigrationForKVM(migrationMap, host2);
     }
diff --git a/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/TemplateDataFactoryImpl.java b/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/TemplateDataFactoryImpl.java
index 1590fe0..c720b28 100644
--- a/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/TemplateDataFactoryImpl.java
+++ b/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/TemplateDataFactoryImpl.java
@@ -44,6 +44,7 @@ import com.cloud.host.HostVO;
 import com.cloud.host.dao.HostDao;
 import com.cloud.storage.DataStoreRole;
 import com.cloud.storage.VMTemplateStoragePoolVO;
+import com.cloud.storage.VMTemplateStorageResourceAssoc;
 import com.cloud.storage.VMTemplateVO;
 import com.cloud.storage.dao.VMTemplateDao;
 import com.cloud.storage.dao.VMTemplatePoolDao;
@@ -80,6 +81,16 @@ public class TemplateDataFactoryImpl implements TemplateDataFactory {
     }
 
     @Override
+    public TemplateInfo getTemplate(long templateId) {
+        VMTemplateVO templ = imageDataDao.findById(templateId);
+        if (templ != null) {
+            TemplateObject tmpl = TemplateObject.getTemplate(templ, null, null);
+            return tmpl;
+        }
+        return null;
+    }
+
+    @Override
     public TemplateInfo getTemplate(long templateId, DataStore store) {
         VMTemplateVO templ = imageDataDao.findById(templateId);
         if (store == null && !templ.isDirectDownload()) {
@@ -245,6 +256,33 @@ public class TemplateDataFactoryImpl implements TemplateDataFactory {
     }
 
     @Override
+    public TemplateInfo getReadyBypassedTemplateOnManagedStorage(long templateId, TemplateInfo templateOnPrimary, Long poolId, Long hostId) {
+        VMTemplateVO templateVO = imageDataDao.findById(templateId);
+        if (templateVO == null || !templateVO.isDirectDownload()) {
+            return null;
+        }
+
+        if (poolId == null) {
+            throw new CloudRuntimeException("No storage pool specified to download template: " + templateId);
+        }
+
+        StoragePoolVO poolVO = primaryDataStoreDao.findById(poolId);
+        if (poolVO == null || !poolVO.isManaged()) {
+            return null;
+        }
+
+        VMTemplateStoragePoolVO spoolRef = templatePoolDao.findByPoolTemplate(poolId, templateId, null);
+        if (spoolRef == null) {
+            throw new CloudRuntimeException("Template not created on managed storage pool: " + poolId + " to copy the download template: " + templateId);
+        } else if (spoolRef.getDownloadState() == VMTemplateStorageResourceAssoc.Status.NOT_DOWNLOADED) {
+            directDownloadManager.downloadTemplate(templateId, poolId, hostId);
+        }
+
+        DataStore store = storeMgr.getDataStore(poolId, DataStoreRole.Primary);
+        return this.getTemplate(templateId, store);
+    }
+
+    @Override
     public boolean isTemplateMarkedForDirectDownload(long templateId) {
         VMTemplateVO templateVO = imageDataDao.findById(templateId);
         return templateVO.isDirectDownload();
diff --git a/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/TemplateServiceImpl.java b/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/TemplateServiceImpl.java
index ed9359d..ef0ef7e 100644
--- a/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/TemplateServiceImpl.java
+++ b/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/TemplateServiceImpl.java
@@ -917,7 +917,14 @@ public class TemplateServiceImpl implements TemplateService {
         TemplateOpContext<TemplateApiResult> context = new TemplateOpContext<TemplateApiResult>(null, to, future);
         AsyncCallbackDispatcher<TemplateServiceImpl, CommandResult> caller = AsyncCallbackDispatcher.create(this);
         caller.setCallback(caller.getTarget().deleteTemplateCallback(null, null)).setContext(context);
-        to.getDataStore().getDriver().deleteAsync(to.getDataStore(), to, caller);
+
+        if (to.canBeDeletedFromDataStore()) {
+            to.getDataStore().getDriver().deleteAsync(to.getDataStore(), to, caller);
+        } else {
+            CommandResult result = new CommandResult();
+            caller.complete(result);
+        }
+
         return future;
     }
 
diff --git a/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/store/TemplateObject.java b/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/store/TemplateObject.java
index b7a44cd..d96b618 100644
--- a/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/store/TemplateObject.java
+++ b/engine/storage/image/src/main/java/org/apache/cloudstack/storage/image/store/TemplateObject.java
@@ -375,6 +375,35 @@ public class TemplateObject implements TemplateInfo {
     }
 
     @Override
+    public boolean canBeDeletedFromDataStore() {
+        Status downloadStatus = Status.UNKNOWN;
+        int downloadPercent = -1;
+        if (getDataStore().getRole() == DataStoreRole.Primary) {
+            VMTemplateStoragePoolVO templatePoolRef = templatePoolDao.findByPoolTemplate(getDataStore().getId(), getId(), null);
+            if (templatePoolRef != null) {
+                downloadStatus = templatePoolRef.getDownloadState();
+                downloadPercent = templatePoolRef.getDownloadPercent();
+            }
+        } else if (dataStore.getRole() == DataStoreRole.Image || dataStore.getRole() == DataStoreRole.ImageCache) {
+            TemplateDataStoreVO templateStoreRef = templateStoreDao.findByStoreTemplate(dataStore.getId(), getId());
+            if (templateStoreRef != null) {
+                downloadStatus = templateStoreRef.getDownloadState();
+                downloadPercent = templateStoreRef.getDownloadPercent();
+                templateStoreRef.getState();
+            }
+        }
+
+        // Downloaded templates are marked for deletion; templates that were never downloaded, or that failed
+        // before any data was written (download error with no install path), have nothing on the datastore and
+        // so cannot be deleted from it. Behavior for templates in all other states is unchanged.
+        if (downloadStatus == null || downloadStatus == Status.NOT_DOWNLOADED || (downloadStatus == Status.DOWNLOAD_ERROR && downloadPercent == 0)) {
+            s_logger.debug("Template: " + getId() + " cannot be deleted from the store: " + getDataStore().getId());
+            return false;
+        }
+
+        return true;
+    }
+
+    @Override
     public boolean isDeployAsIs() {
         if (this.imageVO == null) {
             return false;
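
Taken together with the TemplateServiceImpl change above, deletion of a template reference can be summarized by the sketch below (simplified, not code from the patch): when nothing was ever written to the datastore, the async result is completed immediately instead of sending a DeleteCommand.

    // Simplified flow, mirroring the delete path and canBeDeletedFromDataStore():
    if (templateInfo.canBeDeletedFromDataStore()) {
        dataStore.getDriver().deleteAsync(dataStore, templateInfo, caller);   // data exists on the store
    } else {
        caller.complete(new CommandResult());   // never downloaded / failed at 0%: mark Destroyed only
    }
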
diff --git a/engine/storage/snapshot/pom.xml b/engine/storage/snapshot/pom.xml
index 40e513b..a5d27ae 100644
--- a/engine/storage/snapshot/pom.xml
+++ b/engine/storage/snapshot/pom.xml
@@ -50,6 +50,12 @@
             <artifactId>cloud-engine-storage-volume</artifactId>
             <version>${project.version}</version>
         </dependency>
+        <dependency>
+            <groupId>org.apache.cloudstack</groupId>
+            <artifactId>cloud-plugin-storage-volume-scaleio</artifactId>
+            <version>${project.version}</version>
+            <scope>compile</scope>
+        </dependency>
     </dependencies>
     <build>
         <plugins>
diff --git a/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/snapshot/ScaleIOSnapshotStrategy.java b/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/snapshot/ScaleIOSnapshotStrategy.java
new file mode 100644
index 0000000..dfe4750
--- /dev/null
+++ b/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/snapshot/ScaleIOSnapshotStrategy.java
@@ -0,0 +1,93 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.cloudstack.storage.snapshot;
+
+import javax.inject.Inject;
+
+import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.StrategyPriority;
+import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
+import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreVO;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
+import org.apache.log4j.Logger;
+
+import com.cloud.storage.DataStoreRole;
+import com.cloud.storage.Snapshot;
+import com.cloud.storage.Storage;
+import com.cloud.storage.VolumeVO;
+import com.cloud.storage.dao.VolumeDao;
+
+public class ScaleIOSnapshotStrategy extends StorageSystemSnapshotStrategy {
+    @Inject
+    private SnapshotDataStoreDao snapshotStoreDao;
+    @Inject
+    private PrimaryDataStoreDao primaryDataStoreDao;
+    @Inject
+    private VolumeDao volumeDao;
+
+    private static final Logger LOG = Logger.getLogger(ScaleIOSnapshotStrategy.class);
+
+    @Override
+    public StrategyPriority canHandle(Snapshot snapshot, SnapshotOperation op) {
+        long volumeId = snapshot.getVolumeId();
+        VolumeVO volumeVO = volumeDao.findByIdIncludingRemoved(volumeId);
+        boolean baseVolumeExists = volumeVO.getRemoved() == null;
+        if (!baseVolumeExists) {
+            return StrategyPriority.CANT_HANDLE;
+        }
+
+        if (!isSnapshotStoredOnScaleIOStoragePool(snapshot)) {
+            return StrategyPriority.CANT_HANDLE;
+        }
+
+        if (SnapshotOperation.REVERT.equals(op)) {
+            return StrategyPriority.HIGHEST;
+        }
+
+        if (SnapshotOperation.DELETE.equals(op)) {
+            return StrategyPriority.HIGHEST;
+        }
+
+        return StrategyPriority.CANT_HANDLE;
+    }
+
+    @Override
+    public boolean revertSnapshot(SnapshotInfo snapshotInfo) {
+        VolumeInfo volumeInfo = snapshotInfo.getBaseVolume();
+        Storage.ImageFormat imageFormat = volumeInfo.getFormat();
+        if (!Storage.ImageFormat.RAW.equals(imageFormat)) {
+            LOG.error(String.format("Does not support revert snapshot of the image format [%s] on PowerFlex. Can only rollback snapshots of format RAW", imageFormat));
+            return false;
+        }
+
+        executeRevertSnapshot(snapshotInfo, volumeInfo);
+
+        return true;
+    }
+
+    protected boolean isSnapshotStoredOnScaleIOStoragePool(Snapshot snapshot) {
+        SnapshotDataStoreVO snapshotStore = snapshotStoreDao.findBySnapshot(snapshot.getId(), DataStoreRole.Primary);
+        if (snapshotStore == null) {
+            return false;
+        }
+        StoragePoolVO storagePoolVO = primaryDataStoreDao.findById(snapshotStore.getDataStoreId());
+        return storagePoolVO != null && storagePoolVO.getPoolType() == Storage.StoragePoolType.PowerFlex;
+    }
+}
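
For orientation (the factory call below is an assumption based on CloudStack's strategy-selection pattern, not part of this patch): the snapshot strategy reporting the highest priority wins, so this class takes over REVERT and DELETE for snapshots whose base volume still exists on a PowerFlex pool, while everything else falls through to the generic strategies.

    // Illustrative selection/dispatch sketch; names outside this patch are assumptions.
    SnapshotStrategy strategy = storageStrategyFactory.getSnapshotStrategy(snapshot, SnapshotOperation.REVERT);
    if (strategy != null) {
        strategy.revertSnapshot(snapshotInfo);   // resolves to ScaleIOSnapshotStrategy on PowerFlex pools
    }
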
diff --git a/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/snapshot/StorageSystemSnapshotStrategy.java b/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/snapshot/StorageSystemSnapshotStrategy.java
index 33d43d7..6401f8a 100644
--- a/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/snapshot/StorageSystemSnapshotStrategy.java
+++ b/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/snapshot/StorageSystemSnapshotStrategy.java
@@ -16,6 +16,37 @@
 // under the License.
 package org.apache.cloudstack.storage.snapshot;
 
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Random;
+import java.util.UUID;
+
+import javax.inject.Inject;
+
+import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreCapabilities;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
+import org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine;
+import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotDataFactory;
+import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotResult;
+import org.apache.cloudstack.engine.subsystem.api.storage.StrategyPriority;
+import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.VolumeService;
+import org.apache.cloudstack.storage.command.SnapshotAndCopyAnswer;
+import org.apache.cloudstack.storage.command.SnapshotAndCopyCommand;
+import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreVO;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
+import org.apache.log4j.Logger;
+import org.springframework.stereotype.Component;
+
 import com.cloud.agent.AgentManager;
 import com.cloud.agent.api.Answer;
 import com.cloud.agent.api.ModifyTargetsCommand;
@@ -38,18 +69,18 @@ import com.cloud.storage.Snapshot;
 import com.cloud.storage.SnapshotVO;
 import com.cloud.storage.Storage.ImageFormat;
 import com.cloud.storage.Volume;
+import com.cloud.storage.VolumeDetailVO;
 import com.cloud.storage.VolumeVO;
 import com.cloud.storage.dao.SnapshotDao;
 import com.cloud.storage.dao.SnapshotDetailsDao;
 import com.cloud.storage.dao.SnapshotDetailsVO;
 import com.cloud.storage.dao.VolumeDao;
 import com.cloud.storage.dao.VolumeDetailsDao;
-import com.cloud.storage.VolumeDetailVO;
 import com.cloud.utils.db.DB;
 import com.cloud.utils.exception.CloudRuntimeException;
 import com.cloud.utils.fsm.NoTransitionException;
-import com.cloud.vm.VirtualMachine;
 import com.cloud.vm.VMInstanceVO;
+import com.cloud.vm.VirtualMachine;
 import com.cloud.vm.dao.VMInstanceDao;
 import com.cloud.vm.snapshot.VMSnapshot;
 import com.cloud.vm.snapshot.VMSnapshotService;
@@ -57,37 +88,6 @@ import com.cloud.vm.snapshot.VMSnapshotVO;
 import com.cloud.vm.snapshot.dao.VMSnapshotDao;
 import com.google.common.base.Preconditions;
 
-import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
-import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
-import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreCapabilities;
-import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
-import org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine;
-import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotDataFactory;
-import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
-import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotResult;
-import org.apache.cloudstack.engine.subsystem.api.storage.StrategyPriority;
-import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
-import org.apache.cloudstack.engine.subsystem.api.storage.VolumeService;
-import org.apache.cloudstack.storage.command.SnapshotAndCopyAnswer;
-import org.apache.cloudstack.storage.command.SnapshotAndCopyCommand;
-import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
-import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
-import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreVO;
-import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
-import org.apache.log4j.Logger;
-import org.springframework.stereotype.Component;
-
-import javax.inject.Inject;
-
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Optional;
-import java.util.Random;
-import java.util.UUID;
-
 @Component
 public class StorageSystemSnapshotStrategy extends SnapshotStrategyBase {
     private static final Logger s_logger = Logger.getLogger(StorageSystemSnapshotStrategy.class);
@@ -241,15 +241,16 @@ public class StorageSystemSnapshotStrategy extends SnapshotStrategyBase {
     }
 
     private boolean isAcceptableRevertFormat(VolumeVO volumeVO) {
-        return ImageFormat.VHD.equals(volumeVO.getFormat()) || ImageFormat.OVA.equals(volumeVO.getFormat()) || ImageFormat.QCOW2.equals(volumeVO.getFormat());
+        return ImageFormat.VHD.equals(volumeVO.getFormat()) || ImageFormat.OVA.equals(volumeVO.getFormat())
+                || ImageFormat.QCOW2.equals(volumeVO.getFormat()) || ImageFormat.RAW.equals(volumeVO.getFormat());
     }
 
     private void verifyFormat(VolumeInfo volumeInfo) {
         ImageFormat imageFormat = volumeInfo.getFormat();
 
-        if (imageFormat != ImageFormat.VHD && imageFormat != ImageFormat.OVA && imageFormat != ImageFormat.QCOW2) {
+        if (imageFormat != ImageFormat.VHD && imageFormat != ImageFormat.OVA && imageFormat != ImageFormat.QCOW2 && imageFormat != ImageFormat.RAW) {
             throw new CloudRuntimeException("Only the following image types are currently supported: " +
-                    ImageFormat.VHD.toString() + ", " + ImageFormat.OVA.toString() + ", and " + ImageFormat.QCOW2);
+                    ImageFormat.VHD.toString() + ", " + ImageFormat.OVA.toString() + ", " + ImageFormat.QCOW2 + ", and " + ImageFormat.RAW);
         }
     }
 
@@ -456,7 +457,7 @@ public class StorageSystemSnapshotStrategy extends SnapshotStrategyBase {
 
             computeClusterSupportsVolumeClone = clusterDao.getSupportsResigning(hostVO.getClusterId());
         }
-        else if (volumeInfo.getFormat() == ImageFormat.OVA || volumeInfo.getFormat() == ImageFormat.QCOW2) {
+        else if (volumeInfo.getFormat() == ImageFormat.OVA || volumeInfo.getFormat() == ImageFormat.QCOW2 || volumeInfo.getFormat() == ImageFormat.RAW) {
             computeClusterSupportsVolumeClone = true;
         }
         else {
@@ -760,6 +761,7 @@ public class StorageSystemSnapshotStrategy extends SnapshotStrategyBase {
         sourceDetails.put(DiskTO.STORAGE_HOST, storagePoolVO.getHostAddress());
         sourceDetails.put(DiskTO.STORAGE_PORT, String.valueOf(storagePoolVO.getPort()));
         sourceDetails.put(DiskTO.IQN, volumeVO.get_iScsiName());
+        sourceDetails.put(DiskTO.PROTOCOL_TYPE, (storagePoolVO.getPoolType() != null) ? storagePoolVO.getPoolType().toString() : null);
 
         ChapInfo chapInfo = volService.getChapInfo(volumeInfo, volumeInfo.getDataStore());
 
@@ -778,6 +780,7 @@ public class StorageSystemSnapshotStrategy extends SnapshotStrategyBase {
 
         destDetails.put(DiskTO.STORAGE_HOST, storagePoolVO.getHostAddress());
         destDetails.put(DiskTO.STORAGE_PORT, String.valueOf(storagePoolVO.getPort()));
+        destDetails.put(DiskTO.PROTOCOL_TYPE, (storagePoolVO.getPoolType() != null) ? storagePoolVO.getPoolType().toString() : null);
 
         long snapshotId = snapshotInfo.getId();
 
diff --git a/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/vmsnapshot/ScaleIOVMSnapshotStrategy.java b/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/vmsnapshot/ScaleIOVMSnapshotStrategy.java
new file mode 100644
index 0000000..a124a4a
--- /dev/null
+++ b/engine/storage/snapshot/src/main/java/org/apache/cloudstack/storage/vmsnapshot/ScaleIOVMSnapshotStrategy.java
@@ -0,0 +1,487 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.cloudstack.storage.vmsnapshot;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import javax.inject.Inject;
+import javax.naming.ConfigurationException;
+
+import org.apache.cloudstack.engine.subsystem.api.storage.StrategyPriority;
+import org.apache.cloudstack.engine.subsystem.api.storage.VMSnapshotStrategy;
+import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
+import org.apache.cloudstack.storage.datastore.api.SnapshotGroup;
+import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClient;
+import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClientConnectionPool;
+import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
+import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
+import org.apache.cloudstack.storage.to.VolumeObjectTO;
+import org.apache.log4j.Logger;
+
+import com.cloud.agent.api.VMSnapshotTO;
+import com.cloud.alert.AlertManager;
+import com.cloud.event.EventTypes;
+import com.cloud.event.UsageEventUtils;
+import com.cloud.event.UsageEventVO;
+import com.cloud.server.ManagementServerImpl;
+import com.cloud.storage.DiskOfferingVO;
+import com.cloud.storage.Storage;
+import com.cloud.storage.VolumeVO;
+import com.cloud.storage.dao.DiskOfferingDao;
+import com.cloud.storage.dao.VolumeDao;
+import com.cloud.uservm.UserVm;
+import com.cloud.utils.NumbersUtil;
+import com.cloud.utils.component.ManagerBase;
+import com.cloud.utils.db.DB;
+import com.cloud.utils.db.Transaction;
+import com.cloud.utils.db.TransactionCallbackWithExceptionNoReturn;
+import com.cloud.utils.db.TransactionStatus;
+import com.cloud.utils.exception.CloudRuntimeException;
+import com.cloud.utils.fsm.NoTransitionException;
+import com.cloud.vm.UserVmVO;
+import com.cloud.vm.dao.UserVmDao;
+import com.cloud.vm.snapshot.VMSnapshot;
+import com.cloud.vm.snapshot.VMSnapshotDetailsVO;
+import com.cloud.vm.snapshot.VMSnapshotVO;
+import com.cloud.vm.snapshot.dao.VMSnapshotDao;
+import com.cloud.vm.snapshot.dao.VMSnapshotDetailsDao;
+
+public class ScaleIOVMSnapshotStrategy extends ManagerBase implements VMSnapshotStrategy {
+    private static final Logger LOGGER = Logger.getLogger(ScaleIOVMSnapshotStrategy.class);
+    @Inject
+    VMSnapshotHelper vmSnapshotHelper;
+    @Inject
+    UserVmDao userVmDao;
+    @Inject
+    VMSnapshotDao vmSnapshotDao;
+    @Inject
+    protected VMSnapshotDetailsDao vmSnapshotDetailsDao;
+    int _wait;
+    @Inject
+    ConfigurationDao configurationDao;
+    @Inject
+    VolumeDao volumeDao;
+    @Inject
+    DiskOfferingDao diskOfferingDao;
+    @Inject
+    PrimaryDataStoreDao storagePoolDao;
+    @Inject
+    StoragePoolDetailsDao storagePoolDetailsDao;
+    @Inject
+    AlertManager alertManager;
+
+    @Override
+    public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
+        String value = configurationDao.getValue("vmsnapshot.create.wait");
+        _wait = NumbersUtil.parseInt(value, 1800);
+        return true;
+    }
+
+    @Override
+    public StrategyPriority canHandle(VMSnapshot vmSnapshot) {
+        List<VolumeObjectTO> volumeTOs = vmSnapshotHelper.getVolumeTOList(vmSnapshot.getVmId());
+        if (volumeTOs == null) {
+            throw new CloudRuntimeException("Failed to get the volumes for the vm snapshot: " + vmSnapshot.getUuid());
+        }
+
+        if (volumeTOs != null && !volumeTOs.isEmpty()) {
+            for (VolumeObjectTO volumeTO: volumeTOs) {
+                Long poolId  = volumeTO.getPoolId();
+                Storage.StoragePoolType poolType = vmSnapshotHelper.getStoragePoolType(poolId);
+                if (poolType != Storage.StoragePoolType.PowerFlex) {
+                    return StrategyPriority.CANT_HANDLE;
+                }
+            }
+        }
+
+        return StrategyPriority.HIGHEST;
+    }
+
+    @Override
+    public VMSnapshot takeVMSnapshot(VMSnapshot vmSnapshot) {
+        UserVm userVm = userVmDao.findById(vmSnapshot.getVmId());
+        VMSnapshotVO vmSnapshotVO = (VMSnapshotVO)vmSnapshot;
+
+        try {
+            vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshotVO, VMSnapshot.Event.CreateRequested);
+        } catch (NoTransitionException e) {
+            throw new CloudRuntimeException(e.getMessage());
+        }
+
+        boolean result = false;
+        try {
+            Map<String, String> srcVolumeDestSnapshotMap = new HashMap<>();
+            List<VolumeObjectTO> volumeTOs = vmSnapshotHelper.getVolumeTOList(userVm.getId());
+
+            final Long storagePoolId = vmSnapshotHelper.getStoragePoolForVM(userVm.getId());
+            StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId);
+            long prev_chain_size = 0;
+            long virtual_size = 0;
+            for (VolumeObjectTO volume : volumeTOs) {
+                String volumeSnapshotName = String.format("%s-%s-%s-%s-%s", ScaleIOUtil.VMSNAPSHOT_PREFIX, vmSnapshotVO.getId(), volume.getId(),
+                        storagePool.getUuid().split("-")[0].substring(4), ManagementServerImpl.customCsIdentifier.value());
+                srcVolumeDestSnapshotMap.put(ScaleIOUtil.getVolumePath(volume.getPath()), volumeSnapshotName);
+
+                virtual_size += volume.getSize();
+                VolumeVO volumeVO = volumeDao.findById(volume.getId());
+                prev_chain_size += volumeVO.getVmSnapshotChainSize() == null ? 0 : volumeVO.getVmSnapshotChainSize();
+            }
+
+            VMSnapshotTO current = null;
+            VMSnapshotVO currentSnapshot = vmSnapshotDao.findCurrentSnapshotByVmId(userVm.getId());
+            if (currentSnapshot != null) {
+                current = vmSnapshotHelper.getSnapshotWithParents(currentSnapshot);
+            }
+
+            if (current == null) {
+                vmSnapshotVO.setParent(null);
+            } else {
+                vmSnapshotVO.setParent(current.getId());
+            }
+
+            try {
+                final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
+                SnapshotGroup snapshotGroup = client.takeSnapshot(srcVolumeDestSnapshotMap);
+                if (snapshotGroup == null) {
+                    throw new CloudRuntimeException("Failed to take VM snapshot on PowerFlex storage pool");
+                }
+
+                String snapshotGroupId = snapshotGroup.getSnapshotGroupId();
+                List<String> volumeIds = snapshotGroup.getVolumeIds();
+                if (volumeIds != null && !volumeIds.isEmpty()) {
+                    List<VMSnapshotDetailsVO> vmSnapshotDetails = new ArrayList<VMSnapshotDetailsVO>();
+                    vmSnapshotDetails.add(new VMSnapshotDetailsVO(vmSnapshot.getId(), "SnapshotGroupId", snapshotGroupId, false));
+
+                    for (int index = 0; index < volumeIds.size(); index++) {
+                        String volumeSnapshotName = srcVolumeDestSnapshotMap.get(ScaleIOUtil.getVolumePath(volumeTOs.get(index).getPath()));
+                        String pathWithScaleIOVolumeName = ScaleIOUtil.updatedPathWithVolumeName(volumeIds.get(index), volumeSnapshotName);
+                        vmSnapshotDetails.add(new VMSnapshotDetailsVO(vmSnapshot.getId(), "Vol_" + volumeTOs.get(index).getId() + "_Snapshot", pathWithScaleIOVolumeName, false));
+                    }
+
+                    vmSnapshotDetailsDao.saveDetails(vmSnapshotDetails);
+                }
+
+                finalizeCreate(vmSnapshotVO, volumeTOs);
+                result = true;
+                LOGGER.debug("Create vm snapshot " + vmSnapshot.getName() + " succeeded for vm: " + userVm.getInstanceName());
+
+                long new_chain_size = 0;
+                for (VolumeObjectTO volumeTo : volumeTOs) {
+                    publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_CREATE, vmSnapshot, userVm, volumeTo);
+                    new_chain_size += volumeTo.getSize();
+                }
+                publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_ON_PRIMARY, vmSnapshot, userVm, new_chain_size - prev_chain_size, virtual_size);
+                return vmSnapshot;
+            } catch (Exception e) {
+                String errMsg = "Unable to take vm snapshot due to: " + e.getMessage();
+                LOGGER.warn(errMsg, e);
+                throw new CloudRuntimeException(errMsg);
+            }
+        } finally {
+            if (!result) {
+                try {
+                    vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.OperationFailed);
+
+                    String subject = "Take snapshot failed for VM: " + userVm.getDisplayName();
+                    String message = "Snapshot operation failed for VM: " + userVm.getDisplayName() + ", Please check and delete if any stale volumes created with VM snapshot id: " + vmSnapshot.getVmId();
+                    alertManager.sendAlert(AlertManager.AlertType.ALERT_TYPE_VM_SNAPSHOT, userVm.getDataCenterId(), userVm.getPodIdToDeployIn(), subject, message);
+                } catch (NoTransitionException e1) {
+                    LOGGER.error("Cannot set vm snapshot state due to: " + e1.getMessage());
+                }
+            }
+        }
+    }
+
+    @DB
+    protected void finalizeCreate(VMSnapshotVO vmSnapshot, List<VolumeObjectTO> volumeTOs) {
+        try {
+            Transaction.execute(new TransactionCallbackWithExceptionNoReturn<NoTransitionException>() {
+                @Override
+                public void doInTransactionWithoutResult(TransactionStatus status) throws NoTransitionException {
+                    // update chain size for the volumes in the VM snapshot
+                    for (VolumeObjectTO volume : volumeTOs) {
+                        VolumeVO volumeVO = volumeDao.findById(volume.getId());
+                        if (volumeVO != null) {
+                            long vmSnapshotChainSize = volumeVO.getVmSnapshotChainSize() == null ? 0 : volumeVO.getVmSnapshotChainSize();
+                            vmSnapshotChainSize += volumeVO.getSize();
+                            volumeVO.setVmSnapshotChainSize(vmSnapshotChainSize);
+                            volumeDao.persist(volumeVO);
+                        }
+                    }
+
+                    vmSnapshot.setCurrent(true);
+
+                    // change current snapshot
+                    if (vmSnapshot.getParent() != null) {
+                        VMSnapshotVO previousCurrent = vmSnapshotDao.findById(vmSnapshot.getParent());
+                        previousCurrent.setCurrent(false);
+                        vmSnapshotDao.persist(previousCurrent);
+                    }
+                    vmSnapshotDao.persist(vmSnapshot);
+
+                    vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.OperationSucceeded);
+                }
+            });
+        } catch (Exception e) {
+            String errMsg = "Error while finalize create vm snapshot: " + vmSnapshot.getName() + " due to " + e.getMessage();
+            LOGGER.error(errMsg, e);
+            throw new CloudRuntimeException(errMsg);
+        }
+    }
+
+    @Override
+    public boolean revertVMSnapshot(VMSnapshot vmSnapshot) {
+        VMSnapshotVO vmSnapshotVO = (VMSnapshotVO)vmSnapshot;
+        UserVmVO userVm = userVmDao.findById(vmSnapshot.getVmId());
+
+        try {
+            vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshotVO, VMSnapshot.Event.RevertRequested);
+        } catch (NoTransitionException e) {
+            throw new CloudRuntimeException(e.getMessage());
+        }
+
+        boolean result = false;
+        try {
+            List<VolumeObjectTO> volumeTOs = vmSnapshotHelper.getVolumeTOList(userVm.getId());
+            Long storagePoolId = vmSnapshotHelper.getStoragePoolForVM(userVm.getId());
+            Map<String, String> srcSnapshotDestVolumeMap = new HashMap<>();
+            for (VolumeObjectTO volume : volumeTOs) {
+                VMSnapshotDetailsVO vmSnapshotDetail = vmSnapshotDetailsDao.findDetail(vmSnapshotVO.getId(), "Vol_" + volume.getId() + "_Snapshot");
+                String srcSnapshotVolumeId = ScaleIOUtil.getVolumePath(vmSnapshotDetail.getValue());
+                String destVolumeId = ScaleIOUtil.getVolumePath(volume.getPath());
+                srcSnapshotDestVolumeMap.put(srcSnapshotVolumeId, destVolumeId);
+            }
+
+            String systemId = storagePoolDetailsDao.findDetail(storagePoolId, ScaleIOGatewayClient.STORAGE_POOL_SYSTEM_ID).getValue();
+            if (systemId == null) {
+                throw new CloudRuntimeException("Failed to get the system id for PowerFlex storage pool for reverting VM snapshot: " + vmSnapshot.getName());
+            }
+
+            final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
+            result = client.revertSnapshot(systemId, srcSnapshotDestVolumeMap);
+            if (!result) {
+                throw new CloudRuntimeException("Failed to revert VM snapshot on PowerFlex storage pool");
+            }
+
+            finalizeRevert(vmSnapshotVO, volumeTOs);
+            result = true;
+        } catch (Exception e) {
+            String errMsg = "Revert VM: " + userVm.getInstanceName() + " to snapshot: " + vmSnapshotVO.getName() + " failed due to " + e.getMessage();
+            LOGGER.error(errMsg, e);
+            throw new CloudRuntimeException(errMsg);
+        } finally {
+            if (!result) {
+                try {
+                    vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.OperationFailed);
+                } catch (NoTransitionException e1) {
+                    LOGGER.error("Cannot set vm snapshot state due to: " + e1.getMessage());
+                }
+            }
+        }
+        return result;
+    }
+
+    @DB
+    protected void finalizeRevert(VMSnapshotVO vmSnapshot, List<VolumeObjectTO> volumeToList) {
+        try {
+            Transaction.execute(new TransactionCallbackWithExceptionNoReturn<NoTransitionException>() {
+                @Override
+                public void doInTransactionWithoutResult(TransactionStatus status) throws NoTransitionException {
+                    // update chain size for the volumes in the VM snapshot
+                    for (VolumeObjectTO volume : volumeToList) {
+                        VolumeVO volumeVO = volumeDao.findById(volume.getId());
+                        if (volumeVO != null && volumeVO.getVmSnapshotChainSize() != null && volumeVO.getVmSnapshotChainSize() >= volumeVO.getSize()) {
+                            long vmSnapshotChainSize = volumeVO.getVmSnapshotChainSize() - volumeVO.getSize();
+                            volumeVO.setVmSnapshotChainSize(vmSnapshotChainSize);
+                            volumeDao.persist(volumeVO);
+                        }
+                    }
+
+                    // update current snapshot, current snapshot is the one reverted to
+                    VMSnapshotVO previousCurrent = vmSnapshotDao.findCurrentSnapshotByVmId(vmSnapshot.getVmId());
+                    if (previousCurrent != null) {
+                        previousCurrent.setCurrent(false);
+                        vmSnapshotDao.persist(previousCurrent);
+                    }
+                    vmSnapshot.setCurrent(true);
+                    vmSnapshotDao.persist(vmSnapshot);
+
+                    vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.OperationSucceeded);
+                }
+            });
+        } catch (Exception e) {
+            String errMsg = "Error while finalize revert vm snapshot: " + vmSnapshot.getName() + " due to " + e.getMessage();
+            LOGGER.error(errMsg, e);
+            throw new CloudRuntimeException(errMsg);
+        }
+    }
+
+    @Override
+    public boolean deleteVMSnapshot(VMSnapshot vmSnapshot) {
+        UserVmVO userVm = userVmDao.findById(vmSnapshot.getVmId());
+        VMSnapshotVO vmSnapshotVO = (VMSnapshotVO)vmSnapshot;
+
+        try {
+            vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.ExpungeRequested);
+        } catch (NoTransitionException e) {
+            LOGGER.debug("Failed to change vm snapshot state with event ExpungeRequested");
+            throw new CloudRuntimeException("Failed to change vm snapshot state with event ExpungeRequested: " + e.getMessage());
+        }
+
+        try {
+            List<VolumeObjectTO> volumeTOs = vmSnapshotHelper.getVolumeTOList(vmSnapshot.getVmId());
+            Long storagePoolId = vmSnapshotHelper.getStoragePoolForVM(userVm.getId());
+            String systemId = storagePoolDetailsDao.findDetail(storagePoolId, ScaleIOGatewayClient.STORAGE_POOL_SYSTEM_ID).getValue();
+            if (systemId == null) {
+                throw new CloudRuntimeException("Failed to get the system id for PowerFlex storage pool for deleting VM snapshot: " + vmSnapshot.getName());
+            }
+
+            VMSnapshotDetailsVO vmSnapshotDetailsVO = vmSnapshotDetailsDao.findDetail(vmSnapshot.getId(), "SnapshotGroupId");
+            if (vmSnapshotDetailsVO == null) {
+                throw new CloudRuntimeException("Failed to get snapshot group id for the VM snapshot: " + vmSnapshot.getName());
+            }
+
+            String snapshotGroupId = vmSnapshotDetailsVO.getValue();
+            final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
+            int volumesDeleted = client.deleteSnapshotGroup(systemId, snapshotGroupId);
+            if (volumesDeleted <= 0) {
+                throw new CloudRuntimeException("Failed to delete VM snapshot: " + vmSnapshot.getName());
+            } else if (volumesDeleted != volumeTOs.size()) {
+                LOGGER.warn("Unable to delete all volumes of the VM snapshot: " + vmSnapshot.getName());
+            }
+
+            finalizeDelete(vmSnapshotVO, volumeTOs);
+            long full_chain_size = 0;
+            for (VolumeObjectTO volumeTo : volumeTOs) {
+                publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_DELETE, vmSnapshot, userVm, volumeTo);
+                full_chain_size += volumeTo.getSize();
+            }
+            publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_OFF_PRIMARY, vmSnapshot, userVm, full_chain_size, 0L);
+            return true;
+        } catch (Exception e) {
+            String errMsg = "Unable to delete vm snapshot: " + vmSnapshot.getName() + " of vm " + userVm.getInstanceName() + " due to " + e.getMessage();
+            LOGGER.warn(errMsg, e);
+            throw new CloudRuntimeException(errMsg);
+        }
+    }
+
+    @DB
+    protected void finalizeDelete(VMSnapshotVO vmSnapshot, List<VolumeObjectTO> volumeTOs) {
+        try {
+            Transaction.execute(new TransactionCallbackWithExceptionNoReturn<NoTransitionException>() {
+                @Override
+                public void doInTransactionWithoutResult(TransactionStatus status) throws NoTransitionException {
+                    // update chain size for the volumes in the VM snapshot
+                    for (VolumeObjectTO volume : volumeTOs) {
+                        VolumeVO volumeVO = volumeDao.findById(volume.getId());
+                        if (volumeVO != null && volumeVO.getVmSnapshotChainSize() != null && volumeVO.getVmSnapshotChainSize() >= volumeVO.getSize()) {
+                            long vmSnapshotChainSize = volumeVO.getVmSnapshotChainSize() - volumeVO.getSize();
+                            volumeVO.setVmSnapshotChainSize(vmSnapshotChainSize);
+                            volumeDao.persist(volumeVO);
+                        }
+                    }
+
+                    // update children's parent snapshots
+                    List<VMSnapshotVO> children = vmSnapshotDao.listByParent(vmSnapshot.getId());
+                    for (VMSnapshotVO child : children) {
+                        child.setParent(vmSnapshot.getParent());
+                        vmSnapshotDao.persist(child);
+                    }
+
+                    // update current snapshot
+                    VMSnapshotVO current = vmSnapshotDao.findCurrentSnapshotByVmId(vmSnapshot.getVmId());
+                    if (current != null && current.getId() == vmSnapshot.getId() && vmSnapshot.getParent() != null) {
+                        VMSnapshotVO parent = vmSnapshotDao.findById(vmSnapshot.getParent());
+                        parent.setCurrent(true);
+                        vmSnapshotDao.persist(parent);
+                    }
+                    vmSnapshot.setCurrent(false);
+                    vmSnapshotDao.persist(vmSnapshot);
+
+                    vmSnapshotDao.remove(vmSnapshot.getId());
+                }
+            });
+        } catch (Exception e) {
+            String errMsg = "Error while finalize delete vm snapshot: " + vmSnapshot.getName() + " due to " + e.getMessage();
+            LOGGER.error(errMsg, e);
+            throw new CloudRuntimeException(errMsg);
+        }
+    }
+
+    @Override
+    public boolean deleteVMSnapshotFromDB(VMSnapshot vmSnapshot, boolean unmanage) {
+        try {
+            vmSnapshotHelper.vmSnapshotStateTransitTo(vmSnapshot, VMSnapshot.Event.ExpungeRequested);
+        } catch (NoTransitionException e) {
+            LOGGER.debug("Failed to change vm snapshot state with event ExpungeRequested");
+            throw new CloudRuntimeException("Failed to change vm snapshot state with event ExpungeRequested: " + e.getMessage());
+        }
+        UserVm userVm = userVmDao.findById(vmSnapshot.getVmId());
+        List<VolumeObjectTO> volumeTOs = vmSnapshotHelper.getVolumeTOList(userVm.getId());
+        long fullChainSize = 0;
+        for (VolumeObjectTO volumeTo : volumeTOs) {
+            volumeTo.setSize(0);
+            publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_DELETE, vmSnapshot, userVm, volumeTo);
+            fullChainSize += volumeTo.getSize();
+        }
+        if (unmanage) {
+            publishUsageEvent(EventTypes.EVENT_VM_SNAPSHOT_OFF_PRIMARY, vmSnapshot, userVm, fullChainSize, 0L);
+        }
+        return vmSnapshotDao.remove(vmSnapshot.getId());
+    }
+
+    private void publishUsageEvent(String type, VMSnapshot vmSnapshot, UserVm userVm, VolumeObjectTO volumeTo) {
+        VolumeVO volume = volumeDao.findById(volumeTo.getId());
+        Long diskOfferingId = volume.getDiskOfferingId();
+        Long offeringId = null;
+        if (diskOfferingId != null) {
+            DiskOfferingVO offering = diskOfferingDao.findById(diskOfferingId);
+            if (offering != null && (offering.getType() == DiskOfferingVO.Type.Disk)) {
+                offeringId = offering.getId();
+            }
+        }
+        Map<String, String> details = new HashMap<>();
+        if (vmSnapshot != null) {
+            details.put(UsageEventVO.DynamicParameters.vmSnapshotId.name(), String.valueOf(vmSnapshot.getId()));
+        }
+        UsageEventUtils.publishUsageEvent(type, vmSnapshot.getAccountId(), userVm.getDataCenterId(), userVm.getId(), vmSnapshot.getName(), offeringId, volume.getId(), // save volume's id into templateId field
+                volumeTo.getSize(), VMSnapshot.class.getName(), vmSnapshot.getUuid(), details);
+    }
+
+    private void publishUsageEvent(String type, VMSnapshot vmSnapshot, UserVm userVm, Long vmSnapSize, Long virtualSize) {
+        try {
+            Map<String, String> details = new HashMap<>();
+            if (vmSnapshot != null) {
+                details.put(UsageEventVO.DynamicParameters.vmSnapshotId.name(), String.valueOf(vmSnapshot.getId()));
+            }
+            UsageEventUtils.publishUsageEvent(type, vmSnapshot.getAccountId(), userVm.getDataCenterId(), userVm.getId(), vmSnapshot.getName(), 0L, 0L, vmSnapSize, virtualSize,
+                    VMSnapshot.class.getName(), vmSnapshot.getUuid(), details);
+        } catch (Exception e) {
+            LOGGER.error("Failed to publish usage event " + type, e);
+        }
+    }
+
+    private ScaleIOGatewayClient getScaleIOClient(final Long storagePoolId) throws Exception {
+        return ScaleIOGatewayClientConnectionPool.getInstance().getClient(storagePoolId, storagePoolDetailsDao);
+    }
+}
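
Note: the delete path above drives the whole operation through the ScaleIO gateway client, so its core fits in a few lines. The snippet below is a minimal sketch only, reusing the client lookup and deleteSnapshotGroup() call shown in the strategy; the wrapper method and variable names are placeholders, not part of the patch.

    // Minimal sketch, assuming the same imports as ScaleIOVMSnapshotStrategy
    // (ScaleIOGatewayClient, ScaleIOGatewayClientConnectionPool, CloudRuntimeException).
    // deleteSnapshotGroup() returns the number of volumes removed for the group.
    private int deleteVmSnapshotGroup(Long storagePoolId, String systemId, String snapshotGroupId) throws Exception {
        final ScaleIOGatewayClient client =
                ScaleIOGatewayClientConnectionPool.getInstance().getClient(storagePoolId, storagePoolDetailsDao);
        int volumesDeleted = client.deleteSnapshotGroup(systemId, snapshotGroupId);
        if (volumesDeleted <= 0) {
            throw new CloudRuntimeException("No volumes were deleted for snapshot group: " + snapshotGroupId);
        }
        return volumesDeleted;
    }
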
diff --git a/engine/storage/snapshot/src/main/resources/META-INF/cloudstack/storage/spring-engine-storage-snapshot-storage-context.xml b/engine/storage/snapshot/src/main/resources/META-INF/cloudstack/storage/spring-engine-storage-snapshot-storage-context.xml
index 2bfb3c3..2084ce2 100644
--- a/engine/storage/snapshot/src/main/resources/META-INF/cloudstack/storage/spring-engine-storage-snapshot-storage-context.xml
+++ b/engine/storage/snapshot/src/main/resources/META-INF/cloudstack/storage/spring-engine-storage-snapshot-storage-context.xml
@@ -36,7 +36,13 @@
     <bean id="cephSnapshotStrategy"
         class="org.apache.cloudstack.storage.snapshot.CephSnapshotStrategy" />
 
+    <bean id="scaleioSnapshotStrategy"
+          class="org.apache.cloudstack.storage.snapshot.ScaleIOSnapshotStrategy" />
+
     <bean id="DefaultVMSnapshotStrategy"
         class="org.apache.cloudstack.storage.vmsnapshot.DefaultVMSnapshotStrategy" />
 
+    <bean id="ScaleIOVMSnapshotStrategy"
+          class="org.apache.cloudstack.storage.vmsnapshot.ScaleIOVMSnapshotStrategy" />
+
 </beans>
diff --git a/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java b/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java
index cfe32c2..2a1c257 100644
--- a/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java
+++ b/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java
@@ -28,13 +28,13 @@ import javax.naming.ConfigurationException;
 
 import com.cloud.exception.StorageUnavailableException;
 import com.cloud.storage.StoragePoolStatus;
-import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
-import org.apache.log4j.Logger;
 
 import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
 import org.apache.cloudstack.engine.subsystem.api.storage.StoragePoolAllocator;
 import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
 import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
+import org.apache.log4j.Logger;
 
 import com.cloud.capacity.Capacity;
 import com.cloud.capacity.dao.CapacityDao;
@@ -211,12 +211,16 @@ public abstract class AbstractStoragePoolAllocator extends AdapterBase implement
             return false;
         }
 
+        Volume volume = volumeDao.findById(dskCh.getVolumeId());
+        if (!storageMgr.storagePoolCompatibleWithVolumePool(pool, volume)) {
+            return false;
+        }
+
         if (pool.isManaged() && !storageUtil.managedStoragePoolCanScale(pool, plan.getClusterId(), plan.getHostId())) {
             return false;
         }
 
         // check capacity
-        Volume volume = volumeDao.findById(dskCh.getVolumeId());
         List<Volume> requestVolumes = new ArrayList<>();
         requestVolumes.add(volume);
         if (dskCh.getHypervisorType() == HypervisorType.VMware) {
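
The compatibility check above now runs before the managed-storage scaling and capacity checks, so incompatible pools are rejected early. The actual rule lives in StorageManager and is not part of this hunk; the sketch below is a hypothetical illustration of the kind of constraint it is expected to enforce for PowerFlex (a volume already on a PowerFlex pool should only be placed on another PowerFlex pool, and a non-PowerFlex volume should not land on one).

    // Hypothetical illustration only; the real storagePoolCompatibleWithVolumePool()
    // implementation in StorageManagerImpl may differ.
    static boolean compatibleWithVolumePool(StoragePool pool, VolumeVO volume) {
        if (volume == null || volume.getPoolId() == null) {
            return true; // volume not yet placed, nothing to compare against
        }
        boolean volumeOnPowerFlex = volume.getPoolType() == Storage.StoragePoolType.PowerFlex;
        boolean poolIsPowerFlex = pool.getPoolType() == Storage.StoragePoolType.PowerFlex;
        return volumeOnPowerFlex == poolIsPowerFlex;
    }
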
diff --git a/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/ZoneWideStoragePoolAllocator.java b/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/ZoneWideStoragePoolAllocator.java
index 301704a..225f781 100644
--- a/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/ZoneWideStoragePoolAllocator.java
+++ b/engine/storage/src/main/java/org/apache/cloudstack/storage/allocator/ZoneWideStoragePoolAllocator.java
@@ -48,15 +48,10 @@ public class ZoneWideStoragePoolAllocator extends AbstractStoragePoolAllocator {
     @Inject
     private CapacityDao capacityDao;
 
-
     @Override
     protected List<StoragePool> select(DiskProfile dskCh, VirtualMachineProfile vmProfile, DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
         LOGGER.debug("ZoneWideStoragePoolAllocator to find storage pool");
 
-        if (dskCh.useLocalStorage()) {
-            return null;
-        }
-
         if (LOGGER.isTraceEnabled()) {
             // Log the pools details that are ignored because they are in disabled state
             List<StoragePoolVO> disabledPools = storagePoolDao.findDisabledPoolsByScope(plan.getDataCenterId(), null, null, ScopeType.ZONE);
@@ -92,7 +87,6 @@ public class ZoneWideStoragePoolAllocator extends AbstractStoragePoolAllocator {
             avoid.addPool(pool.getId());
         }
 
-
         for (StoragePoolVO storage : storagePools) {
             if (suitablePools.size() == returnUpTo) {
                 break;
@@ -114,7 +108,6 @@ public class ZoneWideStoragePoolAllocator extends AbstractStoragePoolAllocator {
         return !ScopeType.ZONE.equals(storagePoolVO.getScope()) || !storagePoolVO.isManaged();
     }
 
-
     @Override
     protected List<StoragePool> reorderPoolsByCapacity(DeploymentPlan plan,
         List<StoragePool> pools) {
diff --git a/engine/storage/src/main/java/org/apache/cloudstack/storage/helper/VMSnapshotHelperImpl.java b/engine/storage/src/main/java/org/apache/cloudstack/storage/helper/VMSnapshotHelperImpl.java
index cadbad3..0184244 100644
--- a/engine/storage/src/main/java/org/apache/cloudstack/storage/helper/VMSnapshotHelperImpl.java
+++ b/engine/storage/src/main/java/org/apache/cloudstack/storage/helper/VMSnapshotHelperImpl.java
@@ -37,6 +37,7 @@ import com.cloud.exception.InvalidParameterValueException;
 import com.cloud.host.Host;
 import com.cloud.host.HostVO;
 import com.cloud.host.dao.HostDao;
+import com.cloud.storage.Storage;
 import com.cloud.storage.VolumeVO;
 import com.cloud.storage.dao.VolumeDao;
 import com.cloud.utils.fsm.NoTransitionException;
@@ -148,4 +149,33 @@ public class VMSnapshotHelperImpl implements VMSnapshotHelper {
         return result;
     }
 
+    @Override
+    public Long getStoragePoolForVM(Long vmId) {
+        List<VolumeVO> rootVolumes = volumeDao.findReadyRootVolumesByInstance(vmId);
+        if (rootVolumes == null || rootVolumes.isEmpty()) {
+            throw new InvalidParameterValueException("Failed to find root volume for the user vm:" + vmId);
+        }
+
+        VolumeVO rootVolume = rootVolumes.get(0);
+        StoragePoolVO rootVolumePool = primaryDataStoreDao.findById(rootVolume.getPoolId());
+        if (rootVolumePool == null) {
+            throw new InvalidParameterValueException("Failed to find root volume storage pool for the user vm:" + vmId);
+        }
+
+        if (rootVolumePool.isInMaintenance()) {
+            throw new InvalidParameterValueException("Storage pool for the user vm:" + vmId + " is in maintenance");
+        }
+
+        return rootVolumePool.getId();
+    }
+
+    @Override
+    public Storage.StoragePoolType getStoragePoolType(Long poolId) {
+        StoragePoolVO storagePool = primaryDataStoreDao.findById(poolId);
+        if (storagePool == null) {
+            throw new InvalidParameterValueException("storage pool is not found");
+        }
+
+        return storagePool.getPoolType();
+    }
 }
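
The two helpers added above let a VM snapshot strategy pick the right implementation for a VM without duplicating the root-volume lookup. A short usage sketch (the method name here is a placeholder; the actual wiring lives in ScaleIOVMSnapshotStrategy elsewhere in this patch):

    // Illustrative only: decide whether a VM's root disk lives on PowerFlex storage.
    boolean isVmOnPowerFlexStorage(VMSnapshotHelper vmSnapshotHelper, Long vmId) {
        Long poolId = vmSnapshotHelper.getStoragePoolForVM(vmId); // validated root-volume pool
        return Storage.StoragePoolType.PowerFlex == vmSnapshotHelper.getStoragePoolType(poolId);
    }
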
diff --git a/engine/storage/src/main/java/org/apache/cloudstack/storage/image/BaseImageStoreDriverImpl.java b/engine/storage/src/main/java/org/apache/cloudstack/storage/image/BaseImageStoreDriverImpl.java
index 9cf73e6..0c55545 100644
--- a/engine/storage/src/main/java/org/apache/cloudstack/storage/image/BaseImageStoreDriverImpl.java
+++ b/engine/storage/src/main/java/org/apache/cloudstack/storage/image/BaseImageStoreDriverImpl.java
@@ -71,6 +71,7 @@ import com.cloud.alert.AlertManager;
 import com.cloud.configuration.Config;
 import com.cloud.exception.AgentUnavailableException;
 import com.cloud.exception.OperationTimedoutException;
+import com.cloud.host.Host;
 import com.cloud.host.dao.HostDao;
 import com.cloud.secstorage.CommandExecLogDao;
 import com.cloud.secstorage.CommandExecLogVO;
@@ -363,6 +364,11 @@ public abstract class BaseImageStoreDriverImpl implements ImageStoreDriver {
         }
     }
 
+    @Override
+    public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
+        copyAsync(srcData, destData, callback);
+    }
+
     private Answer sendToLeastBusyEndpoint(List<EndPoint> eps, CopyCommand cmd) {
         Answer answer = null;
         EndPoint endPoint = null;
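
Image store drivers have no use for the destination-host hint, so the new overload above simply delegates to the host-agnostic copyAsync(). A primary-storage driver that does need the hint could override the same method; the sketch below is hypothetical (grantAccessOnHost() is a placeholder, not an existing API).

    // Hypothetical override in a managed primary-storage driver.
    @Override
    public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
        if (destHost != null) {
            grantAccessOnHost(destData, destHost); // placeholder for a driver-specific access grant
        }
        copyAsync(srcData, destData, callback);
    }
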
diff --git a/engine/storage/src/main/java/org/apache/cloudstack/storage/vmsnapshot/VMSnapshotHelper.java b/engine/storage/src/main/java/org/apache/cloudstack/storage/vmsnapshot/VMSnapshotHelper.java
index 2e7e13b..35153a1 100644
--- a/engine/storage/src/main/java/org/apache/cloudstack/storage/vmsnapshot/VMSnapshotHelper.java
+++ b/engine/storage/src/main/java/org/apache/cloudstack/storage/vmsnapshot/VMSnapshotHelper.java
@@ -23,6 +23,7 @@ import java.util.List;
 import org.apache.cloudstack.storage.to.VolumeObjectTO;
 
 import com.cloud.agent.api.VMSnapshotTO;
+import com.cloud.storage.Storage;
 import com.cloud.utils.fsm.NoTransitionException;
 import com.cloud.vm.snapshot.VMSnapshot;
 import com.cloud.vm.snapshot.VMSnapshotVO;
@@ -35,4 +36,8 @@ public interface VMSnapshotHelper {
     List<VolumeObjectTO> getVolumeTOList(Long vmId);
 
     VMSnapshotTO getSnapshotWithParents(VMSnapshotVO snapshot);
+
+    Long getStoragePoolForVM(Long vmId);
+
+    Storage.StoragePoolType getStoragePoolType(Long poolId);
 }
diff --git a/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/datastore/PrimaryDataStoreImpl.java b/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/datastore/PrimaryDataStoreImpl.java
index 18a7f3c..f557ac3 100644
--- a/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/datastore/PrimaryDataStoreImpl.java
+++ b/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/datastore/PrimaryDataStoreImpl.java
@@ -203,8 +203,7 @@ public class PrimaryDataStoreImpl implements PrimaryDataStore {
 
     @Override
     public String getName() {
-        // TODO Auto-generated method stub
-        return null;
+        return pdsv.getName();
     }
 
     @Override
diff --git a/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeObject.java b/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeObject.java
index 9750fb1..5ec9cfb 100644
--- a/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeObject.java
+++ b/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeObject.java
@@ -29,11 +29,9 @@ import com.cloud.storage.VolumeDetailVO;
 import com.cloud.storage.dao.VMTemplateDao;
 import com.cloud.storage.dao.VolumeDetailsDao;
 import com.cloud.vm.VmDetailConstants;
-import org.apache.cloudstack.api.ApiConstants;
-import org.apache.cloudstack.resourcedetail.dao.DiskOfferingDetailsDao;
-import org.apache.commons.lang.StringUtils;
-import org.apache.log4j.Logger;
 
+import org.apache.cloudstack.resourcedetail.dao.DiskOfferingDetailsDao;
+import org.apache.cloudstack.api.ApiConstants;
 import org.apache.cloudstack.engine.subsystem.api.storage.DataObjectInStore;
 import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
 import org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine;
@@ -44,6 +42,8 @@ import org.apache.cloudstack.storage.datastore.ObjectInDataStoreManager;
 import org.apache.cloudstack.storage.datastore.db.VolumeDataStoreDao;
 import org.apache.cloudstack.storage.datastore.db.VolumeDataStoreVO;
 import org.apache.cloudstack.storage.to.VolumeObjectTO;
+import org.apache.commons.lang.StringUtils;
+import org.apache.log4j.Logger;
 
 import com.cloud.agent.api.Answer;
 import com.cloud.agent.api.storage.DownloadAnswer;
@@ -53,6 +53,7 @@ import com.cloud.hypervisor.Hypervisor.HypervisorType;
 import com.cloud.offering.DiskOffering.DiskCacheMode;
 import com.cloud.storage.DataStoreRole;
 import com.cloud.storage.DiskOfferingVO;
+import com.cloud.storage.Storage;
 import com.cloud.storage.Storage.ImageFormat;
 import com.cloud.storage.Storage.ProvisioningType;
 import com.cloud.storage.Volume;
@@ -626,6 +627,11 @@ public class VolumeObject implements VolumeInfo {
     }
 
     @Override
+    public Storage.StoragePoolType getStoragePoolType() {
+        return volumeVO.getPoolType();
+    }
+
+    @Override
     public Long getLastPoolId() {
         return volumeVO.getLastPoolId();
     }
diff --git a/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java b/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java
index 5e3493a..68940d4 100644
--- a/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java
+++ b/engine/storage/volume/src/main/java/org/apache/cloudstack/storage/volume/VolumeServiceImpl.java
@@ -31,6 +31,7 @@ import javax.inject.Inject;
 import com.cloud.storage.VMTemplateVO;
 import com.cloud.storage.dao.VMTemplateDao;
 import org.apache.cloudstack.engine.cloud.entity.api.VolumeEntity;
+import org.apache.cloudstack.engine.orchestration.service.VolumeOrchestrationService;
 import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
 import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult;
 import org.apache.cloudstack.engine.subsystem.api.storage.CreateCmdResult;
@@ -47,6 +48,7 @@ import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStore;
 import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreDriver;
 import org.apache.cloudstack.engine.subsystem.api.storage.Scope;
 import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.TemplateDataFactory;
 import org.apache.cloudstack.engine.subsystem.api.storage.TemplateInfo;
 import org.apache.cloudstack.engine.subsystem.api.storage.VolumeDataFactory;
 import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
@@ -64,6 +66,8 @@ import org.apache.cloudstack.storage.datastore.PrimaryDataStoreProviderManager;
 import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
 import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
 import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreVO;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailVO;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
 import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
 import org.apache.cloudstack.storage.datastore.db.VolumeDataStoreDao;
 import org.apache.cloudstack.storage.datastore.db.VolumeDataStoreVO;
@@ -88,6 +92,7 @@ import com.cloud.dc.dao.ClusterDao;
 import com.cloud.event.EventTypes;
 import com.cloud.event.UsageEventUtils;
 import com.cloud.exception.ResourceAllocationException;
+import com.cloud.exception.StorageAccessException;
 import com.cloud.host.Host;
 import com.cloud.host.HostVO;
 import com.cloud.host.dao.HostDao;
@@ -101,13 +106,16 @@ import com.cloud.server.ManagementService;
 import com.cloud.storage.DataStoreRole;
 import com.cloud.storage.RegisterVolumePayload;
 import com.cloud.storage.ScopeType;
+import com.cloud.storage.Storage;
 import com.cloud.storage.Storage.StoragePoolType;
+import com.cloud.storage.StorageManager;
 import com.cloud.storage.StoragePool;
 import com.cloud.storage.VMTemplateStoragePoolVO;
 import com.cloud.storage.VMTemplateStorageResourceAssoc;
 import com.cloud.storage.VMTemplateStorageResourceAssoc.Status;
 import com.cloud.storage.Volume;
 import com.cloud.storage.Volume.State;
+import com.cloud.storage.VolumeDetailVO;
 import com.cloud.storage.VolumeVO;
 import com.cloud.storage.dao.VMTemplatePoolDao;
 import com.cloud.storage.dao.VolumeDao;
@@ -122,6 +130,7 @@ import com.cloud.utils.db.DB;
 import com.cloud.utils.db.GlobalLock;
 import com.cloud.utils.exception.CloudRuntimeException;
 import com.cloud.vm.VirtualMachine;
+import com.google.common.base.Strings;
 
 import static com.cloud.storage.resource.StorageProcessor.REQUEST_TEMPLATE_RELOAD;
 
@@ -163,6 +172,8 @@ public class VolumeServiceImpl implements VolumeService {
     @Inject
     private PrimaryDataStoreDao storagePoolDao;
     @Inject
+    private StoragePoolDetailsDao _storagePoolDetailsDao;
+    @Inject
     private HostDetailsDao hostDetailsDao;
     @Inject
     private ManagementService mgr;
@@ -172,6 +183,12 @@ public class VolumeServiceImpl implements VolumeService {
     private VolumeDetailsDao _volumeDetailsDao;
     @Inject
     private VMTemplateDao templateDao;
+    @Inject
+    private TemplateDataFactory tmplFactory;
+    @Inject
+    private VolumeOrchestrationService _volumeMgr;
+    @Inject
+    private StorageManager _storageMgr;
 
     private final static String SNAPSHOT_ID = "SNAPSHOT_ID";
 
@@ -380,6 +397,14 @@ public class VolumeServiceImpl implements VolumeService {
         return future;
     }
 
+    public void ensureVolumeIsExpungeReady(long volumeId) {
+        VolumeVO volume = volDao.findById(volumeId);
+        if (volume != null && volume.getPodId() != null) {
+            volume.setPodId(null);
+            volDao.update(volumeId, volume);
+        }
+    }
+
     private boolean volumeExistsOnPrimary(VolumeVO vol) {
         Long poolId = vol.getPoolId();
 
@@ -794,6 +819,39 @@ public class VolumeServiceImpl implements VolumeService {
         return null;
     }
 
+    @DB
+    protected Void createVolumeFromBaseManagedImageCallBack(AsyncCallbackDispatcher<VolumeServiceImpl, CopyCommandResult> callback, CreateVolumeFromBaseImageContext<VolumeApiResult> context) {
+        CopyCommandResult result = callback.getResult();
+        DataObject vo = context.vo;
+        DataObject tmplOnPrimary = context.templateOnStore;
+        VolumeApiResult volResult = new VolumeApiResult((VolumeObject)vo);
+
+        if (result.isSuccess()) {
+            VolumeVO volume = volDao.findById(vo.getId());
+            CopyCmdAnswer answer = (CopyCmdAnswer)result.getAnswer();
+            VolumeObjectTO volumeObjectTo = (VolumeObjectTO)answer.getNewData();
+            volume.setPath(volumeObjectTo.getPath());
+            if (volumeObjectTo.getFormat() != null) {
+                volume.setFormat(volumeObjectTo.getFormat());
+            }
+
+            volDao.update(volume.getId(), volume);
+            vo.processEvent(Event.OperationSuccessed);
+        } else {
+            volResult.setResult(result.getResult());
+
+            try {
+                destroyAndReallocateManagedVolume((VolumeInfo) vo);
+            } catch (CloudRuntimeException ex) {
+                s_logger.warn("Couldn't destroy managed volume: " + vo.getId());
+            }
+        }
+
+        AsyncCallFuture<VolumeApiResult> future = context.getFuture();
+        future.complete(volResult);
+        return null;
+    }
+
     /**
      * Creates a template volume on managed storage, which will be used for creating ROOT volumes by cloning.
      *
@@ -809,6 +867,9 @@ public class VolumeServiceImpl implements VolumeService {
 
         if (templatePoolRef == null) {
             throw new CloudRuntimeException("Failed to find template " + srcTemplateInfo.getUniqueName() + " in storage pool " + destPrimaryDataStore.getId());
+        } else if (templatePoolRef.getState() == ObjectInDataStoreStateMachine.State.Ready) {
+            // Template already exists
+            return templateOnPrimary;
         }
 
         // At this point, we have an entry in the DB that points to our cached template.
@@ -824,13 +885,6 @@ public class VolumeServiceImpl implements VolumeService {
             throw new CloudRuntimeException("Unable to acquire lock on VMTemplateStoragePool: " + templatePoolRefId);
         }
 
-        // Template already exists
-        if (templatePoolRef.getState() == ObjectInDataStoreStateMachine.State.Ready) {
-            _tmpltPoolDao.releaseFromLockTable(templatePoolRefId);
-
-            return templateOnPrimary;
-        }
-
         try {
             // create a cache volume on the back-end
 
@@ -875,27 +929,25 @@ public class VolumeServiceImpl implements VolumeService {
      * @param destHost The host that we will use for the copy
      */
     private void copyTemplateToManagedTemplateVolume(TemplateInfo srcTemplateInfo, TemplateInfo templateOnPrimary, VMTemplateStoragePoolVO templatePoolRef, PrimaryDataStore destPrimaryDataStore,
-                                                     Host destHost) {
+            Host destHost) throws StorageAccessException {
         AsyncCallFuture<VolumeApiResult> copyTemplateFuture = new AsyncCallFuture<>();
         int storagePoolMaxWaitSeconds = NumbersUtil.parseInt(configDao.getValue(Config.StoragePoolMaxWaitSeconds.key()), 3600);
         long templatePoolRefId = templatePoolRef.getId();
 
-        templatePoolRef = _tmpltPoolDao.acquireInLockTable(templatePoolRefId, storagePoolMaxWaitSeconds);
-
-        if (templatePoolRef == null) {
-            throw new CloudRuntimeException("Unable to acquire lock on VMTemplateStoragePool: " + templatePoolRefId);
-        }
-
-        if (templatePoolRef.getDownloadState() == Status.DOWNLOADED) {
-            // There can be cases where we acquired the lock, but the template
-            // was already copied by a previous thread. Just return in that case.
+        try {
+            templatePoolRef = _tmpltPoolDao.acquireInLockTable(templatePoolRefId, storagePoolMaxWaitSeconds);
 
-            s_logger.debug("Template already downloaded, nothing to do");
+            if (templatePoolRef == null) {
+                throw new CloudRuntimeException("Unable to acquire lock on VMTemplateStoragePool: " + templatePoolRefId);
+            }
 
-            return;
-        }
+            if (templatePoolRef.getDownloadState() == Status.DOWNLOADED) {
+                // There can be cases where we acquired the lock, but the template
+                // was already copied by a previous thread. Just return in that case.
+                s_logger.debug("Template already downloaded, nothing to do");
+                return;
+            }
 
-        try {
             // copy the template from sec storage to the created volume
             CreateBaseImageContext<CreateCmdResult> copyContext = new CreateBaseImageContext<>(null, null, destPrimaryDataStore, srcTemplateInfo, copyTemplateFuture, templateOnPrimary,
                     templatePoolRefId);
@@ -913,6 +965,7 @@ public class VolumeServiceImpl implements VolumeService {
             details.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, srcTemplateInfo.getUniqueName());
             details.put(PrimaryDataStore.REMOVE_AFTER_COPY, Boolean.TRUE.toString());
             details.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(templateOnPrimary.getSize()));
+            details.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(destPrimaryDataStore.getId())));
 
             ChapInfo chapInfo = getChapInfo(templateOnPrimary, destPrimaryDataStore);
 
@@ -923,11 +976,15 @@ public class VolumeServiceImpl implements VolumeService {
                 details.put(PrimaryDataStore.CHAP_TARGET_SECRET, chapInfo.getTargetSecret());
             }
 
-            templateOnPrimary.processEvent(Event.CopyingRequested);
-
             destPrimaryDataStore.setDetails(details);
 
-            grantAccess(templateOnPrimary, destHost, destPrimaryDataStore);
+            try {
+                grantAccess(templateOnPrimary, destHost, destPrimaryDataStore);
+            } catch (Exception e) {
+                throw new StorageAccessException("Unable to grant access to template: " + templateOnPrimary.getId() + " on host: " + destHost.getId());
+            }
+
+            templateOnPrimary.processEvent(Event.CopyingRequested);
 
             VolumeApiResult result;
 
@@ -955,6 +1012,8 @@ public class VolumeServiceImpl implements VolumeService {
                 // something weird happens to the volume (XenServer creates an SR, but the VDI copy can fail).
                 // For now, I just retry the copy.
             }
+        } catch (StorageAccessException e) {
+            throw e;
         } catch (Throwable e) {
             s_logger.debug("Failed to create a template on primary storage", e);
 
@@ -1031,6 +1090,126 @@ public class VolumeServiceImpl implements VolumeService {
         }
     }
 
+    private void createManagedVolumeCopyManagedTemplateAsync(VolumeInfo volumeInfo, PrimaryDataStore destPrimaryDataStore, TemplateInfo srcTemplateOnPrimary, Host destHost, AsyncCallFuture<VolumeApiResult> future) throws StorageAccessException {
+        VMTemplateStoragePoolVO templatePoolRef = _tmpltPoolDao.findByPoolTemplate(destPrimaryDataStore.getId(), srcTemplateOnPrimary.getId(), null);
+
+        if (templatePoolRef == null) {
+            throw new CloudRuntimeException("Failed to find template " + srcTemplateOnPrimary.getUniqueName() + " in storage pool " + srcTemplateOnPrimary.getId());
+        }
+
+        if (templatePoolRef.getDownloadState() == Status.NOT_DOWNLOADED) {
+            throw new CloudRuntimeException("Template " + srcTemplateOnPrimary.getUniqueName() + " has not been downloaded to primary storage.");
+        }
+
+        String volumeDetailKey = "POOL_TEMPLATE_ID_COPY_ON_HOST_" + destHost.getId();
+
+        try {
+            try {
+                grantAccess(srcTemplateOnPrimary, destHost, destPrimaryDataStore);
+            } catch (Exception e) {
+                throw new StorageAccessException("Unable to grant access to src template: " + srcTemplateOnPrimary.getId() + " on host: " + destHost.getId());
+            }
+
+            _volumeDetailsDao.addDetail(volumeInfo.getId(), volumeDetailKey, String.valueOf(templatePoolRef.getId()), false);
+
+            // Create a volume on managed storage.
+            AsyncCallFuture<VolumeApiResult> createVolumeFuture = createVolumeAsync(volumeInfo, destPrimaryDataStore);
+            VolumeApiResult createVolumeResult = createVolumeFuture.get();
+
+            if (createVolumeResult.isFailed()) {
+                throw new CloudRuntimeException("Creation of a volume failed: " + createVolumeResult.getResult());
+            }
+
+            // Refresh the volume info from the DB.
+            volumeInfo = volFactory.getVolume(volumeInfo.getId(), destPrimaryDataStore);
+
+            volumeInfo.processEvent(Event.CreateRequested);
+            CreateVolumeFromBaseImageContext<VolumeApiResult> context = new CreateVolumeFromBaseImageContext<>(null, volumeInfo, destPrimaryDataStore, srcTemplateOnPrimary, future, null, null);
+            AsyncCallbackDispatcher<VolumeServiceImpl, CopyCommandResult> caller = AsyncCallbackDispatcher.create(this);
+            caller.setCallback(caller.getTarget().createVolumeFromBaseManagedImageCallBack(null, null));
+            caller.setContext(context);
+
+            Map<String, String> details = new HashMap<String, String>();
+            details.put(PrimaryDataStore.MANAGED, Boolean.TRUE.toString());
+            details.put(PrimaryDataStore.STORAGE_HOST, destPrimaryDataStore.getHostAddress());
+            details.put(PrimaryDataStore.STORAGE_PORT, String.valueOf(destPrimaryDataStore.getPort()));
+            details.put(PrimaryDataStore.MANAGED_STORE_TARGET, volumeInfo.get_iScsiName());
+            details.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, volumeInfo.getName());
+            details.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(volumeInfo.getSize()));
+            details.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(destPrimaryDataStore.getId())));
+            destPrimaryDataStore.setDetails(details);
+
+            grantAccess(volumeInfo, destHost, destPrimaryDataStore);
+
+            try {
+                motionSrv.copyAsync(srcTemplateOnPrimary, volumeInfo, destHost, caller);
+            } finally {
+                revokeAccess(volumeInfo, destHost, destPrimaryDataStore);
+            }
+        } catch (StorageAccessException e) {
+            throw e;
+        } catch (Throwable e) {
+            s_logger.debug("Failed to copy managed template on primary storage", e);
+            String errMsg = "Failed due to " + e.toString();
+
+            try {
+                destroyAndReallocateManagedVolume(volumeInfo);
+            } catch (CloudRuntimeException ex) {
+                s_logger.warn("Failed to destroy managed volume: " + volumeInfo.getId());
+                errMsg += " : " + ex.getMessage();
+            }
+
+            VolumeApiResult result = new VolumeApiResult(volumeInfo);
+            result.setResult(errMsg);
+            future.complete(result);
+        } finally {
+            _volumeDetailsDao.removeDetail(volumeInfo.getId(), volumeDetailKey);
+
+            List<VolumeDetailVO> volumeDetails = _volumeDetailsDao.findDetails(volumeDetailKey, String.valueOf(templatePoolRef.getId()), false);
+            if (volumeDetails == null || volumeDetails.isEmpty()) {
+                revokeAccess(srcTemplateOnPrimary, destHost, destPrimaryDataStore);
+            }
+        }
+    }
+
+    private void destroyAndReallocateManagedVolume(VolumeInfo volumeInfo) {
+        if (volumeInfo == null) {
+            return;
+        }
+
+        VolumeVO volume = volDao.findById(volumeInfo.getId());
+        if (volume == null) {
+            return;
+        }
+
+        if (volume.getState() == State.Allocated) { // Possible states here: Allocated, Ready & Creating
+            return;
+        }
+
+        volumeInfo.processEvent(Event.DestroyRequested);
+
+        Volume newVol = _volumeMgr.allocateDuplicateVolume(volume, null);
+        VolumeVO newVolume = (VolumeVO) newVol;
+        newVolume.set_iScsiName(null);
+        volDao.update(newVolume.getId(), newVolume);
+        s_logger.debug("Allocated new volume: " + newVolume.getId() + " for the VM: " + volume.getInstanceId());
+
+        try {
+            AsyncCallFuture<VolumeApiResult> expungeVolumeFuture = expungeVolumeAsync(volumeInfo);
+            VolumeApiResult expungeVolumeResult = expungeVolumeFuture.get();
+            if (expungeVolumeResult.isFailed()) {
+                s_logger.warn("Failed to expunge volume: " + volumeInfo.getId() + " that was created");
+                throw new CloudRuntimeException("Failed to expunge volume: " + volumeInfo.getId() + " that was created");
+            }
+        } catch (Exception ex) {
+            if (canVolumeBeRemoved(volumeInfo.getId())) {
+                volDao.remove(volumeInfo.getId());
+            }
+            s_logger.warn("Unable to expunge volume: " + volumeInfo.getId() + " due to: " + ex.getMessage());
+            throw new CloudRuntimeException("Unable to expunge volume: " + volumeInfo.getId() + " due to: " + ex.getMessage());
+        }
+    }
+
     private void createManagedVolumeCopyTemplateAsync(VolumeInfo volumeInfo, PrimaryDataStore primaryDataStore, TemplateInfo srcTemplateInfo, Host destHost, AsyncCallFuture<VolumeApiResult> future) {
         try {
             // Create a volume on managed storage.
@@ -1061,6 +1240,7 @@ public class VolumeServiceImpl implements VolumeService {
             details.put(PrimaryDataStore.MANAGED_STORE_TARGET, volumeInfo.get_iScsiName());
             details.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, volumeInfo.getName());
             details.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(volumeInfo.getSize()));
+            details.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(primaryDataStore.getId())));
 
             ChapInfo chapInfo = getChapInfo(volumeInfo, primaryDataStore);
 
@@ -1106,7 +1286,109 @@ public class VolumeServiceImpl implements VolumeService {
     }
 
     @Override
-    public AsyncCallFuture<VolumeApiResult> createManagedStorageVolumeFromTemplateAsync(VolumeInfo volumeInfo, long destDataStoreId, TemplateInfo srcTemplateInfo, long destHostId) {
+    public TemplateInfo createManagedStorageTemplate(long srcTemplateId, long destDataStoreId, long destHostId) throws StorageAccessException {
+        Host destHost = _hostDao.findById(destHostId);
+        if (destHost == null) {
+            throw new CloudRuntimeException("Destination host should not be null.");
+        }
+
+        TemplateInfo srcTemplateInfo = tmplFactory.getTemplate(srcTemplateId);
+        if (srcTemplateInfo == null) {
+            throw new CloudRuntimeException("Failed to get info of template: " + srcTemplateId);
+        }
+
+        if (Storage.ImageFormat.ISO.equals(srcTemplateInfo.getFormat())) {
+            throw new CloudRuntimeException("Unsupported format: " + Storage.ImageFormat.ISO.toString() + " for managed storage template");
+        }
+
+        GlobalLock lock = null;
+        TemplateInfo templateOnPrimary = null;
+        try {
+            String templateIdManagedPoolIdLockString = "templateId:" + srcTemplateId + "managedPoolId:" + destDataStoreId;
+            lock = GlobalLock.getInternLock(templateIdManagedPoolIdLockString);
+            if (lock == null) {
+                throw new CloudRuntimeException("Unable to create managed storage template, couldn't get global lock on " + templateIdManagedPoolIdLockString);
+            }
+
+            int storagePoolMaxWaitSeconds = NumbersUtil.parseInt(configDao.getValue(Config.StoragePoolMaxWaitSeconds.key()), 3600);
+            if (!lock.lock(storagePoolMaxWaitSeconds)) {
+                s_logger.debug("Unable to create managed storage template, couldn't lock on " + templateIdManagedPoolIdLockString);
+                throw new CloudRuntimeException("Unable to create managed storage template, couldn't lock on " + templateIdManagedPoolIdLockString);
+            }
+
+            PrimaryDataStore destPrimaryDataStore = dataStoreMgr.getPrimaryDataStore(destDataStoreId);
+
+            // Check if the template exists on the storage pool. If not, download and copy it to the managed storage pool
+            VMTemplateStoragePoolVO templatePoolRef = _tmpltPoolDao.findByPoolTemplate(destDataStoreId, srcTemplateId, null);
+            if (templatePoolRef != null && templatePoolRef.getDownloadState() == Status.DOWNLOADED) {
+                return tmplFactory.getTemplate(srcTemplateId, destPrimaryDataStore);
+            }
+
+            templateOnPrimary = createManagedTemplateVolume(srcTemplateInfo, destPrimaryDataStore);
+            if (templateOnPrimary == null) {
+                throw new CloudRuntimeException("Failed to create template " + srcTemplateInfo.getUniqueName() + " on primary storage: " + destDataStoreId);
+            }
+
+            templatePoolRef = _tmpltPoolDao.findByPoolTemplate(destPrimaryDataStore.getId(), templateOnPrimary.getId(), null);
+            if (templatePoolRef == null) {
+                throw new CloudRuntimeException("Failed to find template " + srcTemplateInfo.getUniqueName() + " in storage pool " + destPrimaryDataStore.getId());
+            }
+
+            if (templatePoolRef.getDownloadState() == Status.NOT_DOWNLOADED) {
+                // Populate details which will be later read by the storage subsystem.
+                Map<String, String> details = new HashMap<>();
+
+                details.put(PrimaryDataStore.MANAGED, Boolean.TRUE.toString());
+                details.put(PrimaryDataStore.STORAGE_HOST, destPrimaryDataStore.getHostAddress());
+                details.put(PrimaryDataStore.STORAGE_PORT, String.valueOf(destPrimaryDataStore.getPort()));
+                details.put(PrimaryDataStore.MANAGED_STORE_TARGET, templateOnPrimary.getInstallPath());
+                details.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, srcTemplateInfo.getUniqueName());
+                details.put(PrimaryDataStore.REMOVE_AFTER_COPY, Boolean.TRUE.toString());
+                details.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(templateOnPrimary.getSize()));
+                details.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(destPrimaryDataStore.getId())));
+                destPrimaryDataStore.setDetails(details);
+
+                try {
+                    grantAccess(templateOnPrimary, destHost, destPrimaryDataStore);
+                } catch (Exception e) {
+                    throw new StorageAccessException("Unable to grant access to template: " + templateOnPrimary.getId() + " on host: " + destHost.getId());
+                }
+
+                templateOnPrimary.processEvent(Event.CopyingRequested);
+
+                try {
+                    //Download and copy template to the managed volume
+                    TemplateInfo templateOnPrimaryNow = tmplFactory.getReadyBypassedTemplateOnManagedStorage(srcTemplateId, templateOnPrimary, destDataStoreId, destHostId);
+                    if (templateOnPrimaryNow == null) {
+                        s_logger.debug("Failed to prepare ready bypassed template: " + srcTemplateId + " on primary storage: " + destDataStoreId);
+                        throw new CloudRuntimeException("Failed to prepare ready bypassed template: " + srcTemplateId + " on primary storage: " + destDataStoreId);
+                    }
+                    templateOnPrimary.processEvent(Event.OperationSuccessed);
+                    return templateOnPrimaryNow;
+                } finally {
+                    revokeAccess(templateOnPrimary, destHost, destPrimaryDataStore);
+                }
+            }
+            return null;
+        } catch (StorageAccessException e) {
+            throw e;
+        } catch (Throwable e) {
+            s_logger.debug("Failed to create template on managed primary storage", e);
+            if (templateOnPrimary != null) {
+                templateOnPrimary.processEvent(Event.OperationFailed);
+            }
+
+            throw new CloudRuntimeException(e.getMessage());
+        } finally {
+            if (lock != null) {
+                lock.unlock();
+                lock.releaseRef();
+            }
+        }
+    }
+
+    @Override
+    public AsyncCallFuture<VolumeApiResult> createManagedStorageVolumeFromTemplateAsync(VolumeInfo volumeInfo, long destDataStoreId, TemplateInfo srcTemplateInfo, long destHostId) throws StorageAccessException {
         PrimaryDataStore destPrimaryDataStore = dataStoreMgr.getPrimaryDataStore(destDataStoreId);
         Host destHost = _hostDao.findById(destHostId);
 
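
createManagedStorageTemplate() above (and the reworked createManagedStorageVolumeFromTemplateAsync() in the next hunk) serializes template seeding per template/pool pair with a named GlobalLock, so concurrent deployments do not copy the same template twice. A condensed sketch of that pattern, mirroring the calls used above (the comment placeholder stands in for the state check and copy logic):

    // Sketch of the per-template, per-pool serialization used above.
    GlobalLock lock = GlobalLock.getInternLock("templateId:" + srcTemplateId + "managedPoolId:" + destDataStoreId);
    if (lock == null) {
        throw new CloudRuntimeException("Couldn't get global lock for template seeding");
    }
    try {
        int storagePoolMaxWaitSeconds = NumbersUtil.parseInt(configDao.getValue(Config.StoragePoolMaxWaitSeconds.key()), 3600);
        if (!lock.lock(storagePoolMaxWaitSeconds)) {
            throw new CloudRuntimeException("Couldn't lock template seeding for pool: " + destDataStoreId);
        }
        // ... check the VMTemplateStoragePoolVO download state and copy the template exactly once ...
    } finally {
        lock.unlock();
        lock.releaseRef();
    }
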
@@ -1123,31 +1405,59 @@ public class VolumeServiceImpl implements VolumeService {
         if (storageCanCloneVolume && computeSupportsVolumeClone) {
             s_logger.debug("Storage " + destDataStoreId + " can support cloning using a cached template and compute side is OK with volume cloning.");
 
-            TemplateInfo templateOnPrimary = destPrimaryDataStore.getTemplate(srcTemplateInfo.getId(), null);
+            GlobalLock lock = null;
+            TemplateInfo templateOnPrimary = null;
 
-            if (templateOnPrimary == null) {
-                templateOnPrimary = createManagedTemplateVolume(srcTemplateInfo, destPrimaryDataStore);
+            try {
+                String tmplIdManagedPoolIdLockString = "tmplId:" + srcTemplateInfo.getId() + "managedPoolId:" + destDataStoreId;
+                lock = GlobalLock.getInternLock(tmplIdManagedPoolIdLockString);
+                if (lock == null) {
+                    throw new CloudRuntimeException("Unable to create managed storage template/volume, couldn't get global lock on " + tmplIdManagedPoolIdLockString);
+                }
+
+                int storagePoolMaxWaitSeconds = NumbersUtil.parseInt(configDao.getValue(Config.StoragePoolMaxWaitSeconds.key()), 3600);
+                if (!lock.lock(storagePoolMaxWaitSeconds)) {
+                    s_logger.debug("Unable to create managed storage template/volume, couldn't lock on " + tmplIdManagedPoolIdLockString);
+                    throw new CloudRuntimeException("Unable to create managed storage template/volume, couldn't lock on " + tmplIdManagedPoolIdLockString);
+                }
+
+                templateOnPrimary = destPrimaryDataStore.getTemplate(srcTemplateInfo.getId(), null);
 
                 if (templateOnPrimary == null) {
-                    throw new CloudRuntimeException("Failed to create template " + srcTemplateInfo.getUniqueName() + " on primary storage: " + destDataStoreId);
+                    templateOnPrimary = createManagedTemplateVolume(srcTemplateInfo, destPrimaryDataStore);
+
+                    if (templateOnPrimary == null) {
+                        throw new CloudRuntimeException("Failed to create template " + srcTemplateInfo.getUniqueName() + " on primary storage: " + destDataStoreId);
+                    }
                 }
-            }
 
-            // Copy the template to the template volume.
-            VMTemplateStoragePoolVO templatePoolRef = _tmpltPoolDao.findByPoolTemplate(destPrimaryDataStore.getId(), templateOnPrimary.getId(), null);
+                // Copy the template to the template volume.
+                VMTemplateStoragePoolVO templatePoolRef = _tmpltPoolDao.findByPoolTemplate(destPrimaryDataStore.getId(), templateOnPrimary.getId(), null);
 
-            if (templatePoolRef == null) {
-                throw new CloudRuntimeException("Failed to find template " + srcTemplateInfo.getUniqueName() + " in storage pool " + destPrimaryDataStore.getId());
-            }
+                if (templatePoolRef == null) {
+                    throw new CloudRuntimeException("Failed to find template " + srcTemplateInfo.getUniqueName() + " in storage pool " + destPrimaryDataStore.getId());
+                }
 
-            if (templatePoolRef.getDownloadState() == Status.NOT_DOWNLOADED) {
-                copyTemplateToManagedTemplateVolume(srcTemplateInfo, templateOnPrimary, templatePoolRef, destPrimaryDataStore, destHost);
+                if (templatePoolRef.getDownloadState() == Status.NOT_DOWNLOADED) {
+                    copyTemplateToManagedTemplateVolume(srcTemplateInfo, templateOnPrimary, templatePoolRef, destPrimaryDataStore, destHost);
+                }
+            } finally {
+                if (lock != null) {
+                    lock.unlock();
+                    lock.releaseRef();
+                }
             }
 
-            // We have a template on primary storage. Clone it to new volume.
-            s_logger.debug("Creating a clone from template on primary storage " + destDataStoreId);
+            if (destPrimaryDataStore.getPoolType() != StoragePoolType.PowerFlex) {
+                // We have a template on primary storage. Clone it to new volume.
+                s_logger.debug("Creating a clone from template on primary storage " + destDataStoreId);
 
-            createManagedVolumeCloneTemplateAsync(volumeInfo, templateOnPrimary, destPrimaryDataStore, future);
+                createManagedVolumeCloneTemplateAsync(volumeInfo, templateOnPrimary, destPrimaryDataStore, future);
+            } else {
+                // We have a template on PowerFlex primary storage. Create new volume and copy to it.
+                s_logger.debug("Copying the template to the volume on primary storage");
+                createManagedVolumeCopyManagedTemplateAsync(volumeInfo, destPrimaryDataStore, templateOnPrimary, destHost, future);
+            }
         } else {
             s_logger.debug("Primary storage does not support cloning or no support for UUID resigning on the host side; copying the template normally");
 
@@ -1300,6 +1610,8 @@ public class VolumeServiceImpl implements VolumeService {
         // part  here to make sure the credentials do not get stored in the db unencrypted.
         if (pool.getPoolType() == StoragePoolType.SMB && folder != null && folder.contains("?")) {
             folder = folder.substring(0, folder.indexOf("?"));
+        } else if (pool.getPoolType() == StoragePoolType.PowerFlex) {
+            folder = volume.getFolder();
         }
 
         VolumeVO newVol = new VolumeVO(volume);
@@ -1309,6 +1621,7 @@ public class VolumeServiceImpl implements VolumeService {
         newVol.setFolder(folder);
         newVol.setPodId(pool.getPodId());
         newVol.setPoolId(pool.getId());
+        newVol.setPoolType(pool.getPoolType());
         newVol.setLastPoolId(lastPoolId);
         newVol.setPodId(pool.getPodId());
         return volDao.persist(newVol);
@@ -1325,7 +1638,6 @@ public class VolumeServiceImpl implements VolumeService {
             this.destVolume = destVolume;
             this.future = future;
         }
-
     }
 
     protected AsyncCallFuture<VolumeApiResult> copyVolumeFromImageToPrimary(VolumeInfo srcVolume, DataStore destStore) {
@@ -1435,8 +1747,8 @@ public class VolumeServiceImpl implements VolumeService {
 
     @Override
     public AsyncCallFuture<VolumeApiResult> copyVolume(VolumeInfo srcVolume, DataStore destStore) {
+        DataStore srcStore = srcVolume.getDataStore();
         if (s_logger.isDebugEnabled()) {
-            DataStore srcStore = srcVolume.getDataStore();
             String srcRole = (srcStore != null && srcStore.getRole() != null ? srcVolume.getDataStore().getRole().toString() : "<unknown role>");
 
             String msg = String.format("copying %s(id=%d, role=%s) to %s (id=%d, role=%s)"
@@ -1457,6 +1769,11 @@ public class VolumeServiceImpl implements VolumeService {
             return copyVolumeFromPrimaryToImage(srcVolume, destStore);
         }
 
+        if (srcStore.getRole() == DataStoreRole.Primary && destStore.getRole() == DataStoreRole.Primary && ((PrimaryDataStore) destStore).isManaged() &&
+                requiresNewManagedVolumeInDestStore((PrimaryDataStore) srcStore, (PrimaryDataStore) destStore)) {
+            return copyManagedVolume(srcVolume, destStore);
+        }
+
         // OfflineVmwareMigration: aren't we missing secondary to secondary in this logic?
 
         AsyncCallFuture<VolumeApiResult> future = new AsyncCallFuture<VolumeApiResult>();
@@ -1502,6 +1819,14 @@ public class VolumeServiceImpl implements VolumeService {
                 destVolume.processEvent(Event.MigrationCopyFailed);
                 srcVolume.processEvent(Event.OperationFailed);
                 destroyVolume(destVolume.getId());
+                if (destVolume.getStoragePoolType() == StoragePoolType.PowerFlex) {
+                    s_logger.info("Dest volume " + destVolume.getId() + " can be removed");
+                    destVolume.processEvent(Event.ExpungeRequested);
+                    destVolume.processEvent(Event.OperationSuccessed);
+                    volDao.remove(destVolume.getId());
+                    future.complete(res);
+                    return null;
+                }
                 destVolume = volFactory.getVolume(destVolume.getId());
                 AsyncCallFuture<VolumeApiResult> destroyFuture = expungeVolumeAsync(destVolume);
                 destroyFuture.get();
@@ -1512,6 +1837,14 @@ public class VolumeServiceImpl implements VolumeService {
                 volDao.updateUuid(srcVolume.getId(), destVolume.getId());
                 try {
                     destroyVolume(srcVolume.getId());
+                    if (srcVolume.getStoragePoolType() == StoragePoolType.PowerFlex) {
+                        s_logger.info("Src volume " + srcVolume.getId() + " can be removed");
+                        srcVolume.processEvent(Event.ExpungeRequested);
+                        srcVolume.processEvent(Event.OperationSuccessed);
+                        volDao.remove(srcVolume.getId());
+                        future.complete(res);
+                        return null;
+                    }
                     srcVolume = volFactory.getVolume(srcVolume.getId());
                     AsyncCallFuture<VolumeApiResult> destroyFuture = expungeVolumeAsync(srcVolume);
                     // If volume destroy fails, this could be because of vdi is still in use state, so wait and retry.
@@ -1534,6 +1867,213 @@ public class VolumeServiceImpl implements VolumeService {
         return null;
     }
 
+    private class CopyManagedVolumeContext<T> extends AsyncRpcContext<T> {
+        final VolumeInfo srcVolume;
+        final VolumeInfo destVolume;
+        final Host host;
+        final AsyncCallFuture<VolumeApiResult> future;
+
+        public CopyManagedVolumeContext(AsyncCompletionCallback<T> callback, AsyncCallFuture<VolumeApiResult> future, VolumeInfo srcVolume, VolumeInfo destVolume, Host host) {
+            super(callback);
+            this.srcVolume = srcVolume;
+            this.destVolume = destVolume;
+            this.host = host;
+            this.future = future;
+        }
+    }
+
+    private AsyncCallFuture<VolumeApiResult> copyManagedVolume(VolumeInfo srcVolume, DataStore destStore) {
+        AsyncCallFuture<VolumeApiResult> future = new AsyncCallFuture<VolumeApiResult>();
+        VolumeApiResult res = new VolumeApiResult(srcVolume);
+        try {
+            if (!snapshotMgr.canOperateOnVolume(srcVolume)) {
+                s_logger.debug("There are snapshots creating for this volume, can not move this volume");
+                res.setResult("There are snapshots creating for this volume, can not move this volume");
+                future.complete(res);
+                return future;
+            }
+
+            if (snapshotMgr.backedUpSnapshotsExistsForVolume(srcVolume)) {
+                s_logger.debug("There are backed up snapshots for this volume, can not move.");
+                res.setResult("[UNSUPPORTED] There are backed up snapshots for this volume, can not move. Please try again after removing them.");
+                future.complete(res);
+                return future;
+            }
+
+            List<Long> poolIds = new ArrayList<Long>();
+            poolIds.add(srcVolume.getPoolId());
+            poolIds.add(destStore.getId());
+
+            Host hostWithPoolsAccess = _storageMgr.findUpAndEnabledHostWithAccessToStoragePools(poolIds);
+            if (hostWithPoolsAccess == null) {
+                s_logger.debug("No host(s) available with pool access, can not move this volume");
+                res.setResult("No host(s) available with pool access, can not move this volume");
+                future.complete(res);
+                return future;
+            }
+
+            VolumeVO destVol = duplicateVolumeOnAnotherStorage(srcVolume, (StoragePool)destStore);
+            VolumeInfo destVolume = volFactory.getVolume(destVol.getId(), destStore);
+
+            // Create a volume on managed storage.
+            AsyncCallFuture<VolumeApiResult> createVolumeFuture = createVolumeAsync(destVolume, destStore);
+            VolumeApiResult createVolumeResult = createVolumeFuture.get();
+            if (createVolumeResult.isFailed()) {
+                throw new CloudRuntimeException("Creation of a dest volume failed: " + createVolumeResult.getResult());
+            }
+
+            // Refresh the volume info from the DB.
+            destVolume = volFactory.getVolume(destVolume.getId(), destStore);
+
+            PrimaryDataStore srcPrimaryDataStore = (PrimaryDataStore) srcVolume.getDataStore();
+            if (srcPrimaryDataStore.isManaged()) {
+                Map<String, String> srcPrimaryDataStoreDetails = new HashMap<String, String>();
+                srcPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED, Boolean.TRUE.toString());
+                srcPrimaryDataStoreDetails.put(PrimaryDataStore.STORAGE_HOST, srcPrimaryDataStore.getHostAddress());
+                srcPrimaryDataStoreDetails.put(PrimaryDataStore.STORAGE_PORT, String.valueOf(srcPrimaryDataStore.getPort()));
+                srcPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED_STORE_TARGET, srcVolume.get_iScsiName());
+                srcPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, srcVolume.getName());
+                srcPrimaryDataStoreDetails.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(srcVolume.getSize()));
+                srcPrimaryDataStoreDetails.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(srcPrimaryDataStore.getId())));
+                srcPrimaryDataStore.setDetails(srcPrimaryDataStoreDetails);
+                grantAccess(srcVolume, hostWithPoolsAccess, srcVolume.getDataStore());
+            }
+
+            PrimaryDataStore destPrimaryDataStore = (PrimaryDataStore) destStore;
+            Map<String, String> destPrimaryDataStoreDetails = new HashMap<String, String>();
+            destPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED, Boolean.TRUE.toString());
+            destPrimaryDataStoreDetails.put(PrimaryDataStore.STORAGE_HOST, destPrimaryDataStore.getHostAddress());
+            destPrimaryDataStoreDetails.put(PrimaryDataStore.STORAGE_PORT, String.valueOf(destPrimaryDataStore.getPort()));
+            destPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED_STORE_TARGET, destVolume.get_iScsiName());
+            destPrimaryDataStoreDetails.put(PrimaryDataStore.MANAGED_STORE_TARGET_ROOT_VOLUME, destVolume.getName());
+            destPrimaryDataStoreDetails.put(PrimaryDataStore.VOLUME_SIZE, String.valueOf(destVolume.getSize()));
+            destPrimaryDataStoreDetails.put(StorageManager.STORAGE_POOL_DISK_WAIT.toString(), String.valueOf(StorageManager.STORAGE_POOL_DISK_WAIT.valueIn(destPrimaryDataStore.getId())));
+            destPrimaryDataStore.setDetails(destPrimaryDataStoreDetails);
+
+            grantAccess(destVolume, hostWithPoolsAccess, destStore);
+
+            destVolume.processEvent(Event.CreateRequested);
+            srcVolume.processEvent(Event.MigrationRequested);
+
+            CopyManagedVolumeContext<VolumeApiResult> context = new CopyManagedVolumeContext<VolumeApiResult>(null, future, srcVolume, destVolume, hostWithPoolsAccess);
+            AsyncCallbackDispatcher<VolumeServiceImpl, CopyCommandResult> caller = AsyncCallbackDispatcher.create(this);
+            caller.setCallback(caller.getTarget().copyManagedVolumeCallBack(null, null)).setContext(context);
+
+            motionSrv.copyAsync(srcVolume, destVolume, hostWithPoolsAccess, caller);
+        } catch (Exception e) {
+            s_logger.error("Copy to managed volume failed due to: " + e);
+            if (s_logger.isDebugEnabled()) {
+                s_logger.debug("Copy to managed volume failed.", e);
+            }
+            res.setResult(e.toString());
+            future.complete(res);
+        }
+
+        return future;
+    }
+
+    protected Void copyManagedVolumeCallBack(AsyncCallbackDispatcher<VolumeServiceImpl, CopyCommandResult> callback, CopyManagedVolumeContext<VolumeApiResult> context) {
+        VolumeInfo srcVolume = context.srcVolume;
+        VolumeInfo destVolume = context.destVolume;
+        Host host = context.host;
+        CopyCommandResult result = callback.getResult();
+        AsyncCallFuture<VolumeApiResult> future = context.future;
+        VolumeApiResult res = new VolumeApiResult(destVolume);
+
+        try {
+            if (srcVolume.getDataStore() != null && ((PrimaryDataStore) srcVolume.getDataStore()).isManaged()) {
+                revokeAccess(srcVolume, host, srcVolume.getDataStore());
+            }
+            revokeAccess(destVolume, host, destVolume.getDataStore());
+
+            if (result.isFailed()) {
+                res.setResult(result.getResult());
+                destVolume.processEvent(Event.MigrationCopyFailed);
+                srcVolume.processEvent(Event.OperationFailed);
+                try {
+                    destroyVolume(destVolume.getId());
+                    destVolume = volFactory.getVolume(destVolume.getId());
+                    AsyncCallFuture<VolumeApiResult> destVolumeDestroyFuture = expungeVolumeAsync(destVolume);
+                    destVolumeDestroyFuture.get();
+                    // If dest managed volume destroy fails, wait and retry.
+                    if (destVolumeDestroyFuture.get().isFailed()) {
+                        Thread.sleep(5 * 1000);
+                        destVolumeDestroyFuture = expungeVolumeAsync(destVolume);
+                        destVolumeDestroyFuture.get();
+                    }
+                    future.complete(res);
+                } catch (Exception e) {
+                    s_logger.debug("failed to clean up managed volume on storage", e);
+                }
+            } else {
+                srcVolume.processEvent(Event.OperationSuccessed);
+                destVolume.processEvent(Event.MigrationCopySucceeded, result.getAnswer());
+                volDao.updateUuid(srcVolume.getId(), destVolume.getId());
+                try {
+                    destroyVolume(srcVolume.getId());
+                    srcVolume = volFactory.getVolume(srcVolume.getId());
+                    AsyncCallFuture<VolumeApiResult> srcVolumeDestroyFuture = expungeVolumeAsync(srcVolume);
+                    // If src volume destroy fails, wait and retry.
+                    if (srcVolumeDestroyFuture.get().isFailed()) {
+                        Thread.sleep(5 * 1000);
+                        srcVolumeDestroyFuture = expungeVolumeAsync(srcVolume);
+                        srcVolumeDestroyFuture.get();
+                    }
+                    future.complete(res);
+                } catch (Exception e) {
+                    s_logger.debug("failed to clean up volume on storage", e);
+                }
+            }
+        } catch (Exception e) {
+            s_logger.debug("Failed to process copy managed volume callback", e);
+            res.setResult(e.toString());
+            future.complete(res);
+        }
+
+        return null;
+    }
+
+    private boolean requiresNewManagedVolumeInDestStore(PrimaryDataStore srcDataStore, PrimaryDataStore destDataStore) {
+        if (srcDataStore == null || destDataStore == null) {
+            s_logger.warn("Unable to check for new volume, either src or dest pool is null");
+            return false;
+        }
+
+        if (srcDataStore.getPoolType() == StoragePoolType.PowerFlex && destDataStore.getPoolType() == StoragePoolType.PowerFlex) {
+            if (srcDataStore.getId() == destDataStore.getId()) {
+                return false;
+            }
+
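+            // Compare the PowerFlex system IDs stored in the pool details; a new managed volume is only needed when the pools belong to different storage instances.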
+            final String STORAGE_POOL_SYSTEM_ID = "powerflex.storagepool.system.id";
+            String srcPoolSystemId = null;
+            StoragePoolDetailVO srcPoolSystemIdDetail = _storagePoolDetailsDao.findDetail(srcDataStore.getId(), STORAGE_POOL_SYSTEM_ID);
+            if (srcPoolSystemIdDetail != null) {
+                srcPoolSystemId = srcPoolSystemIdDetail.getValue();
+            }
+
+            String destPoolSystemId = null;
+            StoragePoolDetailVO destPoolSystemIdDetail = _storagePoolDetailsDao.findDetail(destDataStore.getId(), STORAGE_POOL_SYSTEM_ID);
+            if (destPoolSystemIdDetail != null) {
+                destPoolSystemId = destPoolSystemIdDetail.getValue();
+            }
+
+            if (Strings.isNullOrEmpty(srcPoolSystemId) || Strings.isNullOrEmpty(destPoolSystemId)) {
+                s_logger.warn("PowerFlex src pool: " + srcDataStore.getId() + " or dest pool: " + destDataStore.getId() +
+                        " storage instance details are not available");
+                return false;
+            }
+
+            if (!srcPoolSystemId.equals(destPoolSystemId)) {
+                s_logger.debug("PowerFlex src pool: " + srcDataStore.getId() + " and dest pool: " + destDataStore.getId() +
+                        " belong to different storage instances, creating a new managed volume");
+                return true;
+            }
+        }
+
+        // A new volume is not required in any other case (address additional cases here as needed in the future)
+        return false;
+    }
+
     private class MigrateVolumeContext<T> extends AsyncRpcContext<T> {
         final VolumeInfo srcVolume;
         final VolumeInfo destVolume;
@@ -1569,7 +2109,7 @@ public class VolumeServiceImpl implements VolumeService {
             caller.setCallback(caller.getTarget().migrateVolumeCallBack(null, null)).setContext(context);
             motionSrv.copyAsync(srcVolume, destVolume, caller);
         } catch (Exception e) {
-            s_logger.debug("Failed to copy volume", e);
+            s_logger.debug("Failed to migrate volume", e);
             res.setResult(e.toString());
             future.complete(res);
         }
@@ -1588,6 +2128,10 @@ public class VolumeServiceImpl implements VolumeService {
                 future.complete(res);
             } else {
                 srcVolume.processEvent(Event.OperationSuccessed);
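+                // For PowerFlex volumes, skip the snapshot cleanup step and complete the migration here.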
+                if (srcVolume.getStoragePoolType() == StoragePoolType.PowerFlex) {
+                    future.complete(res);
+                    return null;
+                }
                 snapshotMgr.cleanupSnapshotsByVolume(srcVolume.getId());
                 future.complete(res);
             }
@@ -2139,4 +2683,4 @@ public class VolumeServiceImpl implements VolumeService {
             volDao.remove(vol.getId());
         }
     }
-}
\ No newline at end of file
+}
diff --git a/framework/direct-download/src/main/java/org/apache/cloudstack/framework/agent/direct/download/DirectDownloadService.java b/framework/direct-download/src/main/java/org/apache/cloudstack/framework/agent/direct/download/DirectDownloadService.java
index ed7bbd7..983f935 100644
--- a/framework/direct-download/src/main/java/org/apache/cloudstack/framework/agent/direct/download/DirectDownloadService.java
+++ b/framework/direct-download/src/main/java/org/apache/cloudstack/framework/agent/direct/download/DirectDownloadService.java
@@ -33,4 +33,9 @@ public interface DirectDownloadService {
      * Upload a stored certificate on database with id 'certificateId' to host with id 'hostId'
      */
     boolean uploadCertificate(long certificateId, long hostId);
+
+    /**
+     * Sync the stored certificates to host with id 'hostId'
+     */
+    boolean syncCertificatesToHost(long hostId, long zoneId);
 }
diff --git a/plugins/hypervisors/kvm/pom.xml b/plugins/hypervisors/kvm/pom.xml
index a98a0db..394acdc 100644
--- a/plugins/hypervisors/kvm/pom.xml
+++ b/plugins/hypervisors/kvm/pom.xml
@@ -72,6 +72,12 @@
             <artifactId>jna-platform</artifactId>
             <version>${cs.jna.version}</version>
         </dependency>
+        <dependency>
+            <groupId>org.apache.cloudstack</groupId>
+            <artifactId>cloud-plugin-storage-volume-scaleio</artifactId>
+            <version>${project.version}</version>
+            <scope>compile</scope>
+        </dependency>
     </dependencies>
     <build>
         <plugins>
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
index 5804c37..cfa5474 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
@@ -46,9 +46,7 @@ import javax.xml.parsers.DocumentBuilder;
 import javax.xml.parsers.DocumentBuilderFactory;
 import javax.xml.parsers.ParserConfigurationException;
 
-import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceAgentExecutor;
-import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceExecutor;
-import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceServiceExecutor;
+import org.apache.cloudstack.storage.configdrive.ConfigDrive;
 import org.apache.cloudstack.storage.to.PrimaryDataStoreTO;
 import org.apache.cloudstack.storage.to.TemplateObjectTO;
 import org.apache.cloudstack.storage.to.VolumeObjectTO;
@@ -91,6 +89,7 @@ import com.cloud.agent.api.HostVmStateReportEntry;
 import com.cloud.agent.api.PingCommand;
 import com.cloud.agent.api.PingRoutingCommand;
 import com.cloud.agent.api.PingRoutingWithNwGroupsCommand;
+import com.cloud.agent.api.SecurityGroupRulesCmd;
 import com.cloud.agent.api.SetupGuestNetworkCommand;
 import com.cloud.agent.api.StartupCommand;
 import com.cloud.agent.api.StartupRoutingCommand;
@@ -113,7 +112,6 @@ import com.cloud.agent.dao.impl.PropertiesStorage;
 import com.cloud.agent.resource.virtualnetwork.VRScripts;
 import com.cloud.agent.resource.virtualnetwork.VirtualRouterDeployer;
 import com.cloud.agent.resource.virtualnetwork.VirtualRoutingResource;
-import com.cloud.agent.api.SecurityGroupRulesCmd;
 import com.cloud.dc.Vlan;
 import com.cloud.exception.InternalErrorException;
 import com.cloud.host.Host.Type;
@@ -146,6 +144,9 @@ import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.VideoDef;
 import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef;
 import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef.WatchDogAction;
 import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef.WatchDogModel;
+import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceAgentExecutor;
+import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceExecutor;
+import com.cloud.hypervisor.kvm.resource.rolling.maintenance.RollingMaintenanceServiceExecutor;
 import com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper;
 import com.cloud.hypervisor.kvm.resource.wrapper.LibvirtUtilitiesHelper;
 import com.cloud.hypervisor.kvm.storage.IscsiStorageCleanupMonitor;
@@ -239,6 +240,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
     public static final String SSHPUBKEYPATH = SSHKEYSPATH + File.separator + "id_rsa.pub.cloud";
     public static final String DEFAULTDOMRSSHPORT = "3922";
 
+    public final static String HOST_CACHE_PATH_PARAMETER = "host.cache.location";
+    public final static String CONFIG_DIR = "config";
+
     public static final String BASH_SCRIPT_PATH = "/bin/bash";
 
     private String _mountPoint = "/mnt";
@@ -518,6 +522,14 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
         return directDownloadTemporaryDownloadPath;
     }
 
+    public String getConfigPath() {
+        return getCachePath() + "/" + CONFIG_DIR;
+    }
+
+    public String getCachePath() {
+        return cachePath;
+    }
+
     public String getResizeVolumePath() {
         return _resizeVolumePath;
     }
@@ -570,6 +582,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
     protected boolean dpdkSupport = false;
     protected String dpdkOvsPath;
     protected String directDownloadTemporaryDownloadPath;
+    protected String cachePath;
 
     private String getEndIpFromStartIp(final String startIp, final int numIps) {
         final String[] tokens = startIp.split("[.]");
@@ -621,6 +634,10 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
         return "/var/lib/libvirt/images";
     }
 
+    private String getDefaultCachePath() {
+        return "/var/cache/cloud";
+    }
+
     protected String getDefaultNetworkScriptsDir() {
         return "scripts/vm/network/vnet";
     }
@@ -710,6 +727,11 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
             directDownloadTemporaryDownloadPath = getDefaultDirectDownloadTemporaryPath();
         }
 
+        cachePath = (String) params.get(HOST_CACHE_PATH_PARAMETER);
+        if (org.apache.commons.lang.StringUtils.isBlank(cachePath)) {
+            cachePath = getDefaultCachePath();
+        }
+
         params.put("domr.scripts.dir", domrScriptsDir);
 
         _virtRouterResource = new VirtualRoutingResource(this);
@@ -2461,11 +2483,21 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
     }
 
     public String getVolumePath(final Connect conn, final DiskTO volume) throws LibvirtException, URISyntaxException {
+        return getVolumePath(conn, volume, false);
+    }
+
+    public String getVolumePath(final Connect conn, final DiskTO volume, boolean diskOnHostCache) throws LibvirtException, URISyntaxException {
         final DataTO data = volume.getData();
         final DataStoreTO store = data.getDataStore();
 
         if (volume.getType() == Volume.Type.ISO && data.getPath() != null && (store instanceof NfsTO ||
                 store instanceof PrimaryDataStoreTO && data instanceof TemplateObjectTO && !((TemplateObjectTO) data).isDirectDownload())) {
+
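+            // Config drive ISOs kept in the host cache are resolved against the host cache config path instead of the store URL.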
+            if (data.getPath().startsWith(ConfigDrive.CONFIGDRIVEDIR) && diskOnHostCache) {
+                String configDrivePath = getConfigPath() + "/" + data.getPath();
+                return configDrivePath;
+            }
+
             final String isoPath = store.getUrl().split("\\?")[0] + File.separator + data.getPath();
             final int index = isoPath.lastIndexOf("/");
             final String path = isoPath.substring(0, index);
@@ -2503,7 +2535,11 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
             if (volume.getType() == Volume.Type.ISO && data.getPath() != null) {
                 DataStoreTO dataStore = data.getDataStore();
                 String dataStoreUrl = null;
-                if (dataStore instanceof NfsTO) {
+                if (data.getPath().startsWith(ConfigDrive.CONFIGDRIVEDIR) && vmSpec.isConfigDriveOnHostCache() && data instanceof TemplateObjectTO) {
+                    String configDrivePath = getConfigPath() + "/" + data.getPath();
+                    physicalDisk = new KVMPhysicalDisk(configDrivePath, ((TemplateObjectTO) data).getUuid(), null);
+                    physicalDisk.setFormat(PhysicalDiskFormat.FILE);
+                } else if (dataStore instanceof NfsTO) {
                     NfsTO nfsStore = (NfsTO)data.getDataStore();
                     dataStoreUrl = nfsStore.getUrl();
                     physicalDisk = getPhysicalDiskFromNfsStore(dataStoreUrl, data);
@@ -2592,6 +2628,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
                      */
                     disk.defNetworkBasedDisk(physicalDisk.getPath().replace("rbd:", ""), pool.getSourceHost(), pool.getSourcePort(), pool.getAuthUserName(),
                             pool.getUuid(), devId, diskBusType, DiskProtocol.RBD, DiskDef.DiskFmtType.RAW);
+                } else if (pool.getType() == StoragePoolType.PowerFlex) {
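+                    // PowerFlex/ScaleIO volumes are mapped on the host as block devices by the SDC, so define them as block-based disks.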
+                    disk.defBlockBasedDisk(physicalDisk.getPath(), devId, diskBusTypeData);
                 } else if (pool.getType() == StoragePoolType.Gluster) {
                     final String mountpoint = pool.getLocalPath();
                     final String path = physicalDisk.getPath();
@@ -2675,7 +2713,6 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
                 }
             }
         }
-
     }
 
     private KVMPhysicalDisk getPhysicalDiskPrimaryStore(PrimaryDataStoreTO primaryDataStoreTO, DataTO data) {
@@ -2837,6 +2874,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
                 if (attachingPool.getType() == StoragePoolType.RBD) {
                     diskdef.defNetworkBasedDisk(attachingDisk.getPath(), attachingPool.getSourceHost(), attachingPool.getSourcePort(), attachingPool.getAuthUserName(),
                             attachingPool.getUuid(), devId, busT, DiskProtocol.RBD, DiskDef.DiskFmtType.RAW);
+                } else if (attachingPool.getType() == StoragePoolType.PowerFlex) {
+                    diskdef.defBlockBasedDisk(attachingDisk.getPath(), devId, busT);
                 } else if (attachingPool.getType() == StoragePoolType.Gluster) {
                     diskdef.defNetworkBasedDisk(attachingDisk.getPath(), attachingPool.getSourceHost(), attachingPool.getSourcePort(), null,
                             null, devId, busT, DiskProtocol.GLUSTER, DiskDef.DiskFmtType.QCOW2);
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolDef.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolDef.java
index 56519ae..1bdf2db 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolDef.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolDef.java
@@ -18,7 +18,7 @@ package com.cloud.hypervisor.kvm.resource;
 
 public class LibvirtStoragePoolDef {
     public enum PoolType {
-        ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"), DIR("dir"), RBD("rbd"), GLUSTERFS("glusterfs");
+        ISCSI("iscsi"), NETFS("netfs"), LOGICAL("logical"), DIR("dir"), RBD("rbd"), GLUSTERFS("glusterfs"), POWERFLEX("powerflex");
         String _poolType;
 
         PoolType(String poolType) {
@@ -178,7 +178,7 @@ public class LibvirtStoragePoolDef {
             storagePoolBuilder.append("'/>\n");
             storagePoolBuilder.append("</source>\n");
         }
-        if (_poolType != PoolType.RBD) {
+        if (_poolType != PoolType.RBD && _poolType != PoolType.POWERFLEX) {
             storagePoolBuilder.append("<target>\n");
             storagePoolBuilder.append("<path>" + _targetPath + "</path>\n");
             storagePoolBuilder.append("</target>\n");
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolXMLParser.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolXMLParser.java
index 7b70c37..bd7deaa 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolXMLParser.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolXMLParser.java
@@ -55,7 +55,7 @@ public class LibvirtStoragePoolXMLParser {
             String host = getAttrValue("host", "name", source);
             String format = getAttrValue("format", "type", source);
 
-            if (type.equalsIgnoreCase("rbd")) {
+            if (type.equalsIgnoreCase("rbd") || type.equalsIgnoreCase("powerflex")) {
                 int port = 0;
                 String xmlPort = getAttrValue("host", "port", source);
                 if (StringUtils.isNotBlank(xmlPort)) {
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtCheckUrlCommand.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtCheckUrlCommand.java
index efc0090..2618f20 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtCheckUrlCommand.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtCheckUrlCommand.java
@@ -18,13 +18,15 @@
 //
 package com.cloud.hypervisor.kvm.resource.wrapper;
 
+import org.apache.cloudstack.agent.directdownload.CheckUrlAnswer;
+import org.apache.cloudstack.agent.directdownload.CheckUrlCommand;
+import org.apache.log4j.Logger;
+
 import com.cloud.hypervisor.kvm.resource.LibvirtComputingResource;
 import com.cloud.resource.CommandWrapper;
 import com.cloud.resource.ResourceWrapper;
 import com.cloud.utils.UriUtils;
-import org.apache.cloudstack.agent.directdownload.CheckUrlAnswer;
-import org.apache.cloudstack.agent.directdownload.CheckUrlCommand;
-import org.apache.log4j.Logger;
+import com.cloud.utils.storage.QCOW2Utils;
 
 @ResourceWrapper(handles =  CheckUrlCommand.class)
 public class LibvirtCheckUrlCommand extends CommandWrapper<CheckUrlCommand, CheckUrlAnswer, LibvirtComputingResource> {
@@ -39,7 +41,12 @@ public class LibvirtCheckUrlCommand extends CommandWrapper<CheckUrlCommand, Chec
         Long remoteSize = null;
         try {
             UriUtils.checkUrlExistence(url);
-            remoteSize = UriUtils.getRemoteSize(url);
+
+            if ("qcow2".equalsIgnoreCase(cmd.getFormat())) {
+                remoteSize = QCOW2Utils.getVirtualSize(url);
+            } else {
+                remoteSize = UriUtils.getRemoteSize(url);
+            }
         }
         catch (IllegalArgumentException e) {
             s_logger.warn(e.getMessage());
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtGetVolumeStatsCommandWrapper.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtGetVolumeStatsCommandWrapper.java
index 00bdfcd..a2f50ac 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtGetVolumeStatsCommandWrapper.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtGetVolumeStatsCommandWrapper.java
@@ -50,7 +50,12 @@ public final class LibvirtGetVolumeStatsCommandWrapper extends CommandWrapper<Ge
             StoragePoolType poolType = cmd.getPoolType();
             HashMap<String, VolumeStatsEntry> statEntry = new HashMap<String, VolumeStatsEntry>();
             for (String volumeUuid : cmd.getVolumeUuids()) {
-                statEntry.put(volumeUuid, getVolumeStat(libvirtComputingResource, conn, volumeUuid, storeUuid, poolType));
+                VolumeStatsEntry volumeStatsEntry = getVolumeStat(libvirtComputingResource, conn, volumeUuid, storeUuid, poolType);
+                if (volumeStatsEntry == null) {
+                    String msg = "Cannot get disk stats as the pool or disk details are unavailable for volume: " + volumeUuid + " on the storage pool: " + storeUuid;
+                    return new GetVolumeStatsAnswer(cmd, msg, null);
+                }
+                statEntry.put(volumeUuid, volumeStatsEntry);
             }
             return new GetVolumeStatsAnswer(cmd, "", statEntry);
         } catch (LibvirtException | CloudRuntimeException e) {
@@ -58,10 +63,17 @@ public final class LibvirtGetVolumeStatsCommandWrapper extends CommandWrapper<Ge
         }
     }
 
-
     private VolumeStatsEntry getVolumeStat(final LibvirtComputingResource libvirtComputingResource, final Connect conn, final String volumeUuid, final String storeUuid, final StoragePoolType poolType) throws LibvirtException {
         KVMStoragePool sourceKVMPool = libvirtComputingResource.getStoragePoolMgr().getStoragePool(poolType, storeUuid);
+        if (sourceKVMPool == null) {
+            return null;
+        }
+
         KVMPhysicalDisk sourceKVMVolume = sourceKVMPool.getPhysicalDisk(volumeUuid);
+        if (sourceKVMVolume == null) {
+            return null;
+        }
+
         return new VolumeStatsEntry(volumeUuid, sourceKVMVolume.getSize(), sourceKVMVolume.getVirtualSize());
     }
 }
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtHandleConfigDriveCommandWrapper.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtHandleConfigDriveCommandWrapper.java
index 6baae85..6067150 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtHandleConfigDriveCommandWrapper.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtHandleConfigDriveCommandWrapper.java
@@ -24,16 +24,21 @@ import java.nio.file.Path;
 import java.nio.file.Paths;
 
 import org.apache.cloudstack.storage.configdrive.ConfigDriveBuilder;
+import org.apache.cloudstack.storage.to.PrimaryDataStoreTO;
 import org.apache.log4j.Logger;
 
 import com.cloud.agent.api.Answer;
+import com.cloud.agent.api.HandleConfigDriveIsoAnswer;
 import com.cloud.agent.api.HandleConfigDriveIsoCommand;
+import com.cloud.agent.api.to.DataStoreTO;
 import com.cloud.hypervisor.kvm.resource.LibvirtComputingResource;
 import com.cloud.hypervisor.kvm.storage.KVMStoragePool;
 import com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager;
+import com.cloud.network.element.NetworkElement;
 import com.cloud.resource.CommandWrapper;
 import com.cloud.resource.ResourceWrapper;
 import com.cloud.storage.Storage;
+import com.cloud.utils.exception.CloudRuntimeException;
 
 @ResourceWrapper(handles =  HandleConfigDriveIsoCommand.class)
 public final class LibvirtHandleConfigDriveCommandWrapper extends CommandWrapper<HandleConfigDriveIsoCommand, Answer, LibvirtComputingResource> {
@@ -41,38 +46,103 @@ public final class LibvirtHandleConfigDriveCommandWrapper extends CommandWrapper
 
     @Override
     public Answer execute(final HandleConfigDriveIsoCommand command, final LibvirtComputingResource libvirtComputingResource) {
-        final KVMStoragePoolManager storagePoolMgr = libvirtComputingResource.getStoragePoolMgr();
-        final KVMStoragePool pool = storagePoolMgr.getStoragePool(Storage.StoragePoolType.NetworkFilesystem, command.getDestStore().getUuid());
-        if (pool == null) {
-            return new Answer(command, false, "Pool not found, config drive for KVM is only supported for NFS");
-        }
+        String mountPoint = null;
+
+        try {
+            if (command.isCreate()) {
+                LOG.debug("Creating config drive: " + command.getIsoFile());
+
+                NetworkElement.Location location = NetworkElement.Location.PRIMARY;
+                if (command.isHostCachePreferred()) {
+                    LOG.debug("Using the KVM host for config drive");
+                    mountPoint = libvirtComputingResource.getConfigPath();
+                    location = NetworkElement.Location.HOST;
+                } else {
+                    final KVMStoragePoolManager storagePoolMgr = libvirtComputingResource.getStoragePoolMgr();
+                    KVMStoragePool pool = null;
+                    String poolUuid = null;
+                    Storage.StoragePoolType poolType = null;
+                    DataStoreTO dataStoreTO = command.getDestStore();
+                    if (dataStoreTO != null) {
+                        if (dataStoreTO instanceof PrimaryDataStoreTO) {
+                            PrimaryDataStoreTO primaryDataStoreTO = (PrimaryDataStoreTO) dataStoreTO;
+                            poolType = primaryDataStoreTO.getPoolType();
+                        } else {
+                            poolType = Storage.StoragePoolType.NetworkFilesystem;
+                        }
+                        poolUuid = command.getDestStore().getUuid();
+                        pool = storagePoolMgr.getStoragePool(poolType, poolUuid);
+                    }
+
+                    if (pool == null || poolType == null) {
+                        return new HandleConfigDriveIsoAnswer(command, "Unable to create config drive, Pool " + (poolUuid != null ? poolUuid : "") + " not found");
+                    }
+
+                    if (pool.supportsConfigDriveIso()) {
+                        LOG.debug("Using the pool: " + poolUuid + " for config drive");
+                        mountPoint = pool.getLocalPath();
+                    } else if (command.getUseHostCacheOnUnsupportedPool()) {
+                        LOG.debug("Config drive for KVM is not supported for pool type: " + poolType.toString() + ", using the KVM host");
+                        mountPoint = libvirtComputingResource.getConfigPath();
+                        location = NetworkElement.Location.HOST;
+                    } else {
+                        LOG.debug("Config drive for KVM is not supported for pool type: " + poolType.toString());
+                        return new HandleConfigDriveIsoAnswer(command, "Config drive for KVM is not supported for pool type: " + poolType.toString());
+                    }
+                }
+
+                Path isoPath = Paths.get(mountPoint, command.getIsoFile());
+                File isoFile = new File(mountPoint, command.getIsoFile());
+
+                if (command.getIsoData() == null) {
+                    return new HandleConfigDriveIsoAnswer(command, "Invalid config drive ISO data received");
+                }
+                if (isoFile.exists()) {
+                    LOG.debug("An old config drive iso already exists");
+                }
 
-        final String mountPoint = pool.getLocalPath();
-        final Path isoPath = Paths.get(mountPoint, command.getIsoFile());
-        final File isoFile = new File(mountPoint, command.getIsoFile());
-        if (command.isCreate()) {
-            LOG.debug("Creating config drive: " + command.getIsoFile());
-            if (command.getIsoData() == null) {
-                return new Answer(command, false, "Invalid config drive ISO data received");
-            }
-            if (isoFile.exists()) {
-                LOG.debug("An old config drive iso already exists");
-            }
-            try {
                 Files.createDirectories(isoPath.getParent());
                 ConfigDriveBuilder.base64StringToFile(command.getIsoData(), mountPoint, command.getIsoFile());
-            } catch (IOException e) {
-                return new Answer(command, false, "Failed due to exception: " + e.getMessage());
-            }
-        } else {
-            try {
-                Files.deleteIfExists(isoPath);
-            } catch (IOException e) {
-                LOG.warn("Failed to delete config drive: " + isoPath.toAbsolutePath().toString());
-                return new Answer(command, false, "Failed due to exception: " + e.getMessage());
+
+                return new HandleConfigDriveIsoAnswer(command, location);
+            } else {
+                LOG.debug("Deleting config drive: " + command.getIsoFile());
+                Path configDrivePath = null;
+
+                if (command.isHostCachePreferred()) {
+                    // Check for the config drive in the host cache and delete it if it exists
+                    mountPoint = libvirtComputingResource.getConfigPath();
+                    configDrivePath = Paths.get(mountPoint, command.getIsoFile());
+                    Files.deleteIfExists(configDrivePath);
+                } else {
+                    final KVMStoragePoolManager storagePoolMgr = libvirtComputingResource.getStoragePoolMgr();
+                    KVMStoragePool pool = null;
+                    DataStoreTO dataStoreTO = command.getDestStore();
+                    if (dataStoreTO != null) {
+                        if (dataStoreTO instanceof PrimaryDataStoreTO) {
+                            PrimaryDataStoreTO primaryDataStoreTO = (PrimaryDataStoreTO) dataStoreTO;
+                            Storage.StoragePoolType poolType = primaryDataStoreTO.getPoolType();
+                            pool = storagePoolMgr.getStoragePool(poolType, command.getDestStore().getUuid());
+                        } else {
+                            pool = storagePoolMgr.getStoragePool(Storage.StoragePoolType.NetworkFilesystem, command.getDestStore().getUuid());
+                        }
+                    }
+
+                    if (pool != null && pool.supportsConfigDriveIso()) {
+                        mountPoint = pool.getLocalPath();
+                        configDrivePath = Paths.get(mountPoint, command.getIsoFile());
+                        Files.deleteIfExists(configDrivePath);
+                    }
+                }
+
+                return new HandleConfigDriveIsoAnswer(command);
             }
+        } catch (final IOException e) {
+            LOG.debug("Failed to handle config drive due to " + e.getMessage(), e);
+            return new HandleConfigDriveIsoAnswer(command, "Failed due to exception: " + e.getMessage());
+        } catch (final CloudRuntimeException e) {
+            LOG.debug("Failed to handle config drive due to " + e.getMessage(), e);
+            return new HandleConfigDriveIsoAnswer(command, "Failed due to exception: " + e.toString());
         }
-
-        return new Answer(command);
     }
 }
\ No newline at end of file
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtPrepareForMigrationCommandWrapper.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtPrepareForMigrationCommandWrapper.java
index f3f50aa..38cd995 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtPrepareForMigrationCommandWrapper.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtPrepareForMigrationCommandWrapper.java
@@ -19,11 +19,22 @@
 
 package com.cloud.hypervisor.kvm.resource.wrapper;
 
+import java.net.URISyntaxException;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.cloudstack.storage.configdrive.ConfigDrive;
+import org.apache.commons.collections.MapUtils;
+import org.apache.log4j.Logger;
+import org.libvirt.Connect;
+import org.libvirt.LibvirtException;
+
 import com.cloud.agent.api.Answer;
 import com.cloud.agent.api.PrepareForMigrationAnswer;
 import com.cloud.agent.api.PrepareForMigrationCommand;
-import com.cloud.agent.api.to.DpdkTO;
+import com.cloud.agent.api.to.DataTO;
 import com.cloud.agent.api.to.DiskTO;
+import com.cloud.agent.api.to.DpdkTO;
 import com.cloud.agent.api.to.NicTO;
 import com.cloud.agent.api.to.VirtualMachineTO;
 import com.cloud.exception.InternalErrorException;
@@ -36,14 +47,6 @@ import com.cloud.resource.ResourceWrapper;
 import com.cloud.storage.Volume;
 import com.cloud.utils.exception.CloudRuntimeException;
 import com.cloud.utils.script.Script;
-import org.apache.commons.collections.MapUtils;
-import org.apache.log4j.Logger;
-import org.libvirt.Connect;
-import org.libvirt.LibvirtException;
-
-import java.net.URISyntaxException;
-import java.util.HashMap;
-import java.util.Map;
 
 @ResourceWrapper(handles =  PrepareForMigrationCommand.class)
 public final class LibvirtPrepareForMigrationCommandWrapper extends CommandWrapper<PrepareForMigrationCommand, Answer, LibvirtComputingResource> {
@@ -86,7 +89,12 @@ public final class LibvirtPrepareForMigrationCommandWrapper extends CommandWrapp
             final DiskTO[] volumes = vm.getDisks();
             for (final DiskTO volume : volumes) {
                 if (volume.getType() == Volume.Type.ISO) {
-                    libvirtComputingResource.getVolumePath(conn, volume);
+                    final DataTO data = volume.getData();
+                    if (data != null && data.getPath() != null && data.getPath().startsWith(ConfigDrive.CONFIGDRIVEDIR)) {
+                        libvirtComputingResource.getVolumePath(conn, volume, vm.isConfigDriveOnHostCache());
+                    } else {
+                        libvirtComputingResource.getVolumePath(conn, volume);
+                    }
                 }
             }
 
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStorageAdaptor.java
index 0418dbb..7684789 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStorageAdaptor.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStorageAdaptor.java
@@ -330,6 +330,12 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor {
 
     @Override
     public boolean disconnectPhysicalDisk(Map<String, String> volumeToDisconnect) {
+        String poolType = volumeToDisconnect.get(DiskTO.PROTOCOL_TYPE);
+        // Pool types not handled by the iSCSI adaptor
+        if (poolType != null && poolType.equalsIgnoreCase(StoragePoolType.PowerFlex.toString())) {
+            return false;
+        }
+
         String host = volumeToDisconnect.get(DiskTO.STORAGE_HOST);
         String port = volumeToDisconnect.get(DiskTO.STORAGE_PORT);
         String path = volumeToDisconnect.get(DiskTO.IQN);
@@ -447,7 +453,7 @@ public class IscsiAdmStorageAdaptor implements StorageAdaptor {
     }
 
     @Override
-    public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, KVMStoragePool destPool, boolean isIso) {
+    public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
         return null;
     }
 }
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStoragePool.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStoragePool.java
index 865dfab..8e4af76 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStoragePool.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/IscsiAdmStoragePool.java
@@ -19,9 +19,9 @@ package com.cloud.hypervisor.kvm.storage;
 import java.util.List;
 import java.util.Map;
 
-import com.cloud.storage.Storage;
 import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
 
+import com.cloud.storage.Storage;
 import com.cloud.storage.Storage.StoragePoolType;
 
 public class IscsiAdmStoragePool implements KVMStoragePool {
@@ -165,4 +165,9 @@ public class IscsiAdmStoragePool implements KVMStoragePool {
     public String getLocalPath() {
         return _localPath;
     }
+
+    @Override
+    public boolean supportsConfigDriveIso() {
+        return false;
+    }
 }
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePool.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePool.java
index be7a8b0..46d78e5 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePool.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePool.java
@@ -19,9 +19,9 @@ package com.cloud.hypervisor.kvm.storage;
 import java.util.List;
 import java.util.Map;
 
-import com.cloud.storage.Storage;
 import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
 
+import com.cloud.storage.Storage;
 import com.cloud.storage.Storage.StoragePoolType;
 
 public interface KVMStoragePool {
@@ -70,4 +70,6 @@ public interface KVMStoragePool {
     PhysicalDiskFormat getDefaultFormat();
 
     public boolean createFolder(String path);
+
+    public boolean supportsConfigDriveIso();
 }
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java
index 544c47f..e747093 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStoragePoolManager.java
@@ -22,15 +22,15 @@ import java.util.Arrays;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.UUID;
 import java.util.Set;
+import java.util.UUID;
 import java.util.concurrent.ConcurrentHashMap;
 
-import org.apache.log4j.Logger;
-
 import org.apache.cloudstack.storage.to.PrimaryDataStoreTO;
 import org.apache.cloudstack.storage.to.VolumeObjectTO;
 import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
+import org.apache.log4j.Logger;
+import org.reflections.Reflections;
 
 import com.cloud.agent.api.to.DiskTO;
 import com.cloud.agent.api.to.VirtualMachineTO;
@@ -44,8 +44,6 @@ import com.cloud.storage.Volume;
 import com.cloud.utils.exception.CloudRuntimeException;
 import com.cloud.vm.VirtualMachine;
 
-import org.reflections.Reflections;
-
 public class KVMStoragePoolManager {
     private static final Logger s_logger = Logger.getLogger(KVMStoragePoolManager.class);
 
@@ -100,6 +98,7 @@ public class KVMStoragePoolManager {
         // add other storage adaptors here
         // this._storageMapper.put("newadaptor", new NewStorageAdaptor(storagelayer));
         this._storageMapper.put(StoragePoolType.ManagedNFS.toString(), new ManagedNfsStorageAdaptor(storagelayer));
+        this._storageMapper.put(StoragePoolType.PowerFlex.toString(), new ScaleIOStorageAdaptor(storagelayer));
 
         // add any adaptors that wish to register themselves via annotation
         Reflections reflections = new Reflections("com.cloud.hypervisor.kvm.storage");
@@ -253,7 +252,7 @@ public class KVMStoragePoolManager {
             if (info != null) {
                 pool = createStoragePool(info.name, info.host, info.port, info.path, info.userInfo, info.poolType, info.type);
             } else {
-                throw new CloudRuntimeException("Could not fetch storage pool " + uuid + " from libvirt");
+                throw new CloudRuntimeException("Could not fetch storage pool " + uuid + " from libvirt due to " + e.getMessage());
             }
         }
         return pool;
@@ -286,36 +285,38 @@ public class KVMStoragePoolManager {
 
     public KVMPhysicalDisk getPhysicalDisk(StoragePoolType type, String poolUuid, String volName) {
         int cnt = 0;
-        int retries = 10;
+        int retries = 100;
         KVMPhysicalDisk vol = null;
         //harden get volume, try cnt times to get volume, in case volume is created on other host
+        // Poll more frequently and return immediately once the disk is found
         String errMsg = "";
         while (cnt < retries) {
             try {
                 KVMStoragePool pool = getStoragePool(type, poolUuid);
                 vol = pool.getPhysicalDisk(volName);
                 if (vol != null) {
-                    break;
+                    return vol;
                 }
             } catch (Exception e) {
-                s_logger.debug("Failed to find volume:" + volName + " due to" + e.toString() + ", retry:" + cnt);
+                s_logger.debug("Failed to find volume:" + volName + " due to " + e.toString() + ", retry:" + cnt);
                 errMsg = e.toString();
             }
 
             try {
-                Thread.sleep(30000);
+                Thread.sleep(3000);
             } catch (InterruptedException e) {
                 s_logger.debug("[ignored] interupted while trying to get storage pool.");
             }
             cnt++;
         }
 
+        KVMStoragePool pool = getStoragePool(type, poolUuid);
+        vol = pool.getPhysicalDisk(volName);
         if (vol == null) {
             throw new CloudRuntimeException(errMsg);
         } else {
             return vol;
         }
-
     }
 
     public KVMStoragePool createStoragePool(String name, String host, int port, String path, String userInfo, StoragePoolType type) {
@@ -377,6 +378,10 @@ public class KVMStoragePoolManager {
             return adaptor.createDiskFromTemplate(template, name,
                     PhysicalDiskFormat.DIR, provisioningType,
                     size, destPool, timeout);
+        } else if (destPool.getType() == StoragePoolType.PowerFlex) {
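+            // PowerFlex volumes are raw block devices, so create the disk from the template in RAW format.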
+            return adaptor.createDiskFromTemplate(template, name,
+                    PhysicalDiskFormat.RAW, provisioningType,
+                    size, destPool, timeout);
         } else {
             return adaptor.createDiskFromTemplate(template, name,
                     PhysicalDiskFormat.QCOW2, provisioningType,
@@ -405,9 +410,9 @@ public class KVMStoragePoolManager {
         return adaptor.createDiskFromTemplateBacking(template, name, format, size, destPool, timeout);
     }
 
-    public KVMPhysicalDisk createPhysicalDiskFromDirectDownloadTemplate(String templateFilePath, KVMStoragePool destPool, boolean isIso) {
+    public KVMPhysicalDisk createPhysicalDiskFromDirectDownloadTemplate(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
         StorageAdaptor adaptor = getStorageAdaptor(destPool.getType());
-        return adaptor.createTemplateFromDirectDownloadFile(templateFilePath, destPool, isIso);
+        return adaptor.createTemplateFromDirectDownloadFile(templateFilePath, destTemplatePath, destPool, format, timeout);
     }
 
 }
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
index cc47c55..d1d0f0c 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
@@ -37,7 +37,6 @@ import java.util.UUID;
 
 import javax.naming.ConfigurationException;
 
-import com.cloud.utils.Pair;
 import org.apache.cloudstack.agent.directdownload.DirectDownloadAnswer;
 import org.apache.cloudstack.agent.directdownload.DirectDownloadCommand;
 import org.apache.cloudstack.agent.directdownload.HttpDirectDownloadCommand;
@@ -117,6 +116,8 @@ import com.cloud.storage.template.Processor.FormatInfo;
 import com.cloud.storage.template.QCOW2Processor;
 import com.cloud.storage.template.TemplateLocation;
 import com.cloud.utils.NumbersUtil;
+import com.cloud.utils.Pair;
+import com.cloud.utils.UriUtils;
 import com.cloud.utils.exception.CloudRuntimeException;
 import com.cloud.utils.script.Script;
 import com.cloud.utils.storage.S3.S3Utils;
@@ -255,11 +256,15 @@ public class KVMStorageProcessor implements StorageProcessor {
 
                 String path = details != null ? details.get("managedStoreTarget") : null;
 
-                storagePoolMgr.connectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path, details);
+                if (!storagePoolMgr.connectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path, details)) {
+                    s_logger.warn("Failed to connect physical disk at path: " + path + ", in storage pool id: " + primaryStore.getUuid());
+                }
 
                 primaryVol = storagePoolMgr.copyPhysicalDisk(tmplVol, path != null ? path : destTempl.getUuid(), primaryPool, cmd.getWaitInMillSeconds());
 
-                storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path);
+                if (!storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path)) {
+                    s_logger.warn("Failed to disconnect physical disk at path: " + path + ", in storage pool id: " + primaryStore.getUuid());
+                }
             } else {
                 primaryVol = storagePoolMgr.copyPhysicalDisk(tmplVol, UUID.randomUUID().toString(), primaryPool, cmd.getWaitInMillSeconds());
             }
@@ -273,7 +278,7 @@ public class KVMStorageProcessor implements StorageProcessor {
                 final TemplateObjectTO newTemplate = new TemplateObjectTO();
                 newTemplate.setPath(primaryVol.getName());
                 newTemplate.setSize(primaryVol.getSize());
-                if (primaryPool.getType() == StoragePoolType.RBD) {
+                if (primaryPool.getType() == StoragePoolType.RBD || primaryPool.getType() == StoragePoolType.PowerFlex) {
                     newTemplate.setFormat(ImageFormat.RAW);
                 } else {
                     newTemplate.setFormat(ImageFormat.QCOW2);
@@ -381,6 +386,27 @@ public class KVMStorageProcessor implements StorageProcessor {
             if (primaryPool.getType() == StoragePoolType.CLVM) {
                 templatePath = ((NfsTO)imageStore).getUrl() + File.separator + templatePath;
                 vol = templateToPrimaryDownload(templatePath, primaryPool, volume.getUuid(), volume.getSize(), cmd.getWaitInMillSeconds());
+            } else if (primaryPool.getType() == StoragePoolType.PowerFlex) {
+                Map<String, String> details = primaryStore.getDetails();
+                String path = details != null ? details.get("managedStoreTarget") : null;
+
+                if (!storagePoolMgr.connectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), templatePath, details)) {
+                    s_logger.warn("Failed to connect base template volume at path: " + templatePath + ", in storage pool id: " + primaryStore.getUuid());
+                }
+
+                BaseVol = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), templatePath);
+                if (BaseVol == null) {
+                    s_logger.debug("Failed to get the physical disk for base template volume at path: " + templatePath);
+                    throw new CloudRuntimeException("Failed to get the physical disk for base template volume at path: " + templatePath);
+                }
+
+                if (!storagePoolMgr.connectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path, details)) {
+                    s_logger.warn("Failed to connect new volume at path: " + path + ", in storage pool id: " + primaryStore.getUuid());
+                }
+
+                vol = storagePoolMgr.copyPhysicalDisk(BaseVol, path != null ? path : volume.getUuid(), primaryPool, cmd.getWaitInMillSeconds());
+
+                storagePoolMgr.disconnectPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), path);
             } else {
                 if (templatePath.contains("/mnt")) {
                     //upgrade issue, if the path contains path, need to extract the volume uuid from path
@@ -1344,6 +1370,9 @@ public class KVMStorageProcessor implements StorageProcessor {
         } catch (final InternalErrorException e) {
             s_logger.debug("Failed to attach volume: " + vol.getPath() + ", due to ", e);
             return new AttachAnswer(e.toString());
+        } catch (final CloudRuntimeException e) {
+            s_logger.debug("Failed to attach volume: " + vol.getPath() + ", due to ", e);
+            return new AttachAnswer(e.toString());
         }
     }
 
@@ -1375,6 +1404,9 @@ public class KVMStorageProcessor implements StorageProcessor {
         } catch (final InternalErrorException e) {
             s_logger.debug("Failed to detach volume: " + vol.getPath() + ", due to ", e);
             return new DettachAnswer(e.toString());
+        } catch (final CloudRuntimeException e) {
+            s_logger.debug("Failed to detach volume: " + vol.getPath() + ", due to ", e);
+            return new DettachAnswer(e.toString());
         }
     }
 
@@ -1728,6 +1760,7 @@ public class KVMStorageProcessor implements StorageProcessor {
         final PrimaryDataStoreTO pool = cmd.getDestPool();
         DirectTemplateDownloader downloader;
         KVMPhysicalDisk template;
+        KVMStoragePool destPool = null;
 
         try {
             s_logger.debug("Verifying temporary location for downloading the template exists on the host");
@@ -1739,14 +1772,20 @@ public class KVMStorageProcessor implements StorageProcessor {
                 return new DirectDownloadAnswer(false, msg, true);
             }
 
-            s_logger.debug("Checking for free space on the host for downloading the template");
-            if (!isEnoughSpaceForDownloadTemplateOnTemporaryLocation(cmd.getTemplateSize())) {
+            Long templateSize = null;
+            if (!org.apache.commons.lang.StringUtils.isBlank(cmd.getUrl())) {
+                String url = cmd.getUrl();
+                templateSize = UriUtils.getRemoteSize(url);
+            }
+
+            s_logger.debug("Checking for free space on the host for downloading the template with physical size: " + templateSize + " and virtual size: " + cmd.getTemplateSize());
+            if (!isEnoughSpaceForDownloadTemplateOnTemporaryLocation(templateSize)) {
                 String msg = "Not enough space on the defined temporary location to download the template " + cmd.getTemplateId();
                 s_logger.error(msg);
                 return new DirectDownloadAnswer(false, msg, true);
             }
 
-            KVMStoragePool destPool = storagePoolMgr.getStoragePool(pool.getPoolType(), pool.getUuid());
+            destPool = storagePoolMgr.getStoragePool(pool.getPoolType(), pool.getUuid());
             downloader = getDirectTemplateDownloaderFromCommand(cmd, destPool, temporaryDownloadPath);
             s_logger.debug("Trying to download template");
             Pair<Boolean, String> result = downloader.downloadTemplate();
@@ -1759,7 +1798,19 @@ public class KVMStorageProcessor implements StorageProcessor {
                 s_logger.warn("Couldn't validate template checksum");
                 return new DirectDownloadAnswer(false, "Checksum validation failed", false);
             }
-            template = storagePoolMgr.createPhysicalDiskFromDirectDownloadTemplate(tempFilePath, destPool, cmd.isIso());
+
+            final TemplateObjectTO destTemplate = cmd.getDestData();
+            String destTemplatePath = (destTemplate != null) ? destTemplate.getPath() : null;
+
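+            // Connect the destination template path on the host before creating the template from the downloaded file, then disconnect it afterwards.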
+            if (!storagePoolMgr.connectPhysicalDisk(pool.getPoolType(), pool.getUuid(), destTemplatePath, null)) {
+                s_logger.warn("Unable to connect physical disk at path: " + destTemplatePath + ", in storage pool id: " + pool.getUuid());
+            }
+
+            template = storagePoolMgr.createPhysicalDiskFromDirectDownloadTemplate(tempFilePath, destTemplatePath, destPool, cmd.getFormat(), cmd.getWaitInMillSeconds());
+
+            if (!storagePoolMgr.disconnectPhysicalDisk(pool.getPoolType(), pool.getUuid(), destTemplatePath)) {
+                s_logger.warn("Unable to disconnect physical disk at path: " + destTemplatePath + ", in storage pool id: " + pool.getUuid());
+            }
         } catch (CloudRuntimeException e) {
             s_logger.warn("Error downloading template " + cmd.getTemplateId() + " due to: " + e.getMessage());
             return new DirectDownloadAnswer(false, "Unable to download template: " + e.getMessage(), true);
@@ -1780,23 +1831,56 @@ public class KVMStorageProcessor implements StorageProcessor {
         final ImageFormat destFormat = destVol.getFormat();
         final DataStoreTO srcStore = srcData.getDataStore();
         final DataStoreTO destStore = destData.getDataStore();
-        final PrimaryDataStoreTO primaryStore = (PrimaryDataStoreTO)srcStore;
-        final PrimaryDataStoreTO primaryStoreDest = (PrimaryDataStoreTO)destStore;
+        final PrimaryDataStoreTO srcPrimaryStore = (PrimaryDataStoreTO)srcStore;
+        final PrimaryDataStoreTO destPrimaryStore = (PrimaryDataStoreTO)destStore;
         final String srcVolumePath = srcData.getPath();
         final String destVolumePath = destData.getPath();
         KVMStoragePool destPool = null;
 
         try {
-            final String volumeName = UUID.randomUUID().toString();
+            s_logger.debug("Copying src volume (id: " + srcVol.getId() + ", format: " + srcFormat + ", path: " + srcVolumePath + ", primary storage: [id: " + srcPrimaryStore.getId() + ", type: "  + srcPrimaryStore.getPoolType() + "]) to dest volume (id: " +
+                    destVol.getId() + ", format: " + destFormat + ", path: " + destVolumePath + ", primary storage: [id: " + destPrimaryStore.getId() + ", type: "  + destPrimaryStore.getPoolType() + "]).");
+
+            if (srcPrimaryStore.isManaged()) {
+                if (!storagePoolMgr.connectPhysicalDisk(srcPrimaryStore.getPoolType(), srcPrimaryStore.getUuid(), srcVolumePath, srcPrimaryStore.getDetails())) {
+                    s_logger.warn("Failed to connect src volume at path: " + srcVolumePath + ", in storage pool id: " + srcPrimaryStore.getUuid());
+                }
+            }
+
+            final KVMPhysicalDisk volume = storagePoolMgr.getPhysicalDisk(srcPrimaryStore.getPoolType(), srcPrimaryStore.getUuid(), srcVolumePath);
+            if (volume == null) {
+                s_logger.debug("Failed to get physical disk for volume: " + srcVolumePath);
+                throw new CloudRuntimeException("Failed to get physical disk for volume at path: " + srcVolumePath);
+            }
 
-            final String destVolumeName = volumeName + "." + destFormat.getFileExtension();
-            final KVMPhysicalDisk volume = storagePoolMgr.getPhysicalDisk(primaryStore.getPoolType(), primaryStore.getUuid(), srcVolumePath);
             volume.setFormat(PhysicalDiskFormat.valueOf(srcFormat.toString()));
 
-            destPool = storagePoolMgr.getStoragePool(primaryStoreDest.getPoolType(), primaryStoreDest.getUuid());
+            String destVolumeName = null;
+            if (destPrimaryStore.isManaged()) {
+                if (!storagePoolMgr.connectPhysicalDisk(destPrimaryStore.getPoolType(), destPrimaryStore.getUuid(), destVolumePath, destPrimaryStore.getDetails())) {
+                    s_logger.warn("Failed to connect dest volume at path: " + destVolumePath + ", in storage pool id: " + destPrimaryStore.getUuid());
+                }
+                String managedStoreTarget = destPrimaryStore.getDetails() != null ? destPrimaryStore.getDetails().get("managedStoreTarget") : null;
+                destVolumeName = managedStoreTarget != null ? managedStoreTarget : destVolumePath;
+            } else {
+                final String volumeName = UUID.randomUUID().toString();
+                destVolumeName = volumeName + "." + destFormat.getFileExtension();
+            }
+
+            destPool = storagePoolMgr.getStoragePool(destPrimaryStore.getPoolType(), destPrimaryStore.getUuid());
             storagePoolMgr.copyPhysicalDisk(volume, destVolumeName, destPool, cmd.getWaitInMillSeconds());
+
+            if (srcPrimaryStore.isManaged()) {
+                storagePoolMgr.disconnectPhysicalDisk(srcPrimaryStore.getPoolType(), srcPrimaryStore.getUuid(), srcVolumePath);
+            }
+
+            if (destPrimaryStore.isManaged()) {
+                storagePoolMgr.disconnectPhysicalDisk(destPrimaryStore.getPoolType(), destPrimaryStore.getUuid(), destVolumePath);
+            }
+
             final VolumeObjectTO newVol = new VolumeObjectTO();
-            newVol.setPath(destVolumePath + File.separator + destVolumeName);
+            String path = destPrimaryStore.isManaged() ? destVolumeName : destVolumePath + File.separator + destVolumeName;
+            newVol.setPath(path);
             newVol.setFormat(destFormat);
             return new CopyCmdAnswer(newVol);
         } catch (final CloudRuntimeException e) {
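
For reference, when either end of the copy is a managed pool (such as PowerFlex), the flow above reduces to: connect the volumes on the host, resolve the source disk, copy through the storage pool manager, then disconnect. A minimal sketch of that managed-to-managed sequence, with hypothetical pool UUIDs, volume paths and timeout (none of these values come from the commit):

    import java.util.Map;

    import com.cloud.hypervisor.kvm.storage.KVMPhysicalDisk;
    import com.cloud.hypervisor.kvm.storage.KVMStoragePool;
    import com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager;
    import com.cloud.storage.Storage.StoragePoolType;

    public class ManagedCopySketch {
        // Copies srcPath from a managed source pool to destPath on a managed destination pool.
        public static void copy(KVMStoragePoolManager mgr, String srcPoolUuid, String srcPath,
                                String destPoolUuid, String destPath, Map<String, String> details) {
            // 1. Attach both volumes to this host (for PowerFlex this maps the SDC device).
            mgr.connectPhysicalDisk(StoragePoolType.PowerFlex, srcPoolUuid, srcPath, details);
            mgr.connectPhysicalDisk(StoragePoolType.PowerFlex, destPoolUuid, destPath, details);
            try {
                // 2. Resolve the source device and copy it into the destination pool (qemu-img convert underneath).
                KVMPhysicalDisk src = mgr.getPhysicalDisk(StoragePoolType.PowerFlex, srcPoolUuid, srcPath);
                KVMStoragePool dest = mgr.getStoragePool(StoragePoolType.PowerFlex, destPoolUuid);
                mgr.copyPhysicalDisk(src, destPath, dest, 3600 * 1000 /* hypothetical wait in ms */);
            } finally {
                // 3. Detach the volumes again so other hosts can map them.
                mgr.disconnectPhysicalDisk(StoragePoolType.PowerFlex, srcPoolUuid, srcPath);
                mgr.disconnectPhysicalDisk(StoragePoolType.PowerFlex, destPoolUuid, destPath);
            }
        }
    }
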
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
index f9c627b..630b988 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
@@ -24,6 +24,10 @@ import java.util.List;
 import java.util.Map;
 import java.util.UUID;
 
+import org.apache.cloudstack.utils.qemu.QemuImg;
+import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
+import org.apache.cloudstack.utils.qemu.QemuImgException;
+import org.apache.cloudstack.utils.qemu.QemuImgFile;
 import org.apache.commons.codec.binary.Base64;
 import org.apache.log4j.Logger;
 import org.libvirt.Connect;
@@ -42,12 +46,6 @@ import com.ceph.rbd.RbdException;
 import com.ceph.rbd.RbdImage;
 import com.ceph.rbd.jna.RbdImageInfo;
 import com.ceph.rbd.jna.RbdSnapInfo;
-
-import org.apache.cloudstack.utils.qemu.QemuImg;
-import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
-import org.apache.cloudstack.utils.qemu.QemuImgException;
-import org.apache.cloudstack.utils.qemu.QemuImgFile;
-
 import com.cloud.exception.InternalErrorException;
 import com.cloud.hypervisor.kvm.resource.LibvirtConnection;
 import com.cloud.hypervisor.kvm.resource.LibvirtSecretDef;
@@ -160,20 +158,20 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
     }
 
     @Override
-    public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, KVMStoragePool destPool, boolean isIso) {
+    public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
         File sourceFile = new File(templateFilePath);
         if (!sourceFile.exists()) {
             throw new CloudRuntimeException("Direct download template file " + sourceFile + " does not exist on this host");
         }
         String templateUuid = UUID.randomUUID().toString();
-        if (isIso) {
+        if (Storage.ImageFormat.ISO.equals(format)) {
             templateUuid += ".iso";
         }
         String destinationFile = destPool.getLocalPath() + File.separator + templateUuid;
 
         if (destPool.getType() == StoragePoolType.NetworkFilesystem || destPool.getType() == StoragePoolType.Filesystem
             || destPool.getType() == StoragePoolType.SharedMountPoint) {
-            if (!isIso && isTemplateExtractable(templateFilePath)) {
+            if (!Storage.ImageFormat.ISO.equals(format) && isTemplateExtractable(templateFilePath)) {
                 extractDownloadedTemplate(templateFilePath, destPool, destinationFile);
             } else {
                 Script.runSimpleBashScript("mv " + templateFilePath + " " + destinationFile);
@@ -451,11 +449,13 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
                 type = StoragePoolType.CLVM;
             } else if (spd.getPoolType() == LibvirtStoragePoolDef.PoolType.GLUSTERFS) {
                 type = StoragePoolType.Gluster;
+            } else if (spd.getPoolType() == LibvirtStoragePoolDef.PoolType.POWERFLEX) {
+                type = StoragePoolType.PowerFlex;
             }
 
             LibvirtStoragePool pool = new LibvirtStoragePool(uuid, storage.getName(), type, this, storage);
 
-            if (pool.getType() != StoragePoolType.RBD)
+            if (pool.getType() != StoragePoolType.RBD && pool.getType() != StoragePoolType.PowerFlex)
                 pool.setLocalPath(spd.getTargetPath());
             else
                 pool.setLocalPath("");
@@ -545,7 +545,6 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
             s_logger.debug("Failed to get physical disk:", e);
             throw new CloudRuntimeException(e.toString());
         }
-
     }
 
     @Override
@@ -1022,7 +1021,6 @@ public class LibvirtStorageAdaptor implements StorageAdaptor {
             }
         }
 
-
         return disk;
     }
 
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStoragePool.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStoragePool.java
index 1b554f7..b2e8dec 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStoragePool.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/LibvirtStoragePool.java
@@ -45,6 +45,7 @@ public class LibvirtStoragePool implements KVMStoragePool {
     protected String authSecret;
     protected String sourceHost;
     protected int sourcePort;
+
     protected String sourceDir;
 
     public LibvirtStoragePool(String uuid, String name, StoragePoolType type, StorageAdaptor adaptor, StoragePool pool) {
@@ -56,7 +57,6 @@ public class LibvirtStoragePool implements KVMStoragePool {
         this.used = 0;
         this.available = 0;
         this._pool = pool;
-
     }
 
     public void setCapacity(long capacity) {
@@ -101,7 +101,7 @@ public class LibvirtStoragePool implements KVMStoragePool {
 
     @Override
     public PhysicalDiskFormat getDefaultFormat() {
-        if (getStoragePoolType() == StoragePoolType.CLVM || getStoragePoolType() == StoragePoolType.RBD) {
+        if (getStoragePoolType() == StoragePoolType.CLVM || getStoragePoolType() == StoragePoolType.RBD || getStoragePoolType() == StoragePoolType.PowerFlex) {
             return PhysicalDiskFormat.RAW;
         } else {
             return PhysicalDiskFormat.QCOW2;
@@ -271,4 +271,12 @@ public class LibvirtStoragePool implements KVMStoragePool {
     public boolean createFolder(String path) {
         return this._storageAdaptor.createFolder(this.uuid, path);
     }
+
+    @Override
+    public boolean supportsConfigDriveIso() {
+        if (this.type == StoragePoolType.NetworkFilesystem) {
+            return true;
+        }
+        return false;
+    }
 }
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ManagedNfsStorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ManagedNfsStorageAdaptor.java
index 1ea4f62..6db2f82 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ManagedNfsStorageAdaptor.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ManagedNfsStorageAdaptor.java
@@ -35,6 +35,7 @@ import com.cloud.hypervisor.kvm.resource.LibvirtStoragePoolDef;
 import com.cloud.hypervisor.kvm.resource.LibvirtStoragePoolDef.PoolType;
 import com.cloud.hypervisor.kvm.resource.LibvirtStorageVolumeDef;
 import com.cloud.hypervisor.kvm.resource.LibvirtStorageVolumeXMLParser;
+import com.cloud.storage.Storage;
 import com.cloud.storage.Storage.ImageFormat;
 import com.cloud.storage.Storage.ProvisioningType;
 import com.cloud.storage.Storage.StoragePoolType;
@@ -319,7 +320,7 @@ public class ManagedNfsStorageAdaptor implements StorageAdaptor {
     }
 
     @Override
-    public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, KVMStoragePool destPool, boolean isIso) {
+    public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
         return null;
     }
 
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptor.java
new file mode 100644
index 0000000..62eb544
--- /dev/null
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStorageAdaptor.java
@@ -0,0 +1,393 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package com.cloud.hypervisor.kvm.storage;
+
+import java.io.File;
+import java.io.FileFilter;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+
+import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
+import org.apache.cloudstack.utils.qemu.QemuImg;
+import org.apache.cloudstack.utils.qemu.QemuImgException;
+import org.apache.cloudstack.utils.qemu.QemuImgFile;
+import org.apache.commons.io.filefilter.WildcardFileFilter;
+import org.apache.log4j.Logger;
+
+import com.cloud.storage.Storage;
+import com.cloud.storage.StorageLayer;
+import com.cloud.storage.StorageManager;
+import com.cloud.utils.exception.CloudRuntimeException;
+import com.cloud.utils.script.OutputInterpreter;
+import com.cloud.utils.script.Script;
+import com.google.common.base.Strings;
+
+@StorageAdaptorInfo(storagePoolType= Storage.StoragePoolType.PowerFlex)
+public class ScaleIOStorageAdaptor implements StorageAdaptor {
+    private static final Logger LOGGER = Logger.getLogger(ScaleIOStorageAdaptor.class);
+    private static final Map<String, KVMStoragePool> MapStorageUuidToStoragePool = new HashMap<>();
+    private static final int DEFAULT_DISK_WAIT_TIME_IN_SECS = 60;
+    private StorageLayer storageLayer;
+
+    public ScaleIOStorageAdaptor(StorageLayer storagelayer) {
+        storageLayer = storagelayer;
+    }
+
+    @Override
+    public KVMStoragePool getStoragePool(String uuid) {
+        KVMStoragePool pool = MapStorageUuidToStoragePool.get(uuid);
+        if (pool == null) {
+            LOGGER.error("Pool: " + uuid + " not found, probably sdc not connected on agent start");
+            throw new CloudRuntimeException("Pool: " + uuid + " not found, reconnect sdc and restart agent if sdc not connected on agent start");
+        }
+
+        return pool;
+    }
+
+    @Override
+    public KVMStoragePool getStoragePool(String uuid, boolean refreshInfo) {
+        return getStoragePool(uuid);
+    }
+
+    @Override
+    public KVMPhysicalDisk getPhysicalDisk(String volumePath, KVMStoragePool pool) {
+        if (Strings.isNullOrEmpty(volumePath) || pool == null) {
+            LOGGER.error("Unable to get physical disk, volume path or pool not specified");
+            return null;
+        }
+
+        String volumeId = ScaleIOUtil.getVolumePath(volumePath);
+
+        try {
+            String diskFilePath = null;
+            String systemId = ScaleIOUtil.getSystemIdForVolume(volumeId);
+            if (!Strings.isNullOrEmpty(systemId) && systemId.length() == ScaleIOUtil.IDENTIFIER_LENGTH) {
+                // Disk path format: /dev/disk/by-id/emc-vol-<SystemID>-<VolumeID>
+                final String diskFileName = ScaleIOUtil.DISK_NAME_PREFIX + systemId + "-" + volumeId;
+                diskFilePath = ScaleIOUtil.DISK_PATH + File.separator + diskFileName;
+                final File diskFile = new File(diskFilePath);
+                if (!diskFile.exists()) {
+                    LOGGER.debug("Physical disk file: " + diskFilePath + " doesn't exists on the storage pool: " + pool.getUuid());
+                    return null;
+                }
+            } else {
+                LOGGER.debug("Try with wildcard filter to get the physical disk: " + volumeId + " on the storage pool: " + pool.getUuid());
+                final File dir = new File(ScaleIOUtil.DISK_PATH);
+                final FileFilter fileFilter = new WildcardFileFilter(ScaleIOUtil.DISK_NAME_PREFIX_FILTER + volumeId);
+                final File[] files = dir.listFiles(fileFilter);
+                if (files != null && files.length == 1) {
+                    diskFilePath = files[0].getAbsolutePath();
+                } else {
+                    LOGGER.debug("Unable to find the physical disk: " + volumeId + " on the storage pool: " + pool.getUuid());
+                    return null;
+                }
+            }
+
+            KVMPhysicalDisk disk = new KVMPhysicalDisk(diskFilePath, volumePath, pool);
+            disk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
+
+            long diskSize = getPhysicalDiskSize(diskFilePath);
+            disk.setSize(diskSize);
+            disk.setVirtualSize(diskSize);
+
+            return disk;
+        } catch (Exception e) {
+            LOGGER.error("Failed to get the physical disk: " + volumePath + " on the storage pool: " + pool.getUuid() + " due to " + e.getMessage());
+            throw new CloudRuntimeException("Failed to get the physical disk: " + volumePath + " on the storage pool: " + pool.getUuid());
+        }
+    }
+
+    @Override
+    public KVMStoragePool createStoragePool(String uuid, String host, int port, String path, String userInfo, Storage.StoragePoolType type) {
+        ScaleIOStoragePool storagePool = new ScaleIOStoragePool(uuid, host, port, path, type, this);
+        MapStorageUuidToStoragePool.put(uuid, storagePool);
+        return storagePool;
+    }
+
+    @Override
+    public boolean deleteStoragePool(String uuid) {
+        return MapStorageUuidToStoragePool.remove(uuid) != null;
+    }
+
+    @Override
+    public KVMPhysicalDisk createPhysicalDisk(String name, KVMStoragePool pool, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) {
+        return null;
+    }
+
+    @Override
+    public boolean connectPhysicalDisk(String volumePath, KVMStoragePool pool, Map<String, String> details) {
+        if (Strings.isNullOrEmpty(volumePath) || pool == null) {
+            LOGGER.error("Unable to connect physical disk due to insufficient data");
+            throw new CloudRuntimeException("Unable to connect physical disk due to insufficient data");
+        }
+
+        volumePath = ScaleIOUtil.getVolumePath(volumePath);
+
+        int waitTimeInSec = DEFAULT_DISK_WAIT_TIME_IN_SECS;
+        if (details != null && details.containsKey(StorageManager.STORAGE_POOL_DISK_WAIT.toString())) {
+            String waitTime = details.get(StorageManager.STORAGE_POOL_DISK_WAIT.toString());
+            if (!Strings.isNullOrEmpty(waitTime)) {
+                waitTimeInSec = Integer.parseInt(waitTime);
+            }
+        }
+        return waitForDiskToBecomeAvailable(volumePath, pool, waitTimeInSec);
+    }
+
+    private boolean waitForDiskToBecomeAvailable(String volumePath, KVMStoragePool pool, int waitTimeInSec) {
+        LOGGER.debug("Waiting for the volume with id: " + volumePath + " of the storage pool: " + pool.getUuid() + " to become available for " + waitTimeInSec + " secs");
+        int timeBetweenTries = 1000; // Try more frequently (every sec) and return early if disk is found
+        KVMPhysicalDisk physicalDisk = null;
+
+        // Rescan before checking for the physical disk
+        ScaleIOUtil.rescanForNewVolumes();
+
+        while (waitTimeInSec > 0) {
+            physicalDisk = getPhysicalDisk(volumePath, pool);
+            if (physicalDisk != null && physicalDisk.getSize() > 0) {
+                LOGGER.debug("Found the volume with id: " + volumePath + " of the storage pool: " + pool.getUuid());
+                return true;
+            }
+
+            waitTimeInSec--;
+
+            try {
+                Thread.sleep(timeBetweenTries);
+            } catch (Exception ex) {
+                // don't do anything
+            }
+        }
+
+        physicalDisk = getPhysicalDisk(volumePath, pool);
+        if (physicalDisk != null && physicalDisk.getSize() > 0) {
+            LOGGER.debug("Found the volume using id: " + volumePath + " of the storage pool: " + pool.getUuid());
+            return true;
+        }
+
+        LOGGER.debug("Unable to find the volume with id: " + volumePath + " of the storage pool: " + pool.getUuid());
+        return false;
+    }
+
+    private long getPhysicalDiskSize(String diskPath) {
+        if (Strings.isNullOrEmpty(diskPath)) {
+            return 0;
+        }
+
+        Script diskCmd = new Script("blockdev", LOGGER);
+        diskCmd.add("--getsize64", diskPath);
+
+        OutputInterpreter.OneLineParser parser = new OutputInterpreter.OneLineParser();
+        String result = diskCmd.execute(parser);
+
+        if (result != null) {
+            LOGGER.warn("Unable to get the disk size at path: " + diskPath);
+            return 0;
+        } else {
+            LOGGER.info("Able to retrieve the disk size at path:" + diskPath);
+        }
+
+        return Long.parseLong(parser.getLine());
+    }
+
+    @Override
+    public boolean disconnectPhysicalDisk(String volumePath, KVMStoragePool pool) {
+        return true;
+    }
+
+    @Override
+    public boolean disconnectPhysicalDisk(Map<String, String> volumeToDisconnect) {
+        return true;
+    }
+
+    @Override
+    public boolean disconnectPhysicalDiskByPath(String localPath) {
+        return true;
+    }
+
+    @Override
+    public boolean deletePhysicalDisk(String uuid, KVMStoragePool pool, Storage.ImageFormat format) {
+        return true;
+    }
+
+    @Override
+    public KVMPhysicalDisk createDiskFromTemplate(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size, KVMStoragePool destPool, int timeout) {
+        return null;
+    }
+
+    @Override
+    public KVMPhysicalDisk createTemplateFromDisk(KVMPhysicalDisk disk, String name, QemuImg.PhysicalDiskFormat format, long size, KVMStoragePool destPool) {
+        return null;
+    }
+
+    @Override
+    public List<KVMPhysicalDisk> listPhysicalDisks(String storagePoolUuid, KVMStoragePool pool) {
+        return null;
+    }
+
+    @Override
+    public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name, KVMStoragePool destPool, int timeout) {
+        if (Strings.isNullOrEmpty(name) || disk == null || destPool == null) {
+            LOGGER.error("Unable to copy physical disk due to insufficient data");
+            throw new CloudRuntimeException("Unable to copy physical disk due to insufficient data");
+        }
+
+        LOGGER.debug("Copy physical disk with size: " + disk.getSize() + ", virtualsize: " + disk.getVirtualSize()+ ", format: " + disk.getFormat());
+
+        KVMPhysicalDisk destDisk = destPool.getPhysicalDisk(name);
+        if (destDisk == null) {
+            LOGGER.error("Failed to find the disk: " + name + " of the storage pool: " + destPool.getUuid());
+            throw new CloudRuntimeException("Failed to find the disk: " + name + " of the storage pool: " + destPool.getUuid());
+        }
+
+        destDisk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
+        destDisk.setVirtualSize(disk.getVirtualSize());
+        destDisk.setSize(disk.getSize());
+
+        QemuImg qemu = new QemuImg(timeout);
+        QemuImgFile srcFile = null;
+        QemuImgFile destFile = null;
+
+        try {
+            srcFile = new QemuImgFile(disk.getPath(), disk.getFormat());
+            destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat());
+
+            LOGGER.debug("Starting copy from source image " + srcFile.getFileName() + " to PowerFlex volume: " + destDisk.getPath());
+            qemu.convert(srcFile, destFile);
+            LOGGER.debug("Succesfully converted source image " + srcFile.getFileName() + " to PowerFlex volume: " + destDisk.getPath());
+        }  catch (QemuImgException e) {
+            LOGGER.error("Failed to convert from " + srcFile.getFileName() + " to " + destFile.getFileName() + " the error was: " + e.getMessage());
+            destDisk = null;
+        }
+
+        return destDisk;
+    }
+
+    @Override
+    public KVMPhysicalDisk createDiskFromSnapshot(KVMPhysicalDisk snapshot, String snapshotName, String name, KVMStoragePool destPool, int timeout) {
+        return null;
+    }
+
+    @Override
+    public boolean refresh(KVMStoragePool pool) {
+        return true;
+    }
+
+    @Override
+    public boolean deleteStoragePool(KVMStoragePool pool) {
+        return deleteStoragePool(pool.getUuid());
+    }
+
+    @Override
+    public boolean createFolder(String uuid, String path) {
+        return true;
+    }
+
+    @Override
+    public KVMPhysicalDisk createDiskFromTemplateBacking(KVMPhysicalDisk template, String name, QemuImg.PhysicalDiskFormat format, long size, KVMStoragePool destPool, int timeout) {
+        return null;
+    }
+
+    @Override
+    public KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout) {
+        if (Strings.isNullOrEmpty(templateFilePath) || Strings.isNullOrEmpty(destTemplatePath) || destPool == null) {
+            LOGGER.error("Unable to create template from direct download template file due to insufficient data");
+            throw new CloudRuntimeException("Unable to create template from direct download template file due to insufficient data");
+        }
+
+        LOGGER.debug("Create template from direct download template - file path: " + templateFilePath + ", dest path: " + destTemplatePath + ", format: " + format.toString());
+
+        File sourceFile = new File(templateFilePath);
+        if (!sourceFile.exists()) {
+            throw new CloudRuntimeException("Direct download template file " + templateFilePath + " does not exist on this host");
+        }
+
+        if (destTemplatePath == null || destTemplatePath.isEmpty()) {
+            LOGGER.error("Failed to create template, target template disk path not provided");
+            throw new CloudRuntimeException("Target template disk path not provided");
+        }
+
+        if (destPool.getType() != Storage.StoragePoolType.PowerFlex) {
+            throw new CloudRuntimeException("Unsupported storage pool type: " + destPool.getType().toString());
+        }
+
+        if (!Storage.ImageFormat.RAW.equals(format) && !Storage.ImageFormat.QCOW2.equals(format)) {
+            LOGGER.error("Failed to create template, unsupported template format: " + format.toString());
+            throw new CloudRuntimeException("Unsupported template format: " + format.toString());
+        }
+
+        String srcTemplateFilePath = templateFilePath;
+        KVMPhysicalDisk destDisk = null;
+        QemuImgFile srcFile = null;
+        QemuImgFile destFile = null;
+        try {
+            destDisk = destPool.getPhysicalDisk(destTemplatePath);
+            if (destDisk == null) {
+                LOGGER.error("Failed to find the disk: " + destTemplatePath + " of the storage pool: " + destPool.getUuid());
+                throw new CloudRuntimeException("Failed to find the disk: " + destTemplatePath + " of the storage pool: " + destPool.getUuid());
+            }
+
+            if (isTemplateExtractable(templateFilePath)) {
+                srcTemplateFilePath = sourceFile.getParent() + "/" + UUID.randomUUID().toString();
+                LOGGER.debug("Extract the downloaded template " + templateFilePath + " to " + srcTemplateFilePath);
+                String extractCommand = getExtractCommandForDownloadedFile(templateFilePath, srcTemplateFilePath);
+                Script.runSimpleBashScript(extractCommand);
+                Script.runSimpleBashScript("rm -f " + templateFilePath);
+            }
+
+            QemuImg.PhysicalDiskFormat srcFileFormat = QemuImg.PhysicalDiskFormat.RAW;
+            if (format == Storage.ImageFormat.RAW) {
+                srcFileFormat = QemuImg.PhysicalDiskFormat.RAW;
+            } else if (format == Storage.ImageFormat.QCOW2) {
+                srcFileFormat = QemuImg.PhysicalDiskFormat.QCOW2;
+            }
+
+            srcFile = new QemuImgFile(srcTemplateFilePath, srcFileFormat);
+            destFile = new QemuImgFile(destDisk.getPath(), destDisk.getFormat());
+
+            LOGGER.debug("Starting copy from source downloaded template " + srcFile.getFileName() + " to PowerFlex template volume: " + destDisk.getPath());
+            QemuImg qemu = new QemuImg(timeout);
+            qemu.convert(srcFile, destFile);
+            LOGGER.debug("Succesfully converted source downloaded template " + srcFile.getFileName() + " to PowerFlex template volume: " + destDisk.getPath());
+        }  catch (QemuImgException e) {
+            LOGGER.error("Failed to convert from " + srcFile.getFileName() + " to " + destFile.getFileName() + " the error was: " + e.getMessage());
+            destDisk = null;
+        } finally {
+            Script.runSimpleBashScript("rm -f " + srcTemplateFilePath);
+        }
+
+        return destDisk;
+    }
+
+    private boolean isTemplateExtractable(String templatePath) {
+        String type = Script.runSimpleBashScript("file " + templatePath + " | awk -F' ' '{print $2}'");
+        return type.equalsIgnoreCase("bzip2") || type.equalsIgnoreCase("gzip") || type.equalsIgnoreCase("zip");
+    }
+
+    private String getExtractCommandForDownloadedFile(String downloadedTemplateFile, String templateFile) {
+        if (downloadedTemplateFile.endsWith(".zip")) {
+            return "unzip -p " + downloadedTemplateFile + " | cat > " + templateFile;
+        } else if (downloadedTemplateFile.endsWith(".bz2")) {
+            return "bunzip2 -c " + downloadedTemplateFile + " > " + templateFile;
+        } else if (downloadedTemplateFile.endsWith(".gz")) {
+            return "gunzip -c " + downloadedTemplateFile + " > " + templateFile;
+        } else {
+            throw new CloudRuntimeException("Unable to extract template " + downloadedTemplateFile);
+        }
+    }
+}
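
The adaptor above locates a PowerFlex volume through the SDC device naming convention /dev/disk/by-id/emc-vol-<systemId>-<volumeId>, where CloudStack stores the volume path as "<volumeId>:<volumeName>". A standalone sketch of that mapping, using plain string handling in place of ScaleIOUtil (the IDs are the sample values used by the unit test added later in this commit):

    public class PowerFlexDevicePathSketch {
        // Mirrors the naming convention used by getPhysicalDisk() above.
        public static String devicePath(String cloudStackVolumePath, String systemId) {
            // CloudStack stores the path as "<volumeId>:<volumeName>"; only the volume id matters here.
            String volumeId = cloudStackVolumePath.split(":")[0];
            return "/dev/disk/by-id/emc-vol-" + systemId + "-" + volumeId;
        }

        public static void main(String[] args) {
            // Prints /dev/disk/by-id/emc-vol-218ce1797566a00f-6c3362b500000001
            System.out.println(devicePath("6c3362b500000001:vol-139-3d2c-12f0", "218ce1797566a00f"));
        }
    }
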
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStoragePool.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStoragePool.java
new file mode 100644
index 0000000..4ead92d
--- /dev/null
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/ScaleIOStoragePool.java
@@ -0,0 +1,181 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package com.cloud.hypervisor.kvm.storage;
+
+import java.util.List;
+import java.util.Map;
+
+import org.apache.cloudstack.utils.qemu.QemuImg;
+
+import com.cloud.storage.Storage;
+
+public class ScaleIOStoragePool implements KVMStoragePool {
+    private String uuid;
+    private String sourceHost;
+    private int sourcePort;
+    private String sourceDir;
+    private Storage.StoragePoolType storagePoolType;
+    private StorageAdaptor storageAdaptor;
+    private long capacity;
+    private long used;
+    private long available;
+
+    public ScaleIOStoragePool(String uuid, String host, int port, String path, Storage.StoragePoolType poolType, StorageAdaptor adaptor) {
+        this.uuid = uuid;
+        sourceHost = host;
+        sourcePort = port;
+        sourceDir = path;
+        storagePoolType = poolType;
+        storageAdaptor = adaptor;
+        capacity = 0;
+        used = 0;
+        available = 0;
+    }
+
+    @Override
+    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, QemuImg.PhysicalDiskFormat format, Storage.ProvisioningType provisioningType, long size) {
+        return null;
+    }
+
+    @Override
+    public KVMPhysicalDisk createPhysicalDisk(String volumeUuid, Storage.ProvisioningType provisioningType, long size) {
+        return null;
+    }
+
+    @Override
+    public boolean connectPhysicalDisk(String volumeUuid, Map<String, String> details) {
+        return storageAdaptor.connectPhysicalDisk(volumeUuid, this, details);
+    }
+
+    @Override
+    public KVMPhysicalDisk getPhysicalDisk(String volumeId) {
+        return storageAdaptor.getPhysicalDisk(volumeId, this);
+    }
+
+    @Override
+    public boolean disconnectPhysicalDisk(String volumeUuid) {
+        return storageAdaptor.disconnectPhysicalDisk(volumeUuid, this);
+    }
+
+    @Override
+    public boolean deletePhysicalDisk(String volumeUuid, Storage.ImageFormat format) {
+        return true;
+    }
+
+    @Override
+    public List<KVMPhysicalDisk> listPhysicalDisks() {
+        return null;
+    }
+
+    @Override
+    public String getUuid() {
+        return uuid;
+    }
+
+    public void setCapacity(long capacity) {
+        this.capacity = capacity;
+    }
+
+    @Override
+    public long getCapacity() {
+        return this.capacity;
+    }
+
+    public void setUsed(long used) {
+        this.used = used;
+    }
+
+    @Override
+    public long getUsed() {
+        return this.used;
+    }
+
+    public void setAvailable(long available) {
+        this.available = available;
+    }
+
+    @Override
+    public long getAvailable() {
+        return this.available;
+    }
+
+    @Override
+    public boolean refresh() {
+        return false;
+    }
+
+    @Override
+    public boolean isExternalSnapshot() {
+        return true;
+    }
+
+    @Override
+    public String getLocalPath() {
+        return null;
+    }
+
+    @Override
+    public String getSourceHost() {
+        return this.sourceHost;
+    }
+
+    @Override
+    public String getSourceDir() {
+        return this.sourceDir;
+    }
+
+    @Override
+    public int getSourcePort() {
+        return this.sourcePort;
+    }
+
+    @Override
+    public String getAuthUserName() {
+        return null;
+    }
+
+    @Override
+    public String getAuthSecret() {
+        return null;
+    }
+
+    @Override
+    public Storage.StoragePoolType getType() {
+        return storagePoolType;
+    }
+
+    @Override
+    public boolean delete() {
+        return false;
+    }
+
+    @Override
+    public QemuImg.PhysicalDiskFormat getDefaultFormat() {
+        return QemuImg.PhysicalDiskFormat.RAW;
+    }
+
+    @Override
+    public boolean createFolder(String path) {
+        return false;
+    }
+
+    @Override
+    public boolean supportsConfigDriveIso() {
+        return false;
+    }
+}
diff --git a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/StorageAdaptor.java b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/StorageAdaptor.java
index 99f2876..570c207 100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/StorageAdaptor.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/storage/StorageAdaptor.java
@@ -86,7 +86,8 @@ public interface StorageAdaptor {
      * Create physical disk on Primary Storage from direct download template on the host (in temporary location)
      * @param templateFilePath
      * @param destPool
-     * @param isIso
+     * @param format
+     * @param timeout
      */
-    KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, KVMStoragePool destPool, boolean isIso);
+    KVMPhysicalDisk createTemplateFromDirectDownloadFile(String templateFilePath, String destTemplatePath, KVMStoragePool destPool, Storage.ImageFormat format, int timeout);
 }
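
To illustrate the revised contract, a hedged sketch of how a caller (KVMStorageProcessor reaches it through createPhysicalDiskFromDirectDownloadTemplate above) might seed a QCOW2 direct-download template into a PowerFlex pool; the file path, destination volume path and timeout below are hypothetical:

    import com.cloud.hypervisor.kvm.storage.KVMPhysicalDisk;
    import com.cloud.hypervisor.kvm.storage.KVMStoragePool;
    import com.cloud.hypervisor.kvm.storage.ScaleIOStorageAdaptor;
    import com.cloud.storage.Storage;

    public class DirectDownloadTemplateSketch {
        public static KVMPhysicalDisk seed(ScaleIOStorageAdaptor adaptor, KVMStoragePool powerFlexPool) {
            // Downloaded file in the host's temporary location (hypothetical path).
            String templateFilePath = "/var/lib/libvirt/images/e7a2c1d0.qcow2";
            // Pre-created PowerFlex template volume, in "<volumeId>:<volumeName>" form (hypothetical id).
            String destTemplatePath = "6c3362b500000002:tmpl-201-9f1a";
            // Format comes from the registered template; timeout is the copy wait in milliseconds.
            return adaptor.createTemplateFromDirectDownloadFile(templateFilePath, destTemplatePath,
                    powerFlexPool, Storage.ImageFormat.QCOW2, 10800 * 1000);
        }
    }
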
diff --git a/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/storage/ScaleIOStoragePoolTest.java b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/storage/ScaleIOStoragePoolTest.java
new file mode 100644
index 0000000..4f18c38
--- /dev/null
+++ b/plugins/hypervisors/kvm/src/test/java/com/cloud/hypervisor/kvm/storage/ScaleIOStoragePoolTest.java
@@ -0,0 +1,155 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package com.cloud.hypervisor.kvm.storage;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.when;
+
+import java.io.File;
+import java.io.FileFilter;
+
+import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
+import org.apache.cloudstack.utils.qemu.QemuImg;
+import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.mockito.Mock;
+import org.powermock.api.mockito.PowerMockito;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import com.cloud.storage.Storage.StoragePoolType;
+import com.cloud.storage.StorageLayer;
+
+@PrepareForTest(ScaleIOUtil.class)
+@RunWith(PowerMockRunner.class)
+public class ScaleIOStoragePoolTest {
+
+    ScaleIOStoragePool pool;
+
+    StorageAdaptor adapter;
+
+    @Mock
+    StorageLayer storageLayer;
+
+    @Before
+    public void setUp() throws Exception {
+        final String uuid = "345fc603-2d7e-47d2-b719-a0110b3732e6";
+        final StoragePoolType type = StoragePoolType.PowerFlex;
+
+        adapter = spy(new ScaleIOStorageAdaptor(storageLayer));
+        pool = new ScaleIOStoragePool(uuid, "192.168.1.19", 443, "a519be2f00000000", type, adapter);
+    }
+
+    @After
+    public void tearDown() throws Exception {
+    }
+
+    @Test
+    public void testAttributes() {
+        assertEquals(pool.getCapacity(), 0);
+        assertEquals(pool.getUsed(), 0);
+        assertEquals(pool.getAvailable(), 0);
+        assertEquals(pool.getUuid(), "345fc603-2d7e-47d2-b719-a0110b3732e6");
+        assertEquals(pool.getSourceHost(), "192.168.1.19");
+        assertEquals(pool.getSourcePort(), 443);
+        assertEquals(pool.getSourceDir(), "a519be2f00000000");
+        assertEquals(pool.getType(), StoragePoolType.PowerFlex);
+
+        pool.setCapacity(131072);
+        pool.setUsed(24576);
+        pool.setAvailable(106496);
+
+        assertEquals(pool.getCapacity(), 131072);
+        assertEquals(pool.getUsed(), 24576);
+        assertEquals(pool.getAvailable(), 106496);
+    }
+
+    @Test
+    public void testDefaults() {
+        assertEquals(pool.getDefaultFormat(), PhysicalDiskFormat.RAW);
+        assertEquals(pool.getType(), StoragePoolType.PowerFlex);
+
+        assertNull(pool.getAuthUserName());
+        assertNull(pool.getAuthSecret());
+
+        Assert.assertFalse(pool.supportsConfigDriveIso());
+        assertTrue(pool.isExternalSnapshot());
+    }
+
+    public void testGetPhysicalDiskWithWildcardFileFilter() throws Exception {
+        final String volumePath = "6c3362b500000001:vol-139-3d2c-12f0";
+        final String systemId = "218ce1797566a00f";
+
+        File dir = PowerMockito.mock(File.class);
+        PowerMockito.whenNew(File.class).withAnyArguments().thenReturn(dir);
+
+        // TODO: Mock file in dir
+        File[] files = new File[1];
+        String volumeId = ScaleIOUtil.getVolumePath(volumePath);
+        String diskFilePath = ScaleIOUtil.DISK_PATH + File.separator + ScaleIOUtil.DISK_NAME_PREFIX + systemId + "-" + volumeId;
+        files[0] = new File(diskFilePath);
+        PowerMockito.when(dir.listFiles(any(FileFilter.class))).thenReturn(files);
+
+        KVMPhysicalDisk disk = adapter.getPhysicalDisk(volumePath, pool);
+        assertNull(disk);
+    }
+
+    @Test
+    public void testGetPhysicalDiskWithSystemId() throws Exception {
+        final String volumePath = "6c3362b500000001:vol-139-3d2c-12f0";
+        final String volumeId = ScaleIOUtil.getVolumePath(volumePath);
+        final String systemId = "218ce1797566a00f";
+        PowerMockito.mockStatic(ScaleIOUtil.class);
+        when(ScaleIOUtil.getSystemIdForVolume(volumeId)).thenReturn(systemId);
+
+        // TODO: Mock file exists
+        File file = PowerMockito.mock(File.class);
+        PowerMockito.whenNew(File.class).withAnyArguments().thenReturn(file);
+        PowerMockito.when(file.exists()).thenReturn(true);
+
+        KVMPhysicalDisk disk = adapter.getPhysicalDisk(volumePath, pool);
+        assertNull(disk);
+    }
+
+    @Test
+    public void testConnectPhysicalDisk() {
+        final String volumePath = "6c3362b500000001:vol-139-3d2c-12f0";
+        final String volumeId = ScaleIOUtil.getVolumePath(volumePath);
+        final String systemId = "218ce1797566a00f";
+        final String diskFilePath = ScaleIOUtil.DISK_PATH + File.separator + ScaleIOUtil.DISK_NAME_PREFIX + systemId + "-" + volumeId;
+        KVMPhysicalDisk disk = new KVMPhysicalDisk(diskFilePath, volumePath, pool);
+        disk.setFormat(QemuImg.PhysicalDiskFormat.RAW);
+        disk.setSize(8192);
+        disk.setVirtualSize(8192);
+
+        assertEquals(disk.getPath(), "/dev/disk/by-id/emc-vol-218ce1797566a00f-6c3362b500000001");
+
+        when(adapter.getPhysicalDisk(volumeId, pool)).thenReturn(disk);
+
+        final boolean result = adapter.connectPhysicalDisk(volumePath, pool, null);
+        assertTrue(result);
+    }
+}
\ No newline at end of file
diff --git a/plugins/pom.xml b/plugins/pom.xml
index 4dcc3f9..29cfbc1 100755
--- a/plugins/pom.xml
+++ b/plugins/pom.xml
@@ -121,6 +121,7 @@
         <module>storage/volume/nexenta</module>
         <module>storage/volume/sample</module>
         <module>storage/volume/solidfire</module>
+        <module>storage/volume/scaleio</module>
 
         <module>storage-allocators/random</module>
 
diff --git a/plugins/storage/volume/cloudbyte/src/main/java/org/apache/cloudstack/storage/datastore/driver/ElastistorPrimaryDataStoreDriver.java b/plugins/storage/volume/cloudbyte/src/main/java/org/apache/cloudstack/storage/datastore/driver/ElastistorPrimaryDataStoreDriver.java
index 89e8c4f..f9e6146 100644
--- a/plugins/storage/volume/cloudbyte/src/main/java/org/apache/cloudstack/storage/datastore/driver/ElastistorPrimaryDataStoreDriver.java
+++ b/plugins/storage/volume/cloudbyte/src/main/java/org/apache/cloudstack/storage/datastore/driver/ElastistorPrimaryDataStoreDriver.java
@@ -48,6 +48,7 @@ import com.cloud.agent.api.Answer;
 import com.cloud.agent.api.to.DataObjectType;
 import com.cloud.agent.api.to.DataStoreTO;
 import com.cloud.agent.api.to.DataTO;
+import com.cloud.host.Host;
 import com.cloud.storage.DiskOfferingVO;
 import com.cloud.storage.ResizeVolumePayload;
 import com.cloud.storage.Storage.StoragePoolType;
@@ -59,6 +60,7 @@ import com.cloud.storage.dao.DiskOfferingDao;
 import com.cloud.storage.dao.VolumeDao;
 import com.cloud.storage.dao.VolumeDetailsDao;
 import com.cloud.user.AccountManager;
+import com.cloud.utils.Pair;
 import com.cloud.utils.exception.CloudRuntimeException;
 
 /**
@@ -259,7 +261,11 @@ public class ElastistorPrimaryDataStoreDriver extends CloudStackPrimaryDataStore
     @Override
     public void copyAsync(DataObject srcdata, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback) {
         throw new UnsupportedOperationException();
+    }
 
+    @Override
+    public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
+        throw new UnsupportedOperationException();
     }
 
     @Override
@@ -409,4 +415,28 @@ public class ElastistorPrimaryDataStoreDriver extends CloudStackPrimaryDataStore
         return mapCapabilities;
     }
 
+    @Override
+    public boolean canProvideStorageStats() {
+        return false;
+    }
+
+    @Override
+    public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
+        return null;
+    }
+
+    @Override
+    public boolean canProvideVolumeStats() {
+        return false;
+    }
+
+    @Override
+    public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId) {
+        return null;
+    }
+
+    @Override
+    public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
+        return true;
+    }
 }
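
The same overrides are added to each primary storage driver below; taken together they show the methods the PrimaryDataStoreDriver interface now expects. A summary sketch of just those additions, with signatures inferred from the overrides in this commit (not the full interface):

    import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult;
    import org.apache.cloudstack.engine.subsystem.api.storage.DataObject;
    import org.apache.cloudstack.framework.async.AsyncCompletionCallback;

    import com.cloud.host.Host;
    import com.cloud.storage.StoragePool;
    import com.cloud.utils.Pair;

    public interface PrimaryDataStoreDriverAdditionsSketch {
        // Host-aware variant of copyAsync; the destination host is supplied by the caller.
        void copyAsync(DataObject srcData, DataObject destData, Host destHost,
                AsyncCompletionCallback<CopyCommandResult> callback);

        // Whether the driver reports pool stats itself, and the stats as a pair of longs.
        boolean canProvideStorageStats();
        Pair<Long, Long> getStorageStats(StoragePool storagePool);

        // Whether the driver reports per-volume stats, and the stats as a pair of longs.
        boolean canProvideVolumeStats();
        Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId);

        // Whether a given host can reach the pool (e.g. the SDC is connected for PowerFlex).
        boolean canHostAccessStoragePool(Host host, StoragePool pool);
    }
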
diff --git a/plugins/storage/volume/datera/src/main/java/org/apache/cloudstack/storage/datastore/driver/DateraPrimaryDataStoreDriver.java b/plugins/storage/volume/datera/src/main/java/org/apache/cloudstack/storage/datastore/driver/DateraPrimaryDataStoreDriver.java
index fa1f3d4..49559d2 100644
--- a/plugins/storage/volume/datera/src/main/java/org/apache/cloudstack/storage/datastore/driver/DateraPrimaryDataStoreDriver.java
+++ b/plugins/storage/volume/datera/src/main/java/org/apache/cloudstack/storage/datastore/driver/DateraPrimaryDataStoreDriver.java
@@ -17,6 +17,37 @@
 
 package org.apache.cloudstack.storage.datastore.driver;
 
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import javax.inject.Inject;
+
+import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult;
+import org.apache.cloudstack.engine.subsystem.api.storage.CreateCmdResult;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataObject;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreCapabilities;
+import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreDriver;
+import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.TemplateInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.VolumeDataFactory;
+import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
+import org.apache.cloudstack.framework.async.AsyncCompletionCallback;
+import org.apache.cloudstack.storage.command.CommandResult;
+import org.apache.cloudstack.storage.command.CreateObjectAnswer;
+import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailVO;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
+import org.apache.cloudstack.storage.datastore.util.DateraObject;
+import org.apache.cloudstack.storage.datastore.util.DateraUtil;
+import org.apache.cloudstack.storage.to.SnapshotObjectTO;
+import org.apache.log4j.Logger;
+
 import com.cloud.agent.api.Answer;
 import com.cloud.agent.api.to.DataObjectType;
 import com.cloud.agent.api.to.DataStoreTO;
@@ -44,40 +75,12 @@ import com.cloud.storage.dao.SnapshotDetailsVO;
 import com.cloud.storage.dao.VMTemplatePoolDao;
 import com.cloud.storage.dao.VolumeDao;
 import com.cloud.storage.dao.VolumeDetailsDao;
+import com.cloud.utils.Pair;
 import com.cloud.utils.StringUtils;
 import com.cloud.utils.db.GlobalLock;
 import com.cloud.utils.exception.CloudRuntimeException;
 import com.google.common.base.Preconditions;
 import com.google.common.primitives.Ints;
-import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
-import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult;
-import org.apache.cloudstack.engine.subsystem.api.storage.CreateCmdResult;
-import org.apache.cloudstack.engine.subsystem.api.storage.DataObject;
-import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
-import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreCapabilities;
-import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreDriver;
-import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
-import org.apache.cloudstack.engine.subsystem.api.storage.TemplateInfo;
-import org.apache.cloudstack.engine.subsystem.api.storage.VolumeDataFactory;
-import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
-import org.apache.cloudstack.framework.async.AsyncCompletionCallback;
-import org.apache.cloudstack.storage.command.CommandResult;
-import org.apache.cloudstack.storage.command.CreateObjectAnswer;
-import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
-import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailVO;
-import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
-import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
-import org.apache.cloudstack.storage.datastore.util.DateraObject;
-import org.apache.cloudstack.storage.datastore.util.DateraUtil;
-import org.apache.cloudstack.storage.to.SnapshotObjectTO;
-import org.apache.log4j.Logger;
-
-import javax.inject.Inject;
-import java.io.UnsupportedEncodingException;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
 
 import static com.cloud.utils.NumbersUtil.toHumanReadableSize;
 
@@ -1255,6 +1258,12 @@ public class DateraPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
     }
 
     @Override
+    public void copyAsync(DataObject srcData, DataObject destData,
+            Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
+        throw new UnsupportedOperationException();
+    }
+
+    @Override
     public boolean canCopy(DataObject srcData, DataObject destData) {
         return false;
     }
@@ -1825,6 +1834,30 @@ public class DateraPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
     @Override
     public void handleQualityOfServiceForVolumeMigration(VolumeInfo volumeInfo,
             QualityOfServiceState qualityOfServiceState) {
+    }
+
+    @Override
+    public boolean canProvideStorageStats() {
+        return false;
+    }
+
+    @Override
+    public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
+        return null;
+    }
 
+    @Override
+    public boolean canProvideVolumeStats() {
+        return false;
+    }
+
+    @Override
+    public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId) {
+        return null;
+    }
+
+    @Override
+    public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
+        return true;
     }
 }
diff --git a/plugins/storage/volume/default/src/main/java/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java b/plugins/storage/volume/default/src/main/java/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java
index 6ce8741..3cbcc85 100644
--- a/plugins/storage/volume/default/src/main/java/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java
+++ b/plugins/storage/volume/default/src/main/java/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java
@@ -76,6 +76,7 @@ import com.cloud.storage.dao.VMTemplateDao;
 import com.cloud.storage.dao.VolumeDao;
 import com.cloud.storage.snapshot.SnapshotManager;
 import com.cloud.template.TemplateManager;
+import com.cloud.utils.Pair;
 import com.cloud.vm.dao.VMInstanceDao;
 
 import static com.cloud.utils.NumbersUtil.toHumanReadableSize;
@@ -278,6 +279,11 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri
     }
 
     @Override
+    public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
+        copyAsync(srcData, destData, callback);
+    }
+
+    @Override
     public boolean canCopy(DataObject srcData, DataObject destData) {
         //BUG fix for CLOUDSTACK-4618
         DataStore store = destData.getDataStore();
@@ -389,4 +395,29 @@ public class CloudStackPrimaryDataStoreDriverImpl implements PrimaryDataStoreDri
 
     @Override
     public void handleQualityOfServiceForVolumeMigration(VolumeInfo volumeInfo, QualityOfServiceState qualityOfServiceState) {}
+
+    @Override
+    public boolean canProvideStorageStats() {
+        return false;
+    }
+
+    @Override
+    public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
+        return null;
+    }
+
+    @Override
+    public boolean canProvideVolumeStats() {
+        return false;
+    }
+
+    @Override
+    public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId) {
+        return null;
+    }
+
+    @Override
+    public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
+        return true;
+    }
 }
diff --git a/plugins/storage/volume/nexenta/src/main/java/org/apache/cloudstack/storage/datastore/driver/NexentaPrimaryDataStoreDriver.java b/plugins/storage/volume/nexenta/src/main/java/org/apache/cloudstack/storage/datastore/driver/NexentaPrimaryDataStoreDriver.java
index d59fce4..92f8938 100644
--- a/plugins/storage/volume/nexenta/src/main/java/org/apache/cloudstack/storage/datastore/driver/NexentaPrimaryDataStoreDriver.java
+++ b/plugins/storage/volume/nexenta/src/main/java/org/apache/cloudstack/storage/datastore/driver/NexentaPrimaryDataStoreDriver.java
@@ -53,6 +53,7 @@ import com.cloud.storage.StoragePool;
 import com.cloud.storage.VolumeVO;
 import com.cloud.storage.dao.VolumeDao;
 import com.cloud.user.dao.AccountDao;
+import com.cloud.utils.Pair;
 
 public class NexentaPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
     private static final Logger logger = Logger.getLogger(NexentaPrimaryDataStoreDriver.class);
@@ -200,6 +201,10 @@ public class NexentaPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
     public void copyAsync(DataObject srcdata, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback) {}
 
     @Override
+    public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
+    }
+
+    @Override
     public boolean canCopy(DataObject srcData, DataObject destData) {
         return false;
     }
@@ -209,4 +214,29 @@ public class NexentaPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
 
     @Override
     public void handleQualityOfServiceForVolumeMigration(VolumeInfo volumeInfo, QualityOfServiceState qualityOfServiceState) {}
+
+    @Override
+    public boolean canProvideStorageStats() {
+        return false;
+    }
+
+    @Override
+    public boolean canProvideVolumeStats() {
+        return false;
+    }
+
+    @Override
+    public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId) {
+        return null;
+    }
+
+    @Override
+    public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
+        return null;
+    }
+
+    @Override
+    public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
+        return true;
+    }
 }
diff --git a/plugins/storage/volume/sample/src/main/java/org/apache/cloudstack/storage/datastore/driver/SamplePrimaryDataStoreDriverImpl.java b/plugins/storage/volume/sample/src/main/java/org/apache/cloudstack/storage/datastore/driver/SamplePrimaryDataStoreDriverImpl.java
index fc0186f..a416277 100644
--- a/plugins/storage/volume/sample/src/main/java/org/apache/cloudstack/storage/datastore/driver/SamplePrimaryDataStoreDriverImpl.java
+++ b/plugins/storage/volume/sample/src/main/java/org/apache/cloudstack/storage/datastore/driver/SamplePrimaryDataStoreDriverImpl.java
@@ -46,6 +46,7 @@ import com.cloud.agent.api.to.DataTO;
 import com.cloud.host.Host;
 import com.cloud.storage.StoragePool;
 import com.cloud.storage.dao.StoragePoolHostDao;
+import com.cloud.utils.Pair;
 import com.cloud.utils.exception.CloudRuntimeException;
 
 public class SamplePrimaryDataStoreDriverImpl implements PrimaryDataStoreDriver {
@@ -225,6 +226,10 @@ public class SamplePrimaryDataStoreDriverImpl implements PrimaryDataStoreDriver
     }
 
     @Override
+    public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
+    }
+
+    @Override
     public void resize(DataObject data, AsyncCompletionCallback<CreateCmdResult> callback) {
     }
 
@@ -236,4 +241,28 @@ public class SamplePrimaryDataStoreDriverImpl implements PrimaryDataStoreDriver
     public void takeSnapshot(SnapshotInfo snapshot, AsyncCompletionCallback<CreateCmdResult> callback) {
     }
 
+    @Override
+    public boolean canProvideStorageStats() {
+        return false;
+    }
+
+    @Override
+    public boolean canProvideVolumeStats() {
+        return false;
+    }
+
+    @Override
+    public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumeId) {
+        return null;
+    }
+
+    @Override
+    public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
+        return null;
+    }
+
+    @Override
+    public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
+        return true;
+    }
 }
diff --git a/engine/storage/snapshot/pom.xml b/plugins/storage/volume/scaleio/pom.xml
similarity index 60%
copy from engine/storage/snapshot/pom.xml
copy to plugins/storage/volume/scaleio/pom.xml
index 40e513b..e95087e 100644
--- a/engine/storage/snapshot/pom.xml
+++ b/plugins/storage/volume/scaleio/pom.xml
@@ -19,34 +19,17 @@
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
     <modelVersion>4.0.0</modelVersion>
-    <artifactId>cloud-engine-storage-snapshot</artifactId>
-    <name>Apache CloudStack Engine Storage Snapshot Component</name>
+    <artifactId>cloud-plugin-storage-volume-scaleio</artifactId>
+    <name>Apache CloudStack Plugin - Storage Volume Dell-EMC ScaleIO/PowerFlex Provider</name>
     <parent>
         <groupId>org.apache.cloudstack</groupId>
-        <artifactId>cloud-engine</artifactId>
+        <artifactId>cloudstack-plugins</artifactId>
         <version>4.16.0.0-SNAPSHOT</version>
-        <relativePath>../../pom.xml</relativePath>
+        <relativePath>../../../pom.xml</relativePath>
     </parent>
     <dependencies>
         <dependency>
             <groupId>org.apache.cloudstack</groupId>
-            <artifactId>cloud-engine-storage</artifactId>
-            <version>${project.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.cloudstack</groupId>
-            <artifactId>cloud-engine-api</artifactId>
-            <version>${project.version}</version>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.cloudstack</groupId>
-            <artifactId>cloud-api</artifactId>
-            <version>${project.version}</version>
-            <type>test-jar</type>
-            <scope>test</scope>
-        </dependency>
-        <dependency>
-            <groupId>org.apache.cloudstack</groupId>
             <artifactId>cloud-engine-storage-volume</artifactId>
             <version>${project.version}</version>
         </dependency>
@@ -54,17 +37,10 @@
     <build>
         <plugins>
             <plugin>
-                <artifactId>maven-compiler-plugin</artifactId>
-                <executions>
-                    <execution>
-                        <goals>
-                            <goal>testCompile</goal>
-                        </goals>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
                 <artifactId>maven-surefire-plugin</artifactId>
+                <configuration>
+                    <skipTests>true</skipTests>
+                </configuration>
                 <executions>
                     <execution>
                         <phase>integration-test</phase>
diff --git a/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/ProtectionDomain.java
similarity index 51%
copy from core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
copy to plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/ProtectionDomain.java
index 779a0f4..5d260e0 100644
--- a/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/ProtectionDomain.java
@@ -15,24 +15,43 @@
 // specific language governing permissions and limitations
 // under the License.
 
-package com.cloud.agent.api.routing;
+package org.apache.cloudstack.storage.datastore.api;
 
-public class GetRouterMonitorResultsCommand extends NetworkElementCommand {
-    private boolean performFreshChecks;
+public class ProtectionDomain {
+    String id;
+    String name;
+    String protectionDomainState;
+    String systemId;
 
-    protected GetRouterMonitorResultsCommand() {
+    public String getId() {
+        return id;
     }
 
-    public GetRouterMonitorResultsCommand(boolean performFreshChecks) {
-        this.performFreshChecks = performFreshChecks;
+    public void setId(String id) {
+        this.id = id;
     }
 
-    @Override
-    public boolean isQuery() {
-        return true;
+    public String getName() {
+        return name;
     }
 
-    public boolean shouldPerformFreshChecks() {
-        return performFreshChecks;
+    public void setName(String name) {
+        this.name = name;
     }
-}
\ No newline at end of file
+
+    public String getProtectionDomainState() {
+        return protectionDomainState;
+    }
+
+    public void setProtectionDomainState(String protectionDomainState) {
+        this.protectionDomainState = protectionDomainState;
+    }
+
+    public String getSystemId() {
+        return systemId;
+    }
+
+    public void setSystemId(String systemId) {
+        this.systemId = systemId;
+    }
+}
diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/Sdc.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/Sdc.java
new file mode 100644
index 0000000..71e4077
--- /dev/null
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/Sdc.java
@@ -0,0 +1,138 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.cloudstack.storage.datastore.api;
+
+public class Sdc {
+    String id;
+    String name;
+    String mdmConnectionState;
+    Boolean sdcApproved;
+    String perfProfile;
+    String sdcGuid;
+    String sdcIp;
+    String[] sdcIps;
+    String systemId;
+    String osType;
+    String kernelVersion;
+    String softwareVersionInfo;
+    String versionInfo;
+
+    public String getId() {
+        return id;
+    }
+
+    public void setId(String id) {
+        this.id = id;
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    public String getMdmConnectionState() {
+        return mdmConnectionState;
+    }
+
+    public void setMdmConnectionState(String mdmConnectionState) {
+        this.mdmConnectionState = mdmConnectionState;
+    }
+
+    public Boolean getSdcApproved() {
+        return sdcApproved;
+    }
+
+    public void setSdcApproved(Boolean sdcApproved) {
+        this.sdcApproved = sdcApproved;
+    }
+
+    public String getPerfProfile() {
+        return perfProfile;
+    }
+
+    public void setPerfProfile(String perfProfile) {
+        this.perfProfile = perfProfile;
+    }
+
+    public String getSdcGuid() {
+        return sdcGuid;
+    }
+
+    public void setSdcGuid(String sdcGuid) {
+        this.sdcGuid = sdcGuid;
+    }
+
+    public String getSdcIp() {
+        return sdcIp;
+    }
+
+    public void setSdcIp(String sdcIp) {
+        this.sdcIp = sdcIp;
+    }
+
+    public String[] getSdcIps() {
+        return sdcIps;
+    }
+
+    public void setSdcIps(String[] sdcIps) {
+        this.sdcIps = sdcIps;
+    }
+
+    public String getSystemId() {
+        return systemId;
+    }
+
+    public void setSystemId(String systemId) {
+        this.systemId = systemId;
+    }
+
+    public String getOsType() {
+        return osType;
+    }
+
+    public void setOsType(String osType) {
+        this.osType = osType;
+    }
+
+    public String getKernelVersion() {
+        return kernelVersion;
+    }
+
+    public void setKernelVersion(String kernelVersion) {
+        this.kernelVersion = kernelVersion;
+    }
+
+    public String getSoftwareVersionInfo() {
+        return softwareVersionInfo;
+    }
+
+    public void setSoftwareVersionInfo(String softwareVersionInfo) {
+        this.softwareVersionInfo = softwareVersionInfo;
+    }
+
+    public String getVersionInfo() {
+        return versionInfo;
+    }
+
+    public void setVersionInfo(String versionInfo) {
+        this.versionInfo = versionInfo;
+    }
+}
diff --git a/core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SdcMappingInfo.java
similarity index 68%
copy from core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java
copy to plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SdcMappingInfo.java
index ed49997..1b3436a 100644
--- a/core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SdcMappingInfo.java
@@ -1,4 +1,3 @@
-//
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
@@ -15,28 +14,26 @@
 // KIND, either express or implied.  See the License for the
 // specific language governing permissions and limitations
 // under the License.
-//
-
-package org.apache.cloudstack.agent.directdownload;
 
-import com.cloud.agent.api.Command;
+package org.apache.cloudstack.storage.datastore.api;
 
-public class CheckUrlCommand extends Command {
+public class SdcMappingInfo {
+    String sdcId;
+    String sdcIp;
 
-    private String url;
-
-    public String getUrl() {
-        return url;
+    public String getSdcId() {
+        return sdcId;
     }
 
-    public CheckUrlCommand(final String url) {
-        super();
-        this.url = url;
+    public void setSdcId(String sdcId) {
+        this.sdcId = sdcId;
     }
 
-    @Override
-    public boolean executeInSequence() {
-        return false;
+    public String getSdcIp() {
+        return sdcIp;
     }
 
+    public void setSdcIp(String sdcIp) {
+        this.sdcIp = sdcIp;
+    }
 }
diff --git a/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SnapshotDef.java
similarity index 54%
copy from core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
copy to plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SnapshotDef.java
index 779a0f4..fa97360 100644
--- a/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SnapshotDef.java
@@ -15,24 +15,34 @@
 // specific language governing permissions and limitations
 // under the License.
 
-package com.cloud.agent.api.routing;
+package org.apache.cloudstack.storage.datastore.api;
 
-public class GetRouterMonitorResultsCommand extends NetworkElementCommand {
-    private boolean performFreshChecks;
+public class SnapshotDef {
+    String volumeId;
+    String snapshotName;
+    String allowOnExtManagedVol;
 
-    protected GetRouterMonitorResultsCommand() {
+    public String getVolumeId() {
+        return volumeId;
     }
 
-    public GetRouterMonitorResultsCommand(boolean performFreshChecks) {
-        this.performFreshChecks = performFreshChecks;
+    public void setVolumeId(String volumeId) {
+        this.volumeId = volumeId;
     }
 
-    @Override
-    public boolean isQuery() {
-        return true;
+    public String getSnapshotName() {
+        return snapshotName;
     }
 
-    public boolean shouldPerformFreshChecks() {
-        return performFreshChecks;
+    public void setSnapshotName(String snapshotName) {
+        this.snapshotName = snapshotName;
     }
-}
\ No newline at end of file
+
+    public String getAllowOnExtManagedVol() {
+        return allowOnExtManagedVol;
+    }
+
+    public void setAllowOnExtManagedVol(String allowOnExtManagedVol) {
+        this.allowOnExtManagedVol = allowOnExtManagedVol;
+    }
+}
diff --git a/core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SnapshotDefs.java
similarity index 67%
copy from core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java
copy to plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SnapshotDefs.java
index ed49997..a86ae30 100644
--- a/core/src/main/java/org/apache/cloudstack/agent/directdownload/CheckUrlCommand.java
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SnapshotDefs.java
@@ -1,4 +1,3 @@
-//
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
@@ -15,28 +14,17 @@
 // KIND, either express or implied.  See the License for the
 // specific language governing permissions and limitations
 // under the License.
-//
-
-package org.apache.cloudstack.agent.directdownload;
-
-import com.cloud.agent.api.Command;
 
-public class CheckUrlCommand extends Command {
+package org.apache.cloudstack.storage.datastore.api;
 
-    private String url;
+public class SnapshotDefs {
+    SnapshotDef[] snapshotDefs;
 
-    public String getUrl() {
-        return url;
+    public SnapshotDef[] getSnapshotDefs() {
+        return snapshotDefs;
     }
 
-    public CheckUrlCommand(final String url) {
-        super();
-        this.url = url;
+    public void setSnapshotDefs(SnapshotDef[] snapshotDefs) {
+        this.snapshotDefs = snapshotDefs;
     }
-
-    @Override
-    public boolean executeInSequence() {
-        return false;
-    }
-
 }
diff --git a/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SnapshotGroup.java
similarity index 56%
copy from core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
copy to plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SnapshotGroup.java
index 779a0f4..bef2cee 100644
--- a/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/SnapshotGroup.java
@@ -15,24 +15,32 @@
 // specific language governing permissions and limitations
 // under the License.
 
-package com.cloud.agent.api.routing;
+package org.apache.cloudstack.storage.datastore.api;
 
-public class GetRouterMonitorResultsCommand extends NetworkElementCommand {
-    private boolean performFreshChecks;
+import java.util.Arrays;
+import java.util.List;
 
-    protected GetRouterMonitorResultsCommand() {
+public class SnapshotGroup {
+    String snapshotGroupId;
+    String[] volumeIdList;
+
+    public String getSnapshotGroupId() {
+        return snapshotGroupId;
+    }
+
+    public void setSnapshotGroupId(String snapshotGroupId) {
+        this.snapshotGroupId = snapshotGroupId;
     }
 
-    public GetRouterMonitorResultsCommand(boolean performFreshChecks) {
-        this.performFreshChecks = performFreshChecks;
+    public List<String> getVolumeIds() {
+        return Arrays.asList(volumeIdList);
     }
 
-    @Override
-    public boolean isQuery() {
-        return true;
+    public String[] getVolumeIdList() {
+        return volumeIdList;
     }
 
-    public boolean shouldPerformFreshChecks() {
-        return performFreshChecks;
+    public void setVolumeIdList(String[] volumeIdList) {
+        this.volumeIdList = volumeIdList;
     }
-}
\ No newline at end of file
+}
diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/StoragePool.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/StoragePool.java
new file mode 100644
index 0000000..df903bb
--- /dev/null
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/StoragePool.java
@@ -0,0 +1,75 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.cloudstack.storage.datastore.api;
+
+public class StoragePool {
+    String id;
+    String name;
+    String mediaType;
+    String protectionDomainId;
+    String systemId;
+    StoragePoolStatistics statistics;
+
+    public String getId() {
+        return id;
+    }
+
+    public void setId(String id) {
+        this.id = id;
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    public String getMediaType() {
+        return mediaType;
+    }
+
+    public void setMediaType(String mediaType) {
+        this.mediaType = mediaType;
+    }
+
+    public String getProtectionDomainId() {
+        return protectionDomainId;
+    }
+
+    public void setProtectionDomainId(String protectionDomainId) {
+        this.protectionDomainId = protectionDomainId;
+    }
+
+    public String getSystemId() {
+        return systemId;
+    }
+
+    public void setSystemId(String systemId) {
+        this.systemId = systemId;
+    }
+
+    public StoragePoolStatistics getStatistics() {
+        return statistics;
+    }
+
+    public void setStatistics(StoragePoolStatistics statistics) {
+        this.statistics = statistics;
+    }
+}
diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/StoragePoolStatistics.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/StoragePoolStatistics.java
new file mode 100644
index 0000000..599aa5c
--- /dev/null
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/StoragePoolStatistics.java
@@ -0,0 +1,85 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.cloudstack.storage.datastore.api;
+
+import com.google.common.base.Strings;
+
+public class StoragePoolStatistics {
+    String maxCapacityInKb; // total capacity
+    String spareCapacityInKb; // spare capacity, space not used for volume creation/allocation
+    String netCapacityInUseInKb; // user data capacity in use
+    String netUnusedCapacityInKb; // capacity available for volume creation (volume space to write)
+
+    public Long getMaxCapacityInKb() {
+        if (Strings.isNullOrEmpty(maxCapacityInKb)) {
+            return Long.valueOf(0);
+        }
+        return Long.valueOf(maxCapacityInKb);
+    }
+
+    public void setMaxCapacityInKb(String maxCapacityInKb) {
+        this.maxCapacityInKb = maxCapacityInKb;
+    }
+
+    public Long getSpareCapacityInKb() {
+        if (Strings.isNullOrEmpty(spareCapacityInKb)) {
+            return Long.valueOf(0);
+        }
+        return Long.valueOf(spareCapacityInKb);
+    }
+
+    public void setSpareCapacityInKb(String spareCapacityInKb) {
+        this.spareCapacityInKb = spareCapacityInKb;
+    }
+
+    public Long getNetCapacityInUseInKb() {
+        if (Strings.isNullOrEmpty(netCapacityInUseInKb)) {
+            return Long.valueOf(0);
+        }
+        return Long.valueOf(netCapacityInUseInKb);
+    }
+
+    public void setNetCapacityInUseInKb(String netCapacityInUseInKb) {
+        this.netCapacityInUseInKb = netCapacityInUseInKb;
+    }
+
+    public Long getNetUnusedCapacityInKb() {
+        if (Strings.isNullOrEmpty(netUnusedCapacityInKb)) {
+            return Long.valueOf(0);
+        }
+        return Long.valueOf(netUnusedCapacityInKb);
+    }
+
+    public Long getNetUnusedCapacityInBytes() {
+        return (getNetUnusedCapacityInKb() * 1024);
+    }
+
+    public void setNetUnusedCapacityInKb(String netUnusedCapacityInKb) {
+        this.netUnusedCapacityInKb = netUnusedCapacityInKb;
+    }
+
+    public Long getNetMaxCapacityInBytes() {
+        // total usable capacity = ("maxCapacityInKb" - "spareCapacityInKb") / 2
+        Long netMaxCapacityInKb = getMaxCapacityInKb() - getSpareCapacityInKb();
+        return ((netMaxCapacityInKb / 2) * 1024);
+    }
+
+    public Long getNetUsedCapacityInBytes() {
+        return (getNetMaxCapacityInBytes() - getNetUnusedCapacityInBytes());
+    }
+}
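
The usable-capacity arithmetic above can be made concrete with a small worked example (the capacity figures are placeholders); the division by two presumably accounts for the two data copies a PowerFlex storage pool keeps:

    // Worked example, illustrative only; the capacity figures are made up.
    StoragePoolStatistics stats = new StoragePoolStatistics();
    stats.setMaxCapacityInKb("1000000");   // 1,000,000 KB raw pool capacity
    stats.setSpareCapacityInKb("100000");  //   100,000 KB reserved as spare
    // (1,000,000 - 100,000) / 2 = 450,000 KB usable -> 450,000 * 1024 = 460,800,000 bytes
    long usableBytes = stats.getNetMaxCapacityInBytes();
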
diff --git a/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/VTree.java
similarity index 60%
copy from core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
copy to plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/VTree.java
index 779a0f4..824a4c5 100644
--- a/core/src/main/java/com/cloud/agent/api/routing/GetRouterMonitorResultsCommand.java
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/VTree.java
@@ -15,24 +15,25 @@
 // specific language governing permissions and limitations
 // under the License.
 
-package com.cloud.agent.api.routing;
+package org.apache.cloudstack.storage.datastore.api;
 
-public class GetRouterMonitorResultsCommand extends NetworkElementCommand {
-    private boolean performFreshChecks;
+public class VTree {
+    String storagePoolId;
+    VTreeMigrationInfo vtreeMigrationInfo;
 
-    protected GetRouterMonitorResultsCommand() {
+    public String getStoragePoolId() {
+        return storagePoolId;
     }
 
-    public GetRouterMonitorResultsCommand(boolean performFreshChecks) {
-        this.performFreshChecks = performFreshChecks;
+    public void setStoragePoolId(String storagePoolId) {
+        this.storagePoolId = storagePoolId;
     }
 
-    @Override
-    public boolean isQuery() {
-        return true;
+    public VTreeMigrationInfo getVTreeMigrationInfo() {
+        return vtreeMigrationInfo;
     }
 
-    public boolean shouldPerformFreshChecks() {
-        return performFreshChecks;
+    public void setVTreeMigrationInfo(VTreeMigrationInfo vtreeMigrationInfo) {
+        this.vtreeMigrationInfo = vtreeMigrationInfo;
     }
-}
\ No newline at end of file
+}
diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/VTreeMigrationInfo.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/VTreeMigrationInfo.java
new file mode 100644
index 0000000..f4e926b
--- /dev/null
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/VTreeMigrationInfo.java
@@ -0,0 +1,76 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.cloudstack.storage.datastore.api;
+
+import com.cloud.utils.EnumUtils;
+
+public class VTreeMigrationInfo {
+    public enum MigrationStatus {
+        NotInMigration,
+        MigrationNormal,
+        PendingRetry,
+        InternalPausing,
+        GracefullyPausing,
+        ForcefullyPausing,
+        Paused,
+        PendingMigration,
+        PendingRebalance,
+        None
+    }
+
+    String sourceStoragePoolId;
+    String destinationStoragePoolId;
+    MigrationStatus migrationStatus;
+    Long migrationQueuePosition;
+
+    public String getSourceStoragePoolId() {
+        return sourceStoragePoolId;
+    }
+
+    public void setSourceStoragePoolId(String sourceStoragePoolId) {
+        this.sourceStoragePoolId = sourceStoragePoolId;
+    }
+
+    public String getDestinationStoragePoolId() {
+        return destinationStoragePoolId;
+    }
+
+    public void setDestinationStoragePoolId(String destinationStoragePoolId) {
+        this.destinationStoragePoolId = destinationStoragePoolId;
+    }
+
+    public MigrationStatus getMigrationStatus() {
+        return migrationStatus;
+    }
+
+    public void setMigrationStatus(String migrationStatus) {
+        this.migrationStatus = EnumUtils.fromString(MigrationStatus.class, migrationStatus, MigrationStatus.None);
+    }
+
+    public void setMigrationStatus(MigrationStatus migrationStatus) {
+        this.migrationStatus = migrationStatus;
+    }
+
+    public Long getMigrationQueuePosition() {
+        return migrationQueuePosition;
+    }
+
+    public void setMigrationQueuePosition(Long migrationQueuePosition) {
+        this.migrationQueuePosition = migrationQueuePosition;
+    }
+}
diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/Volume.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/Volume.java
new file mode 100644
index 0000000..4517a12
--- /dev/null
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/Volume.java
@@ -0,0 +1,152 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.cloudstack.storage.datastore.api;
+
+import java.util.Arrays;
+import java.util.List;
+
+public class Volume {
+    public enum VolumeType {
+        ThickProvisioned,
+        ThinProvisioned,
+        Snapshot
+    }
+    String id;
+    String name;
+    String ancestorVolumeId;
+    String consistencyGroupId;
+    Long creationTime;
+    Long sizeInKb;
+    String sizeInGB;
+    String storagePoolId;
+    VolumeType volumeType;
+    String volumeSizeInGb;
+    String vtreeId;
+    SdcMappingInfo[] mappedSdcInfo;
+
+    public String getId() {
+        return id;
+    }
+
+    public void setId(String id) {
+        this.id = id;
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    public String getAncestorVolumeId() {
+        return ancestorVolumeId;
+    }
+
+    public void setAncestorVolumeId(String ancestorVolumeId) {
+        this.ancestorVolumeId = ancestorVolumeId;
+    }
+
+    public String getConsistencyGroupId() {
+        return consistencyGroupId;
+    }
+
+    public void setConsistencyGroupId(String consistencyGroupId) {
+        this.consistencyGroupId = consistencyGroupId;
+    }
+
+    public Long getCreationTime() {
+        return creationTime;
+    }
+
+    public void setCreationTime(Long creationTime) {
+        this.creationTime = creationTime;
+    }
+
+    public Long getSizeInKb() {
+        return sizeInKb;
+    }
+
+    public void setSizeInKb(Long sizeInKb) {
+        this.sizeInKb = sizeInKb;
+    }
+
+    public String getSizeInGB() {
+        return sizeInGB;
+    }
+
+    public void setSizeInGB(Integer sizeInGB) {
+        this.sizeInGB = sizeInGB.toString();
+    }
+
+    public void setVolumeSizeInGb(String volumeSizeInGb) {
+        this.volumeSizeInGb = volumeSizeInGb;
+    }
+
+    public String getStoragePoolId() {
+        return storagePoolId;
+    }
+
+    public void setStoragePoolId(String storagePoolId) {
+        this.storagePoolId = storagePoolId;
+    }
+
+    public String getVolumeSizeInGb() {
+        return volumeSizeInGb;
+    }
+
+    public void setVolumeSizeInGb(Integer volumeSizeInGb) {
+        this.volumeSizeInGb = volumeSizeInGb.toString();
+    }
+
+    public VolumeType getVolumeType() {
+        return volumeType;
+    }
+
+    public void setVolumeType(String volumeType) {
+        this.volumeType = Enum.valueOf(VolumeType.class, volumeType);
+    }
+
+    public void setVolumeType(VolumeType volumeType) {
+        this.volumeType = volumeType;
+    }
+
+    public String getVtreeId() {
+        return vtreeId;
+    }
+
+    public void setVtreeId(String vtreeId) {
+        this.vtreeId = vtreeId;
+    }
+
+    public List<SdcMappingInfo> getMappedSdcList() {
+        if (mappedSdcInfo != null) {
+            return Arrays.asList(mappedSdcInfo);
+        }
+        return null;
+    }
+
+    public SdcMappingInfo[] getMappedSdcInfo() {
+        return mappedSdcInfo;
+    }
+
+    public void setMappedSdcInfo(SdcMappingInfo[] mappedSdcInfo) {
+        this.mappedSdcInfo = mappedSdcInfo;
+    }
+}
diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/VolumeStatistics.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/VolumeStatistics.java
new file mode 100644
index 0000000..6f48e17
--- /dev/null
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/api/VolumeStatistics.java
@@ -0,0 +1,53 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.cloudstack.storage.datastore.api;
+
+public class VolumeStatistics {
+    Long allocatedSizeInKb; // virtual size
+    Long netProvisionedAddressesInKb; // physical size
+
+    public Long getAllocatedSizeInKb() {
+        if (allocatedSizeInKb == null) {
+            return Long.valueOf(0);
+        }
+        return allocatedSizeInKb;
+    }
+
+    public Long getAllocatedSizeInBytes() {
+        return (getAllocatedSizeInKb() * 1024);
+    }
+
+    public void setAllocatedSizeInKb(Long allocatedSizeInKb) {
+        this.allocatedSizeInKb = allocatedSizeInKb;
+    }
+
+    public Long getNetProvisionedAddressesInKb() {
+        if (netProvisionedAddressesInKb == null) {
+            return Long.valueOf(0);
+        }
+        return netProvisionedAddressesInKb;
+    }
+
+    public Long getNetProvisionedAddressesInBytes() {
+        return (getNetProvisionedAddressesInKb() * 1024);
+    }
+
+    public void setNetProvisionedAddressesInKb(Long netProvisionedAddressesInKb) {
+        this.netProvisionedAddressesInKb = netProvisionedAddressesInKb;
+    }
+}
diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/client/ScaleIOGatewayClient.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/client/ScaleIOGatewayClient.java
new file mode 100644
index 0000000..f6b10f8
--- /dev/null
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/client/ScaleIOGatewayClient.java
@@ -0,0 +1,88 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.cloudstack.storage.datastore.client;
+
+import java.net.URISyntaxException;
+import java.security.KeyManagementException;
+import java.security.NoSuchAlgorithmException;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.cloudstack.storage.datastore.api.Sdc;
+import org.apache.cloudstack.storage.datastore.api.SnapshotGroup;
+import org.apache.cloudstack.storage.datastore.api.StoragePool;
+import org.apache.cloudstack.storage.datastore.api.StoragePoolStatistics;
+import org.apache.cloudstack.storage.datastore.api.Volume;
+import org.apache.cloudstack.storage.datastore.api.VolumeStatistics;
+
+import com.cloud.storage.Storage;
+
+public interface ScaleIOGatewayClient {
+    String GATEWAY_API_ENDPOINT = "powerflex.gw.url";
+    String GATEWAY_API_USERNAME = "powerflex.gw.username";
+    String GATEWAY_API_PASSWORD = "powerflex.gw.password";
+    String STORAGE_POOL_NAME = "powerflex.storagepool.name";
+    String STORAGE_POOL_SYSTEM_ID = "powerflex.storagepool.system.id";
+
+    static ScaleIOGatewayClient getClient(final String url, final String username, final String password,
+                                          final boolean validateCertificate, final int timeout) throws NoSuchAlgorithmException, KeyManagementException, URISyntaxException {
+        return new ScaleIOGatewayClientImpl(url, username, password, validateCertificate, timeout);
+    }
+
+    // Volume APIs
+    Volume createVolume(final String name, final String storagePoolId,
+                        final Integer sizeInGb, final Storage.ProvisioningType volumeType);
+    List<Volume> listVolumes();
+    List<Volume> listSnapshotVolumes();
+    Volume getVolume(String volumeId);
+    Volume getVolumeByName(String name);
+    boolean renameVolume(final String volumeId, final String newName);
+    Volume resizeVolume(final String volumeId, final Integer sizeInGb);
+    Volume cloneVolume(final String sourceVolumeId, final String destVolumeName);
+    boolean deleteVolume(final String volumeId);
+    boolean migrateVolume(final String srcVolumeId, final String destPoolId, final int timeoutInSecs);
+
+    boolean mapVolumeToSdc(final String volumeId, final String sdcId);
+    boolean mapVolumeToSdcWithLimits(final String volumeId, final String sdcId, final Long iopsLimit, final Long bandwidthLimitInKbps);
+    boolean unmapVolumeFromSdc(final String volumeId, final String sdcId);
+    boolean unmapVolumeFromAllSdcs(final String volumeId);
+    boolean isVolumeMappedToSdc(final String volumeId, final String sdcId);
+
+    // Snapshot APIs
+    SnapshotGroup takeSnapshot(final Map<String, String> srcVolumeDestSnapshotMap);
+    boolean revertSnapshot(final String systemId, final Map<String, String> srcSnapshotDestVolumeMap);
+    int deleteSnapshotGroup(final String systemId, final String snapshotGroupId);
+    Volume takeSnapshot(final String volumeId, final String snapshotVolumeName);
+    boolean revertSnapshot(final String sourceSnapshotVolumeId, final String destVolumeId);
+
+    // Storage Pool APIs
+    List<StoragePool> listStoragePools();
+    StoragePool getStoragePool(String poolId);
+    StoragePoolStatistics getStoragePoolStatistics(String poolId);
+    VolumeStatistics getVolumeStatistics(String volumeId);
+    String getSystemId(String protectionDomainId);
+    List<Volume> listVolumesInStoragePool(String poolId);
+
+    // SDC APIs
+    List<Sdc> listSdcs();
+    Sdc getSdc(String sdcId);
+    Sdc getSdcByIp(String ipAddress);
+    Sdc getConnectedSdcByIp(String ipAddress);
+    List<String> listConnectedSdcIps();
+    boolean isSdcConnected(String ipAddress);
+}
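
The interface is a thin wrapper over the PowerFlex Gateway REST API. A minimal usage sketch follows (not part of the patch; the endpoint, credentials, pool id and SDC id are placeholders, and the checked exceptions thrown by getClient() are elided):

    // Illustrative sketch only; all literals below are placeholders.
    ScaleIOGatewayClient client = ScaleIOGatewayClient.getClient(
            "https://pflex-gateway.example:443/api", "admin", "secret",
            false /* validateCertificate */, 60 /* timeout, in seconds */);
    Volume volume = client.createVolume("cs-vol-0001", "<storage-pool-id>", 8,
            Storage.ProvisioningType.THIN);            // size is rounded up to the 8 GB granularity
    client.mapVolumeToSdc(volume.getId(), "<sdc-id>"); // expose the volume to the host's SDC
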
diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/client/ScaleIOGatewayClientConnectionPool.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/client/ScaleIOGatewayClientConnectionPool.java
new file mode 100644
index 0000000..2daf8e4
--- /dev/null
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/client/ScaleIOGatewayClientConnectionPool.java
@@ -0,0 +1,90 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.cloudstack.storage.datastore.client;
+
+import java.net.URISyntaxException;
+import java.security.KeyManagementException;
+import java.security.NoSuchAlgorithmException;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
+import org.apache.log4j.Logger;
+
+import com.cloud.storage.StorageManager;
+import com.cloud.utils.crypt.DBEncryptionUtil;
+import com.google.common.base.Preconditions;
+
+public class ScaleIOGatewayClientConnectionPool {
+    private static final Logger LOGGER = Logger.getLogger(ScaleIOGatewayClientConnectionPool.class);
+
+    private ConcurrentHashMap<Long, ScaleIOGatewayClient> gatewayClients;
+
+    private static final ScaleIOGatewayClientConnectionPool instance;
+
+    static {
+        instance = new ScaleIOGatewayClientConnectionPool();
+    }
+
+    public static ScaleIOGatewayClientConnectionPool getInstance() {
+        return instance;
+    }
+
+    private ScaleIOGatewayClientConnectionPool() {
+        gatewayClients = new ConcurrentHashMap<Long, ScaleIOGatewayClient>();
+    }
+
+    public ScaleIOGatewayClient getClient(Long storagePoolId, StoragePoolDetailsDao storagePoolDetailsDao)
+            throws NoSuchAlgorithmException, KeyManagementException, URISyntaxException {
+        Preconditions.checkArgument(storagePoolId != null && storagePoolId > 0, "Invalid storage pool id");
+
+        ScaleIOGatewayClient client = null;
+        synchronized (gatewayClients) {
+            client = gatewayClients.get(storagePoolId);
+            if (client == null) {
+                final String url = storagePoolDetailsDao.findDetail(storagePoolId, ScaleIOGatewayClient.GATEWAY_API_ENDPOINT).getValue();
+                final String encryptedUsername = storagePoolDetailsDao.findDetail(storagePoolId, ScaleIOGatewayClient.GATEWAY_API_USERNAME).getValue();
+                final String username = DBEncryptionUtil.decrypt(encryptedUsername);
+                final String encryptedPassword = storagePoolDetailsDao.findDetail(storagePoolId, ScaleIOGatewayClient.GATEWAY_API_PASSWORD).getValue();
+                final String password = DBEncryptionUtil.decrypt(encryptedPassword);
+                final int clientTimeout = StorageManager.STORAGE_POOL_CLIENT_TIMEOUT.valueIn(storagePoolId);
+
+                client = new ScaleIOGatewayClientImpl(url, username, password, false, clientTimeout);
+                gatewayClients.put(storagePoolId, client);
+                LOGGER.debug("Added gateway client for the storage pool: " + storagePoolId);
+            }
+        }
+
+        return client;
+    }
+
+    public boolean removeClient(Long storagePoolId) {
+        Preconditions.checkArgument(storagePoolId != null && storagePoolId > 0, "Invalid storage pool id");
+
+        ScaleIOGatewayClient client = null;
+        synchronized (gatewayClients) {
+            client = gatewayClients.remove(storagePoolId);
+        }
+
+        if (client != null) {
+            LOGGER.debug("Removed gateway client for the storage pool: " + storagePoolId);
+            return true;
+        }
+
+        return false;
+    }
+}
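
Callers presumably go through this pool rather than constructing clients directly, so one authenticated gateway session is cached and reused per CloudStack storage pool. A hedged sketch of typical driver-side usage (storagePoolId, storagePoolDetailsDao and powerFlexPoolId are assumed to come from the caller's context):

    // Illustrative sketch only; storagePoolDetailsDao is assumed to be injected into the
    // caller, storagePoolId is the CloudStack pool id, powerFlexPoolId the backend pool id.
    ScaleIOGatewayClient client = ScaleIOGatewayClientConnectionPool.getInstance()
            .getClient(storagePoolId, storagePoolDetailsDao);
    StoragePoolStatistics poolStats = client.getStoragePoolStatistics(powerFlexPoolId);
    long usableBytes = poolStats.getNetMaxCapacityInBytes();
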
diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/client/ScaleIOGatewayClientImpl.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/client/ScaleIOGatewayClientImpl.java
new file mode 100644
index 0000000..5e8568d
--- /dev/null
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/client/ScaleIOGatewayClientImpl.java
@@ -0,0 +1,1255 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.cloudstack.storage.datastore.client;
+
+import java.io.IOException;
+import java.net.SocketTimeoutException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.security.KeyManagementException;
+import java.security.NoSuchAlgorithmException;
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Base64;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import javax.net.ssl.SSLContext;
+import javax.net.ssl.X509TrustManager;
+
+import org.apache.cloudstack.api.ApiErrorCode;
+import org.apache.cloudstack.api.ServerApiException;
+import org.apache.cloudstack.storage.datastore.api.ProtectionDomain;
+import org.apache.cloudstack.storage.datastore.api.Sdc;
+import org.apache.cloudstack.storage.datastore.api.SdcMappingInfo;
+import org.apache.cloudstack.storage.datastore.api.SnapshotDef;
+import org.apache.cloudstack.storage.datastore.api.SnapshotDefs;
+import org.apache.cloudstack.storage.datastore.api.SnapshotGroup;
+import org.apache.cloudstack.storage.datastore.api.StoragePool;
+import org.apache.cloudstack.storage.datastore.api.StoragePoolStatistics;
+import org.apache.cloudstack.storage.datastore.api.VTree;
+import org.apache.cloudstack.storage.datastore.api.VTreeMigrationInfo;
+import org.apache.cloudstack.storage.datastore.api.Volume;
+import org.apache.cloudstack.storage.datastore.api.VolumeStatistics;
+import org.apache.cloudstack.utils.security.SSLUtils;
+import org.apache.http.HttpHeaders;
+import org.apache.http.HttpResponse;
+import org.apache.http.HttpStatus;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.config.RequestConfig;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.conn.ConnectTimeoutException;
+import org.apache.http.conn.ssl.NoopHostnameVerifier;
+import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
+import org.apache.http.entity.StringEntity;
+import org.apache.http.impl.client.HttpClientBuilder;
+import org.apache.http.util.EntityUtils;
+import org.apache.log4j.Logger;
+
+import com.cloud.storage.Storage;
+import com.cloud.utils.exception.CloudRuntimeException;
+import com.cloud.utils.nio.TrustAllManager;
+import com.fasterxml.jackson.annotation.JsonInclude;
+import com.fasterxml.jackson.databind.DeserializationFeature;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.json.JsonMapper;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+
+public class ScaleIOGatewayClientImpl implements ScaleIOGatewayClient {
+    private static final Logger LOG = Logger.getLogger(ScaleIOGatewayClientImpl.class);
+
+    private final URI apiURI;
+    private final HttpClient httpClient;
+    private static final String SESSION_HEADER = "X-RestSvcSessionId";
+    private static final String MDM_CONNECTED_STATE = "Connected";
+
+    private String host;
+    private String username;
+    private String password;
+    private String sessionKey = null;
+
+    // The session token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes
+    // Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
+    private static final long MAX_VALID_SESSION_TIME_IN_MILLISECS = 8 * 60 * 60 * 1000; // 8 hrs
+    private static final long MAX_IDLE_TIME_IN_MILLISECS = 10 * 60 * 1000; // 10 mins
+    private static final long BUFFER_TIME_IN_MILLISECS = 30 * 1000; // keep 30 secs buffer before the expiration (to avoid any last-minute operations)
+
+    private long createTime = 0;
+    private long lastUsedTime = 0;
+
+    public ScaleIOGatewayClientImpl(final String url, final String username, final String password,
+                                    final boolean validateCertificate, final int timeout)
+            throws NoSuchAlgorithmException, KeyManagementException, URISyntaxException {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(url), "Gateway client url cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(username) && !Strings.isNullOrEmpty(password), "Gateway client credentials cannot be null");
+
+        final RequestConfig config = RequestConfig.custom()
+                .setConnectTimeout(timeout * 1000)
+                .setConnectionRequestTimeout(timeout * 1000)
+                .setSocketTimeout(timeout * 1000)
+                .build();
+
+        if (!validateCertificate) {
+            final SSLContext sslcontext = SSLUtils.getSSLContext();
+            sslcontext.init(null, new X509TrustManager[]{new TrustAllManager()}, new SecureRandom());
+            final SSLConnectionSocketFactory factory = new SSLConnectionSocketFactory(sslcontext, NoopHostnameVerifier.INSTANCE);
+            this.httpClient = HttpClientBuilder.create()
+                    .setDefaultRequestConfig(config)
+                    .setSSLSocketFactory(factory)
+                    .build();
+        } else {
+            this.httpClient = HttpClientBuilder.create()
+                    .setDefaultRequestConfig(config)
+                    .build();
+        }
+
+        this.apiURI = new URI(url);
+        this.host = apiURI.getHost();
+        this.username = username;
+        this.password = password;
+
+        authenticate();
+    }
+
+    /////////////////////////////////////////////////////////////
+    //////////////// Private Helper Methods /////////////////////
+    /////////////////////////////////////////////////////////////
+
+    private void authenticate() {
+        final HttpGet request = new HttpGet(apiURI.toString() + "/login");
+        request.setHeader(HttpHeaders.AUTHORIZATION, "Basic " + Base64.getEncoder().encodeToString((username + ":" + password).getBytes()));
+        try {
+            final HttpResponse response = httpClient.execute(request);
+            checkAuthFailure(response);
+            this.sessionKey = EntityUtils.toString(response.getEntity());
+            if (Strings.isNullOrEmpty(this.sessionKey)) {
+                throw new CloudRuntimeException("Failed to create a valid PowerFlex Gateway Session to perform API requests");
+            }
+            this.sessionKey = this.sessionKey.replace("\"", "");
+            if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {
+                throw new CloudRuntimeException("PowerFlex Gateway login failed, please check the provided settings");
+            }
+        } catch (final IOException e) {
+            throw new CloudRuntimeException("Failed to authenticate PowerFlex API Gateway due to: " + e.getMessage());
+        }
+        long now = System.currentTimeMillis();
+        createTime = lastUsedTime = now;
+    }
+
+    private synchronized void renewClientSessionOnExpiry() {
+        if (isSessionExpired()) {
+            LOG.debug("Session expired, renewing");
+            authenticate();
+        }
+    }
+
+    private boolean isSessionExpired() {
+        long now = System.currentTimeMillis() + BUFFER_TIME_IN_MILLISECS;
+        if ((now - createTime) > MAX_VALID_SESSION_TIME_IN_MILLISECS ||
+                (now - lastUsedTime) > MAX_IDLE_TIME_IN_MILLISECS) {
+            return true;
+        }
+        return false;
+    }
+
+    private void checkAuthFailure(final HttpResponse response) {
+        if (response != null && response.getStatusLine().getStatusCode() == HttpStatus.SC_UNAUTHORIZED) {
+            throw new ServerApiException(ApiErrorCode.UNAUTHORIZED, "PowerFlex Gateway API call unauthorized, please check the provided settings");
+        }
+    }
+
+    private void checkResponseOK(final HttpResponse response) {
+        if (response.getStatusLine().getStatusCode() == HttpStatus.SC_NO_CONTENT) {
+            LOG.debug("Requested resource does not exist");
+            return;
+        }
+        if (response.getStatusLine().getStatusCode() == HttpStatus.SC_BAD_REQUEST) {
+            throw new ServerApiException(ApiErrorCode.MALFORMED_PARAMETER_ERROR, "Bad API request");
+        }
+        if (!(response.getStatusLine().getStatusCode() == HttpStatus.SC_OK ||
+                response.getStatusLine().getStatusCode() == HttpStatus.SC_ACCEPTED)) {
+            String responseBody = response.toString();
+            try {
+                responseBody = EntityUtils.toString(response.getEntity());
+            } catch (IOException ignored) {
+            }
+            LOG.debug("HTTP request failed, status code is " + response.getStatusLine().getStatusCode() + ", response is: " + responseBody);
+            throw new ServerApiException(ApiErrorCode.INTERNAL_ERROR, "API failed due to: " + responseBody);
+        }
+    }
+
+    private void checkResponseTimeOut(final Exception e) {
+        if (e instanceof ConnectTimeoutException || e instanceof SocketTimeoutException) {
+            throw new ServerApiException(ApiErrorCode.RESOURCE_UNAVAILABLE_ERROR, "API operation timed out, please try again.");
+        }
+    }
+
+    private HttpResponse get(final String path) throws IOException {
+        renewClientSessionOnExpiry();
+        final HttpGet request = new HttpGet(apiURI.toString() + path);
+        request.setHeader(HttpHeaders.AUTHORIZATION, "Basic " + Base64.getEncoder().encodeToString((this.username + ":" + this.sessionKey).getBytes()));
+        final HttpResponse response = httpClient.execute(request);
+        synchronized (this) {
+            lastUsedTime = System.currentTimeMillis();
+        }
+        String responseStatus = (response != null) ? (response.getStatusLine().getStatusCode() + " " + response.getStatusLine().getReasonPhrase()) : "nil";
+        LOG.debug("GET request path: " + path + ", response: " + responseStatus);
+        checkAuthFailure(response);
+        return response;
+    }
+
+    private HttpResponse post(final String path, final Object obj) throws IOException {
+        renewClientSessionOnExpiry();
+        final HttpPost request = new HttpPost(apiURI.toString() + path);
+        request.setHeader(HttpHeaders.AUTHORIZATION, "Basic " + Base64.getEncoder().encodeToString((this.username + ":" + this.sessionKey).getBytes()));
+        request.setHeader("Content-type", "application/json");
+        if (obj != null) {
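+            // Send a pre-built JSON string as-is; otherwise serialize the POJO, omitting null fields.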
+            if (obj instanceof String) {
+                request.setEntity(new StringEntity((String) obj));
+            } else {
+                JsonMapper mapper = new JsonMapper();
+                mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
+                String json = mapper.writer().writeValueAsString(obj);
+                request.setEntity(new StringEntity(json));
+            }
+        }
+        final HttpResponse response = httpClient.execute(request);
+        synchronized (this) {
+            lastUsedTime = System.currentTimeMillis();
+        }
+        String responseStatus = (response != null) ? (response.getStatusLine().getStatusCode() + " " + response.getStatusLine().getReasonPhrase()) : "nil";
+        LOG.debug("POST request path: " + path + ", response: " + responseStatus);
+        checkAuthFailure(response);
+        return response;
+    }
+
+    //////////////////////////////////////////////////
+    //////////////// Volume APIs /////////////////////
+    //////////////////////////////////////////////////
+
+    @Override
+    public Volume createVolume(final String name, final String storagePoolId,
+                               final Integer sizeInGb, final Storage.ProvisioningType volumeType) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(name), "Volume name cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(storagePoolId), "Storage pool id cannot be null");
+        Preconditions.checkArgument(sizeInGb != null && sizeInGb > 0, "Size(GB) must be greater than 0");
+
+        HttpResponse response = null;
+        try {
+            Volume newVolume = new Volume();
+            newVolume.setName(name);
+            newVolume.setStoragePoolId(storagePoolId);
+            newVolume.setVolumeSizeInGb(sizeInGb);
+            if (Storage.ProvisioningType.FAT.equals(volumeType)) {
+                newVolume.setVolumeType(Volume.VolumeType.ThickProvisioned);
+            } else {
+                newVolume.setVolumeType(Volume.VolumeType.ThinProvisioned);
+            }
+            // The basic allocation granularity is 8GB. The volume size will be rounded up.
+            response = post("/types/Volume/instances", newVolume);
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            Volume newVolumeObject = mapper.readValue(response.getEntity().getContent(), Volume.class);
+            return getVolume(newVolumeObject.getId());
+        } catch (final IOException e) {
+            LOG.error("Failed to create PowerFlex volume due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public List<Volume> listVolumes() {
+        HttpResponse response = null;
+        try {
+            response = get("/types/Volume/instances");
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            Volume[] volumes = mapper.readValue(response.getEntity().getContent(), Volume[].class);
+            return Arrays.asList(volumes);
+        } catch (final IOException e) {
+            LOG.error("Failed to list PowerFlex volumes due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return new ArrayList<>();
+    }
+
+    @Override
+    public List<Volume> listSnapshotVolumes() {
+        List<Volume> volumes = listVolumes();
+        List<Volume> snapshotVolumes = new ArrayList<>();
+        if (volumes != null && !volumes.isEmpty()) {
+            for (Volume volume : volumes) {
+                if (volume != null && volume.getVolumeType() == Volume.VolumeType.Snapshot) {
+                    snapshotVolumes.add(volume);
+                }
+            }
+        }
+
+        return snapshotVolumes;
+    }
+
+    @Override
+    public Volume getVolume(String volumeId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            response = get("/instances/Volume::" + volumeId);
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            return mapper.readValue(response.getEntity().getContent(), Volume.class);
+        } catch (final IOException e) {
+            LOG.error("Failed to get volume due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public Volume getVolumeByName(String name) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(name), "Volume name cannot be null");
+
+        HttpResponse response = null;
+        try {
+            Volume searchVolume = new Volume();
+            searchVolume.setName(name);
+            response = post("/types/Volume/instances/action/queryIdByKey", searchVolume);
+            checkResponseOK(response);
+            String volumeId = EntityUtils.toString(response.getEntity());
+            if (!Strings.isNullOrEmpty(volumeId)) {
+                return getVolume(volumeId.replace("\"", ""));
+            }
+        } catch (final IOException e) {
+            LOG.error("Failed to get volume due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public boolean renameVolume(final String volumeId, final String newName) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(newName), "New name for volume cannot be null");
+
+        HttpResponse response = null;
+        try {
+            response = post(
+                    "/instances/Volume::" + volumeId + "/action/setVolumeName",
+                    String.format("{\"newName\":\"%s\"}", newName));
+            checkResponseOK(response);
+            return true;
+        } catch (final IOException e) {
+            LOG.error("Failed to rename PowerFlex volume due to: ", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return false;
+    }
+
+    @Override
+    public Volume resizeVolume(final String volumeId, final Integer sizeInGB) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+        Preconditions.checkArgument(sizeInGB != null && (sizeInGB > 0 && sizeInGB % 8 == 0),
+                "Size(GB) must be greater than 0 and in granularity of 8");
+
+        HttpResponse response = null;
+        try {
+            // Volume capacity can only be increased. sizeInGB must be a positive number in granularity of 8 GB.
+            response = post(
+                    "/instances/Volume::" + volumeId + "/action/setVolumeSize",
+                    String.format("{\"sizeInGB\":\"%s\"}", sizeInGB.toString()));
+            checkResponseOK(response);
+            return getVolume(volumeId);
+        } catch (final IOException e) {
+            LOG.error("Failed to resize PowerFlex volume due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public Volume cloneVolume(final String sourceVolumeId, final String destVolumeName) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(sourceVolumeId), "Source volume id cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(destVolumeName), "Dest volume name cannot be null");
+
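+        // A clone is created by taking a snapshot of the source volume, named as the destination volume.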
+        Map<String, String> snapshotMap = new HashMap<>();
+        snapshotMap.put(sourceVolumeId, destVolumeName);
+        takeSnapshot(snapshotMap);
+        return getVolumeByName(destVolumeName);
+    }
+
+    @Override
+    public SnapshotGroup takeSnapshot(final Map<String, String> srcVolumeDestSnapshotMap) {
+        Preconditions.checkArgument(srcVolumeDestSnapshotMap != null && !srcVolumeDestSnapshotMap.isEmpty(), "srcVolumeDestSnapshotMap cannot be null");
+
+        HttpResponse response = null;
+        try {
+            final List<SnapshotDef> defs = new ArrayList<>();
+            for (final String volumeId : srcVolumeDestSnapshotMap.keySet()) {
+                final SnapshotDef snapshotDef = new SnapshotDef();
+                snapshotDef.setVolumeId(volumeId);
+                String snapshotName = srcVolumeDestSnapshotMap.get(volumeId);
+                if (!Strings.isNullOrEmpty(snapshotName)) {
+                    snapshotDef.setSnapshotName(srcVolumeDestSnapshotMap.get(volumeId));
+                }
+                defs.add(snapshotDef);
+            }
+            final SnapshotDefs snapshotDefs = new SnapshotDefs();
+            snapshotDefs.setSnapshotDefs(defs.toArray(new SnapshotDef[0]));
+            response = post("/instances/System/action/snapshotVolumes", snapshotDefs);
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            return mapper.readValue(response.getEntity().getContent(), SnapshotGroup.class);
+        } catch (final IOException e) {
+            LOG.error("Failed to take snapshot due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public boolean revertSnapshot(final String systemId, final Map<String, String> srcSnapshotDestVolumeMap) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(systemId), "System id cannot be null");
+        Preconditions.checkArgument(srcSnapshotDestVolumeMap != null && !srcSnapshotDestVolumeMap.isEmpty(), "srcSnapshotDestVolumeMap cannot be null");
+
+        // Take a group snapshot first (needs additional storage pool capacity until the revert completes) to keep the last state of all volumes,
+        // and delete the group snapshot after the revert operation.
+        // If reverting the snapshot fails for any volume, use the group snapshot to restore the already-reverted volumes to their last state.
+        Map<String, String> srcVolumeDestSnapshotMap = new HashMap<>();
+        List<String> originalVolumeIds = new ArrayList<>();
+        for (final String sourceSnapshotVolumeId : srcSnapshotDestVolumeMap.keySet()) {
+            String destVolumeId = srcSnapshotDestVolumeMap.get(sourceSnapshotVolumeId);
+            srcVolumeDestSnapshotMap.put(destVolumeId, "");
+            originalVolumeIds.add(destVolumeId);
+        }
+        SnapshotGroup snapshotGroup = takeSnapshot(srcVolumeDestSnapshotMap);
+        if (snapshotGroup == null) {
+            throw new CloudRuntimeException("Failed to snapshot the last vm state");
+        }
+
+        boolean revertSnapshotResult = true;
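+        // revertStatusIndex counts the volumes reverted so far; on failure, only those are rolled back from the group snapshot.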
+        int revertStatusIndex = -1;
+
+        try {
+            // non-atomic operation, try revert each volume
+            for (final String sourceSnapshotVolumeId : srcSnapshotDestVolumeMap.keySet()) {
+                String destVolumeId = srcSnapshotDestVolumeMap.get(sourceSnapshotVolumeId);
+                boolean revertStatus = revertSnapshot(sourceSnapshotVolumeId, destVolumeId);
+                if (!revertStatus) {
+                    revertSnapshotResult = false;
+                    LOG.warn("Failed to revert snapshot for volume id: " + sourceSnapshotVolumeId);
+                    throw new CloudRuntimeException("Failed to revert snapshot for volume id: " + sourceSnapshotVolumeId);
+                } else {
+                    revertStatusIndex++;
+                }
+            }
+        } catch (final Exception e) {
+            LOG.error("Failed to revert vm snapshot due to: " + e.getMessage(), e);
+            throw new CloudRuntimeException("Failed to revert vm snapshot due to: " + e.getMessage());
+        } finally {
+            if (!revertSnapshotResult) {
+                //revert to volume with last state and delete the snapshot group, for already reverted volumes
+                List<String> volumesWithLastState = snapshotGroup.getVolumeIds();
+                for (int index = revertStatusIndex; index >= 0; index--) {
+                    // Note: a failure of this compensating revert is not handled again, to avoid recursing
+                    revertSnapshot(volumesWithLastState.get(index), originalVolumeIds.get(index));
+                }
+            }
+            deleteSnapshotGroup(systemId, snapshotGroup.getSnapshotGroupId());
+        }
+
+        return revertSnapshotResult;
+    }
+
+    @Override
+    public int deleteSnapshotGroup(final String systemId, final String snapshotGroupId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(systemId), "System id cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(snapshotGroupId), "Snapshot group id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            response = post(
+                    "/instances/System::" + systemId + "/action/removeConsistencyGroupSnapshots",
+                    String.format("{\"snapGroupId\":\"%s\"}", snapshotGroupId));
+            checkResponseOK(response);
+            JsonNode node = new ObjectMapper().readTree(response.getEntity().getContent());
+            JsonNode noOfVolumesNode = node.get("numberOfVolumes");
+            return noOfVolumesNode.asInt();
+        } catch (final IOException e) {
+            LOG.error("Failed to delete PowerFlex snapshot group due to: " + e.getMessage(), e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return -1;
+    }
+
+    @Override
+    public Volume takeSnapshot(final String volumeId, final String snapshotVolumeName) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(snapshotVolumeName), "Snapshot name cannot be null");
+
+        HttpResponse response = null;
+        try {
+            final SnapshotDef[] snapshotDef = new SnapshotDef[1];
+            snapshotDef[0] = new SnapshotDef();
+            snapshotDef[0].setVolumeId(volumeId);
+            snapshotDef[0].setSnapshotName(snapshotVolumeName);
+            final SnapshotDefs snapshotDefs = new SnapshotDefs();
+            snapshotDefs.setSnapshotDefs(snapshotDef);
+
+            response = post("/instances/System/action/snapshotVolumes", snapshotDefs);
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            SnapshotGroup snapshotGroup = mapper.readValue(response.getEntity().getContent(), SnapshotGroup.class);
+            if (snapshotGroup != null) {
+                List<String> volumeIds = snapshotGroup.getVolumeIds();
+                if (volumeIds != null && !volumeIds.isEmpty()) {
+                    return getVolume(volumeIds.get(0));
+                }
+            }
+        } catch (final IOException e) {
+            LOG.error("Failed to take snapshot due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public boolean revertSnapshot(final String sourceSnapshotVolumeId, final String destVolumeId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(sourceSnapshotVolumeId), "Source snapshot volume id cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(destVolumeId), "Destination volume id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            Volume sourceSnapshotVolume = getVolume(sourceSnapshotVolumeId);
+            if (sourceSnapshotVolume == null) {
+                throw new CloudRuntimeException("Source snapshot volume: " + sourceSnapshotVolumeId + " doesn't exist");
+            }
+
+            Volume destVolume = getVolume(destVolumeId);
+            if (destVolume == null) {
+                throw new CloudRuntimeException("Destination volume: " + destVolumeId + " doesn't exist");
+            }
+
+            if (!sourceSnapshotVolume.getVtreeId().equals(destVolume.getVtreeId())) {
+                throw new CloudRuntimeException("Unable to revert, the source snapshot volume and the destination volume don't belong to the same volume tree");
+            }
+
+            response = post(
+                    "/instances/Volume::" + destVolumeId + "/action/overwriteVolumeContent",
+                    String.format("{\"srcVolumeId\":\"%s\",\"allowOnExtManagedVol\":\"TRUE\"}", sourceSnapshotVolumeId));
+            checkResponseOK(response);
+            return true;
+        } catch (final IOException e) {
+            LOG.error("Failed to revert PowerFlex volume snapshot due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return false;
+    }
+
+    @Override
+    public boolean mapVolumeToSdc(final String volumeId, final String sdcId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(sdcId), "Sdc Id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            if (isVolumeMappedToSdc(volumeId, sdcId)) {
+                return true;
+            }
+
+            response = post(
+                    "/instances/Volume::" + volumeId + "/action/addMappedSdc",
+                    String.format("{\"sdcId\":\"%s\",\"allowMultipleMappings\":\"TRUE\"}", sdcId));
+            checkResponseOK(response);
+            return true;
+        } catch (final IOException e) {
+            LOG.error("Failed to map PowerFlex volume due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return false;
+    }
+
+    @Override
+    public boolean mapVolumeToSdcWithLimits(final String volumeId, final String sdcId, final Long iopsLimit, final Long bandwidthLimitInKbps) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(sdcId), "Sdc Id cannot be null");
+        Preconditions.checkArgument(iopsLimit != null && (iopsLimit == 0 || iopsLimit > 10),
+                "IOPS limit must be 0 (unlimited) or greater than 10");
+        Preconditions.checkArgument(bandwidthLimitInKbps != null && (bandwidthLimitInKbps == 0 || (bandwidthLimitInKbps > 0 && bandwidthLimitInKbps % 1024 == 0)),
+                "Bandwidth limit(Kbps) must be 0 (unlimited) or in granularity of 1024");
+
+        HttpResponse response = null;
+        try {
+            if (mapVolumeToSdc(volumeId, sdcId)) {
+                long iopsLimitVal = 0;
+                if (iopsLimit != null && iopsLimit.longValue() > 0) {
+                    iopsLimitVal = iopsLimit.longValue();
+                }
+
+                long bandwidthLimitInKbpsVal = 0;
+                if (bandwidthLimitInKbps != null && bandwidthLimitInKbps.longValue() > 0) {
+                    bandwidthLimitInKbpsVal = bandwidthLimitInKbps.longValue();
+                }
+
+                response = post(
+                        "/instances/Volume::" + volumeId + "/action/setMappedSdcLimits",
+                        String.format("{\"sdcId\":\"%s\",\"bandwidthLimitInKbps\":\"%d\",\"iopsLimit\":\"%d\"}", sdcId, bandwidthLimitInKbpsVal, iopsLimitVal));
+                checkResponseOK(response);
+                return true;
+            }
+        } catch (final IOException e) {
+            LOG.error("Failed to map PowerFlex volume with limits due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return false;
+    }
+
+    @Override
+    public boolean unmapVolumeFromSdc(final String volumeId, final String sdcId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(sdcId), "Sdc Id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            if (isVolumeMappedToSdc(volumeId, sdcId)) {
+                response = post(
+                        "/instances/Volume::" + volumeId + "/action/removeMappedSdc",
+                        String.format("{\"sdcId\":\"%s\",\"skipApplianceValidation\":\"TRUE\"}", sdcId));
+                checkResponseOK(response);
+                return true;
+            }
+        } catch (final IOException e) {
+            LOG.error("Failed to unmap PowerFlex volume due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return false;
+    }
+
+    @Override
+    public boolean unmapVolumeFromAllSdcs(final String volumeId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            Volume volume = getVolume(volumeId);
+            if (volume == null) {
+                return false;
+            }
+
+            List<SdcMappingInfo> mappedSdcList = volume.getMappedSdcList();
+            if (mappedSdcList == null || mappedSdcList.isEmpty()) {
+                return true;
+            }
+
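+            // Passing an empty "allSdcs" parameter removes the volume mapping from every SDC in a single call.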
+            response = post(
+                    "/instances/Volume::" + volumeId + "/action/removeMappedSdc",
+                    "{\"allSdcs\": \"\"}");
+            checkResponseOK(response);
+            return true;
+        } catch (final IOException e) {
+            LOG.error("Failed to unmap PowerFlex volume from all SDCs due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return false;
+    }
+
+    @Override
+    public boolean isVolumeMappedToSdc(final String volumeId, final String sdcId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(sdcId), "Sdc Id cannot be null");
+
+        Volume volume = getVolume(volumeId);
+        if (volume == null) {
+            return false;
+        }
+
+        List<SdcMappingInfo> mappedSdcList = volume.getMappedSdcList();
+        if (mappedSdcList != null && !mappedSdcList.isEmpty()) {
+            for (SdcMappingInfo mappedSdc : mappedSdcList) {
+                if (sdcId.equalsIgnoreCase(mappedSdc.getSdcId())) {
+                    return true;
+                }
+            }
+        }
+
+        return false;
+    }
+
+    @Override
+    public boolean deleteVolume(final String volumeId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+
+        HttpResponse response = null;
+        try {
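+            // Best-effort unmap from all SDCs first, then remove only this volume ("ONLY_ME" remove mode).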
+            try {
+                unmapVolumeFromAllSdcs(volumeId);
+            } catch (Exception ignored) {}
+            response = post(
+                    "/instances/Volume::" + volumeId + "/action/removeVolume",
+                    "{\"removeMode\":\"ONLY_ME\"}");
+            checkResponseOK(response);
+            return true;
+        } catch (final IOException e) {
+            LOG.error("Failed to delete PowerFlex volume due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return false;
+    }
+
+    @Override
+    public boolean migrateVolume(final String srcVolumeId, final String destPoolId, final int timeoutInSecs) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(srcVolumeId), "src volume id cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(destPoolId), "dest pool id cannot be null");
+        Preconditions.checkArgument(timeoutInSecs > 0, "timeout must be greater than 0");
+
+        try {
+            Volume volume = getVolume(srcVolumeId);
+            if (volume == null || Strings.isNullOrEmpty(volume.getVtreeId())) {
+                LOG.warn("Couldn't find the volume or its volume-tree, cannot migrate the volume " + srcVolumeId);
+                return false;
+            }
+
+            String srcPoolId = volume.getStoragePoolId();
+            LOG.debug("Migrating the volume: " + srcVolumeId + " on the src pool: " + srcPoolId + " to the dest pool: " + destPoolId +
+                    " in the same PowerFlex cluster");
+
+            HttpResponse response = null;
+            try {
+                response = post(
+                        "/instances/Volume::" + srcVolumeId + "/action/migrateVTree",
+                        String.format("{\"destSPId\":\"%s\"}", destPoolId));
+                checkResponseOK(response);
+            } catch (final IOException e) {
+                LOG.error("Unable to migrate PowerFlex volume due to: ", e);
+                checkResponseTimeOut(e);
+                throw e;
+            } finally {
+                if (response != null) {
+                    EntityUtils.consumeQuietly(response.getEntity());
+                }
+            }
+
+            LOG.debug("Wait until the migration is complete for the volume: " + srcVolumeId);
+            long migrationStartTime = System.currentTimeMillis();
+            boolean status = waitForVolumeMigrationToComplete(volume.getVtreeId(), timeoutInSecs);
+
+            // Check volume storage pool and migration status
+            // volume, v-tree, snapshot ids remains same after the migration
+            volume = getVolume(srcVolumeId);
+            if (volume == null || volume.getStoragePoolId() == null) {
+                LOG.warn("Couldn't get the details of the volume: " + srcVolumeId + " after migration");
+                return status;
+            } else {
+                String volumeOnPoolId = volume.getStoragePoolId();
+                // confirm whether the volume is on the dest storage pool or not
+                if (status && destPoolId.equalsIgnoreCase(volumeOnPoolId)) {
+                    LOG.debug("Migration success for the volume: " + srcVolumeId);
+                    return true;
+                } else {
+                    try {
+                        // Check and pause any migration activity on the volume
+                        status = false;
+                        VTreeMigrationInfo.MigrationStatus migrationStatus = getVolumeTreeMigrationStatus(volume.getVtreeId());
+                        if (migrationStatus != null && migrationStatus != VTreeMigrationInfo.MigrationStatus.NotInMigration) {
+                            long timeElapsedInSecs = (System.currentTimeMillis() - migrationStartTime) / 1000;
+                            int timeRemainingInSecs = (int) (timeoutInSecs - timeElapsedInSecs);
+                            if (timeRemainingInSecs > (timeoutInSecs / 2)) {
+                                // Try to pause gracefully (and let the migration continue) if at least half of the time is remaining
+                                pauseVolumeMigration(srcVolumeId, false);
+                                status = waitForVolumeMigrationToComplete(volume.getVtreeId(), timeRemainingInSecs);
+                            }
+                        }
+
+                        if (!status) {
+                            rollbackVolumeMigration(srcVolumeId);
+                        }
+
+                        return status;
+                    } catch (Exception ex) {
+                        LOG.warn("Exception on pause/rollback migration of the volume: " + srcVolumeId + " - " + ex.getLocalizedMessage());
+                    }
+                }
+            }
+        } catch (final Exception e) {
+            LOG.error("Failed to migrate PowerFlex volume due to: " + e.getMessage(), e);
+            throw new CloudRuntimeException("Failed to migrate PowerFlex volume due to: " + e.getMessage());
+        }
+
+        LOG.debug("Migration failed for the volume: " + srcVolumeId);
+        return false;
+    }
+
+    private boolean waitForVolumeMigrationToComplete(final String volumeTreeId, int waitTimeoutInSecs) {
+        LOG.debug("Waiting for the migration to complete for the volume-tree " + volumeTreeId);
+        if (Strings.isNullOrEmpty(volumeTreeId)) {
+            LOG.warn("Invalid volume-tree id, unable to check the migration status of the volume-tree " + volumeTreeId);
+            return false;
+        }
+
+        int delayTimeInSecs = 3;
+        while (waitTimeoutInSecs > 0) {
+            try {
+                // Wait a few seconds between checks (to reduce the number of client API calls) and return once the migration completes
+                Thread.sleep(delayTimeInSecs * 1000);
+
+                VTreeMigrationInfo.MigrationStatus migrationStatus = getVolumeTreeMigrationStatus(volumeTreeId);
+                if (migrationStatus != null && migrationStatus == VTreeMigrationInfo.MigrationStatus.NotInMigration) {
+                    LOG.debug("Migration completed for the volume-tree " + volumeTreeId);
+                    return true;
+                }
+            } catch (Exception ex) {
+                LOG.warn("Exception while checking for migration status of the volume-tree: " + volumeTreeId + " - " + ex.getLocalizedMessage());
+                // don't do anything
+            } finally {
+                waitTimeoutInSecs = waitTimeoutInSecs - delayTimeInSecs;
+            }
+        }
+
+        LOG.debug("Unable to complete the migration for the volume-tree " + volumeTreeId);
+        return false;
+    }
+
+    private VTreeMigrationInfo.MigrationStatus getVolumeTreeMigrationStatus(final String volumeTreeId) {
+        if (Strings.isNullOrEmpty(volumeTreeId)) {
+            LOG.warn("Invalid volume-tree id, unable to get the migration status of the volume-tree " + volumeTreeId);
+            return null;
+        }
+
+        HttpResponse response = null;
+        try {
+            response = get("/instances/VTree::" + volumeTreeId);
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            VTree volumeTree = mapper.readValue(response.getEntity().getContent(), VTree.class);
+            if (volumeTree != null && volumeTree.getVTreeMigrationInfo() != null) {
+                return volumeTree.getVTreeMigrationInfo().getMigrationStatus();
+            }
+        } catch (final IOException e) {
+            LOG.error("Failed to get the migration status of the PowerFlex volume-tree due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    private boolean rollbackVolumeMigration(final String srcVolumeId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(srcVolumeId), "src volume id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            Volume volume = getVolume(srcVolumeId);
+            VTreeMigrationInfo.MigrationStatus migrationStatus = getVolumeTreeMigrationStatus(volume.getVtreeId());
+            if (migrationStatus != null && migrationStatus == VTreeMigrationInfo.MigrationStatus.NotInMigration) {
+                LOG.debug("Volume: " + srcVolumeId + " is not migrating, no need to rollback");
+                return true;
+            }
+
+            pauseVolumeMigration(srcVolumeId, true); // Pause forcefully
+            // Wait a few seconds for the volume migration to change to the Paused state
+            boolean paused = false;
+            int retryCount = 3;
+            while (retryCount > 0) {
+                try {
+                    Thread.sleep(3000); // Try after few secs
+                    migrationStatus = getVolumeTreeMigrationStatus(volume.getVtreeId()); // Get updated migration status
+                    if (migrationStatus != null && migrationStatus == VTreeMigrationInfo.MigrationStatus.Paused) {
+                        LOG.debug("Migration for the volume: " + srcVolumeId + " paused");
+                        paused = true;
+                        break;
+                    }
+                } catch (Exception ex) {
+                    LOG.warn("Exception while checking for migration pause status of the volume: " + srcVolumeId + " - " + ex.getLocalizedMessage());
+                    // don't do anything
+                } finally {
+                    retryCount--;
+                }
+            }
+
+            if (paused) {
+                // Rollback migration to the src pool (should be quick)
+                response = post(
+                        "/instances/Volume::" + srcVolumeId + "/action/migrateVTree",
+                        String.format("{\"destSPId\":\"%s\"}", volume.getStoragePoolId()));
+                checkResponseOK(response);
+                return true;
+            } else {
+                LOG.warn("Migration for the volume: " + srcVolumeId + " didn't pause, couldn't rollback");
+            }
+        } catch (final IOException e) {
+            LOG.error("Failed to rollback volume migration due to: ", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return false;
+    }
+
+    private boolean pauseVolumeMigration(final String volumeId, final boolean forced) {
+        if (Strings.isNullOrEmpty(volumeId)) {
+            LOG.warn("Invalid volume id, unable to pause migration of the volume " + volumeId);
+            return false;
+        }
+
+        HttpResponse response = null;
+        try {
+            // When paused gracefully, all data currently being moved is allowed to complete the migration.
+            // When paused forcefully, migration of unfinished data is aborted and data is left at the source, if possible.
+            // Pausing forcefully carries a potential risk to data.
+            response = post(
+                    "/instances/Volume::" + volumeId + "/action/pauseVTreeMigration",
+                    String.format("{\"pauseType\":\"%s\"}", forced ? "Forcefully" : "Gracefully"));
+            checkResponseOK(response);
+            return true;
+        } catch (final IOException e) {
+            LOG.error("Failed to pause migration of the volume due to: ", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return false;
+    }
+
+    ///////////////////////////////////////////////////////
+    //////////////// StoragePool APIs /////////////////////
+    ///////////////////////////////////////////////////////
+
+    @Override
+    public List<StoragePool> listStoragePools() {
+        HttpResponse response = null;
+        try {
+            response = get("/types/StoragePool/instances");
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            StoragePool[] pools = mapper.readValue(response.getEntity().getContent(), StoragePool[].class);
+            return Arrays.asList(pools);
+        } catch (final IOException e) {
+            LOG.error("Failed to list PowerFlex storage pools due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return new ArrayList<>();
+    }
+
+    @Override
+    public StoragePool getStoragePool(String poolId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(poolId), "Storage pool id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            response = get("/instances/StoragePool::" + poolId);
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            return mapper.readValue(response.getEntity().getContent(), StoragePool.class);
+        } catch (final IOException e) {
+            LOG.error("Failed to get storage pool due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public StoragePoolStatistics getStoragePoolStatistics(String poolId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(poolId), "Storage pool id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            response = get("/instances/StoragePool::" + poolId + "/relationships/Statistics");
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            return mapper.readValue(response.getEntity().getContent(), StoragePoolStatistics.class);
+        } catch (final IOException e) {
+            LOG.error("Failed to get storage pool statistics due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public VolumeStatistics getVolumeStatistics(String volumeId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumeId), "Volume id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            Volume volume = getVolume(volumeId);
+            if (volume != null) {
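+                // Statistics are queried per volume-tree (VTree); the allocated size is then filled in from the volume itself.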
+                String volumeTreeId = volume.getVtreeId();
+                if (!Strings.isNullOrEmpty(volumeTreeId)) {
+                    response = get("/instances/VTree::" + volumeTreeId + "/relationships/Statistics");
+                    checkResponseOK(response);
+                    ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+                    VolumeStatistics volumeStatistics = mapper.readValue(response.getEntity().getContent(), VolumeStatistics.class);
+                    if (volumeStatistics != null) {
+                        volumeStatistics.setAllocatedSizeInKb(volume.getSizeInKb());
+                        return volumeStatistics;
+                    }
+                }
+            }
+        } catch (final IOException e) {
+            LOG.error("Failed to get volume stats due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+
+        return null;
+    }
+
+    @Override
+    public String getSystemId(String protectionDomainId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(protectionDomainId), "Protection domain id cannot be null");
+
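+        // The system id is read from the given protection domain.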
+        HttpResponse response = null;
+        try {
+            response = get("/instances/ProtectionDomain::" + protectionDomainId);
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            ProtectionDomain protectionDomain = mapper.readValue(response.getEntity().getContent(), ProtectionDomain.class);
+            if (protectionDomain != null) {
+                return protectionDomain.getSystemId();
+            }
+        } catch (final IOException e) {
+            LOG.error("Failed to get protection domain details due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public List<Volume> listVolumesInStoragePool(String poolId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(poolId), "Storage pool id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            response = get("/instances/StoragePool::" + poolId + "/relationships/Volume");
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            Volume[] volumes = mapper.readValue(response.getEntity().getContent(), Volume[].class);
+            return Arrays.asList(volumes);
+        } catch (final IOException e) {
+            LOG.error("Failed to list volumes in storage pool due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return new ArrayList<>();
+    }
+
+    ///////////////////////////////////////////////
+    //////////////// SDC APIs /////////////////////
+    ///////////////////////////////////////////////
+
+    @Override
+    public List<Sdc> listSdcs() {
+        HttpResponse response = null;
+        try {
+            response = get("/types/Sdc/instances");
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            Sdc[] sdcs = mapper.readValue(response.getEntity().getContent(), Sdc[].class);
+            return Arrays.asList(sdcs);
+        } catch (final IOException e) {
+            LOG.error("Failed to list SDCs due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return new ArrayList<>();
+    }
+
+    @Override
+    public Sdc getSdc(String sdcId) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(sdcId), "Sdc id cannot be null");
+
+        HttpResponse response = null;
+        try {
+            response = get("/instances/Sdc::" + sdcId);
+            checkResponseOK(response);
+            ObjectMapper mapper = new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
+            return mapper.readValue(response.getEntity().getContent(), Sdc.class);
+        } catch (final IOException e) {
+            LOG.error("Failed to get SDC due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public Sdc getSdcByIp(String ipAddress) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(ipAddress), "IP address cannot be null");
+
+        HttpResponse response = null;
+        try {
+            response = post("/types/Sdc/instances/action/queryIdByKey", String.format("{\"ip\":\"%s\"}", ipAddress));
+            checkResponseOK(response);
+            String sdcId = EntityUtils.toString(response.getEntity());
+            if (!Strings.isNullOrEmpty(sdcId)) {
+                return getSdc(sdcId.replace("\"", ""));
+            }
+        } catch (final IOException e) {
+            LOG.error("Failed to get SDC by IP due to:", e);
+            checkResponseTimeOut(e);
+        } finally {
+            if (response != null) {
+                EntityUtils.consumeQuietly(response.getEntity());
+            }
+        }
+        return null;
+    }
+
+    @Override
+    public Sdc getConnectedSdcByIp(String ipAddress) {
+        Sdc sdc = getSdcByIp(ipAddress);
+        if (sdc != null && MDM_CONNECTED_STATE.equalsIgnoreCase(sdc.getMdmConnectionState())) {
+            return sdc;
+        }
+
+        return null;
+    }
+
+    @Override
+    public List<String> listConnectedSdcIps() {
+        List<String> sdcIps = new ArrayList<>();
+        List<Sdc> sdcs = listSdcs();
+        if (sdcs != null) {
+            for (Sdc sdc : sdcs) {
+                if (MDM_CONNECTED_STATE.equalsIgnoreCase(sdc.getMdmConnectionState())) {
+                    sdcIps.add(sdc.getSdcIp());
+                }
+            }
+        }
+
+        return sdcIps;
+    }
+
+    @Override
+    public boolean isSdcConnected(String ipAddress) {
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(ipAddress), "IP address cannot be null");
+
+        List<Sdc> sdcs = listSdcs();
+        if (sdcs != null) {
+            for (Sdc sdc : sdcs) {
+                if (ipAddress.equalsIgnoreCase(sdc.getSdcIp()) && MDM_CONNECTED_STATE.equalsIgnoreCase(sdc.getMdmConnectionState())) {
+                    return true;
+                }
+            }
+        }
+
+        return false;
+    }
+}
diff --git a/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/driver/ScaleIOPrimaryDataStoreDriver.java b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/driver/ScaleIOPrimaryDataStoreDriver.java
new file mode 100644
index 0000000..f840bae
--- /dev/null
+++ b/plugins/storage/volume/scaleio/src/main/java/org/apache/cloudstack/storage/datastore/driver/ScaleIOPrimaryDataStoreDriver.java
@@ -0,0 +1,950 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+package org.apache.cloudstack.storage.datastore.driver;
+
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import javax.inject.Inject;
+
+import org.apache.cloudstack.engine.subsystem.api.storage.ChapInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.CopyCommandResult;
+import org.apache.cloudstack.engine.subsystem.api.storage.CreateCmdResult;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataObject;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
+import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreCapabilities;
+import org.apache.cloudstack.engine.subsystem.api.storage.EndPoint;
+import org.apache.cloudstack.engine.subsystem.api.storage.EndPointSelector;
+import org.apache.cloudstack.engine.subsystem.api.storage.ObjectInDataStoreStateMachine;
+import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreDriver;
+import org.apache.cloudstack.engine.subsystem.api.storage.SnapshotInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.TemplateInfo;
+import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
+import org.apache.cloudstack.framework.async.AsyncCompletionCallback;
+import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
+import org.apache.cloudstack.storage.RemoteHostEndPoint;
+import org.apache.cloudstack.storage.command.CommandResult;
+import org.apache.cloudstack.storage.command.CopyCommand;
+import org.apache.cloudstack.storage.command.CreateObjectAnswer;
+import org.apache.cloudstack.storage.datastore.api.Sdc;
+import org.apache.cloudstack.storage.datastore.api.StoragePoolStatistics;
+import org.apache.cloudstack.storage.datastore.api.VolumeStatistics;
+import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClient;
+import org.apache.cloudstack.storage.datastore.client.ScaleIOGatewayClientConnectionPool;
+import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreDao;
+import org.apache.cloudstack.storage.datastore.db.SnapshotDataStoreVO;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailVO;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolDetailsDao;
+import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
+import org.apache.cloudstack.storage.datastore.util.ScaleIOUtil;
+import org.apache.cloudstack.storage.to.SnapshotObjectTO;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.log4j.Logger;
+
+import com.cloud.agent.api.Answer;
+import com.cloud.agent.api.to.DataObjectType;
+import com.cloud.agent.api.to.DataStoreTO;
+import com.cloud.agent.api.to.DataTO;
+import com.cloud.alert.AlertManager;
+import com.cloud.configuration.Config;
+import com.cloud.host.Host;
+import com.cloud.server.ManagementServerImpl;
+import com.cloud.storage.DataStoreRole;
+import com.cloud.storage.ResizeVolumePayload;
+import com.cloud.storage.SnapshotVO;
+import com.cloud.storage.Storage;
+import com.cloud.storage.StorageManager;
+import com.cloud.storage.StoragePool;
+import com.cloud.storage.VMTemplateStoragePoolVO;
+import com.cloud.storage.Volume;
+import com.cloud.storage.VolumeDetailVO;
+import com.cloud.storage.VolumeVO;
+import com.cloud.storage.dao.SnapshotDao;
+import com.cloud.storage.dao.VMTemplatePoolDao;
+import com.cloud.storage.dao.VolumeDao;
+import com.cloud.storage.dao.VolumeDetailsDao;
+import com.cloud.utils.NumbersUtil;
+import com.cloud.utils.Pair;
+import com.cloud.utils.exception.CloudRuntimeException;
+import com.cloud.vm.VirtualMachineManager;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+
+public class ScaleIOPrimaryDataStoreDriver implements PrimaryDataStoreDriver {
+    private static final Logger LOGGER = Logger.getLogger(ScaleIOPrimaryDataStoreDriver.class);
+
+    @Inject
+    EndPointSelector selector;
+    @Inject
+    private PrimaryDataStoreDao storagePoolDao;
+    @Inject
+    private StoragePoolDetailsDao storagePoolDetailsDao;
+    @Inject
+    private VolumeDao volumeDao;
+    @Inject
+    private VolumeDetailsDao volumeDetailsDao;
+    @Inject
+    private VMTemplatePoolDao vmTemplatePoolDao;
+    @Inject
+    private SnapshotDataStoreDao snapshotDataStoreDao;
+    @Inject
+    protected SnapshotDao snapshotDao;
+    @Inject
+    private AlertManager alertMgr;
+    @Inject
+    private ConfigurationDao configDao;
+
+    public ScaleIOPrimaryDataStoreDriver() {
+
+    }
+
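+    // Gateway clients are pooled and reused per storage pool (see ScaleIOGatewayClientConnectionPool).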
+    private ScaleIOGatewayClient getScaleIOClient(final Long storagePoolId) throws Exception {
+        return ScaleIOGatewayClientConnectionPool.getInstance().getClient(storagePoolId, storagePoolDetailsDao);
+    }
+
+    @Override
+    public boolean grantAccess(DataObject dataObject, Host host, DataStore dataStore) {
+        try {
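+            // Access is granted by mapping the volume/template/snapshot to the SDC running on the host;
+            // data volumes additionally get IOPS and bandwidth limits applied from the volume details.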
+            if (DataObjectType.VOLUME.equals(dataObject.getType())) {
+                final VolumeVO volume = volumeDao.findById(dataObject.getId());
+                LOGGER.debug("Granting access for PowerFlex volume: " + volume.getPath());
+
+                Long bandwidthLimitInKbps = Long.valueOf(0); // Unlimited
+                // Check the bandwidth limit parameter (in Mbps) in the volume details and convert it to Kbps
+                final VolumeDetailVO bandwidthVolumeDetail = volumeDetailsDao.findDetail(volume.getId(), Volume.BANDWIDTH_LIMIT_IN_MBPS);
+                if (bandwidthVolumeDetail != null && bandwidthVolumeDetail.getValue() != null) {
+                    bandwidthLimitInKbps = Long.parseLong(bandwidthVolumeDetail.getValue()) * 1024;
+                }
+
+                Long iopsLimit = Long.valueOf(0); // Unlimited
+                // Check IOPS Limit parameter in volume details, else try MaxIOPS
+                final VolumeDetailVO iopsVolumeDetail = volumeDetailsDao.findDetail(volume.getId(), Volume.IOPS_LIMIT);
+                if (iopsVolumeDetail != null && iopsVolumeDetail.getValue() != null) {
+                    iopsLimit = Long.parseLong(iopsVolumeDetail.getValue());
+                } else if (volume.getMaxIops() != null) {
+                    iopsLimit = volume.getMaxIops();
+                }
+                if (iopsLimit > 0 && iopsLimit < ScaleIOUtil.MINIMUM_ALLOWED_IOPS_LIMIT) {
+                    iopsLimit = ScaleIOUtil.MINIMUM_ALLOWED_IOPS_LIMIT;
+                }
+
+                final ScaleIOGatewayClient client = getScaleIOClient(dataStore.getId());
+                final Sdc sdc = client.getConnectedSdcByIp(host.getPrivateIpAddress());
+                if (sdc == null) {
+                    alertHostSdcDisconnection(host);
+                    throw new CloudRuntimeException("Unable to grant access to volume: " + dataObject.getId() + ", no Sdc connected with host ip: " + host.getPrivateIpAddress());
+                }
+
+                return client.mapVolumeToSdcWithLimits(ScaleIOUtil.getVolumePath(volume.getPath()), sdc.getId(), iopsLimit, bandwidthLimitInKbps);
+            } else if (DataObjectType.TEMPLATE.equals(dataObject.getType())) {
+                final VMTemplateStoragePoolVO templatePoolRef = vmTemplatePoolDao.findByPoolTemplate(dataStore.getId(), dataObject.getId(), null);
+                LOGGER.debug("Granting access for PowerFlex template volume: " + templatePoolRef.getInstallPath());
+
+                final ScaleIOGatewayClient client = getScaleIOClient(dataStore.getId());
+                final Sdc sdc = client.getConnectedSdcByIp(host.getPrivateIpAddress());
+                if (sdc == null) {
+                    alertHostSdcDisconnection(host);
+                    throw new CloudRuntimeException("Unable to grant access to template: " + dataObject.getId() + ", no Sdc connected with host ip: " + host.getPrivateIpAddress());
+                }
+
+                return client.mapVolumeToSdc(ScaleIOUtil.getVolumePath(templatePoolRef.getInstallPath()), sdc.getId());
+            } else if (DataObjectType.SNAPSHOT.equals(dataObject.getType())) {
+                SnapshotInfo snapshot = (SnapshotInfo) dataObject;
+                LOGGER.debug("Granting access for PowerFlex volume snapshot: " + snapshot.getPath());
+
+                final ScaleIOGatewayClient client = getScaleIOClient(dataStore.getId());
+                final Sdc sdc = client.getConnectedSdcByIp(host.getPrivateIpAddress());
+                if (sdc == null) {
+                    alertHostSdcDisconnection(host);
+                    throw new CloudRuntimeException("Unable to grant access to snapshot: " + dataObject.getId() + ", no Sdc connected with host ip: " + host.getPrivateIpAddress());
+                }
+
+                return client.mapVolumeToSdc(ScaleIOUtil.getVolumePath(snapshot.getPath()), sdc.getId());
+            }
+
+            return false;
+        } catch (Exception e) {
+            throw new CloudRuntimeException(e);
+        }
+    }
+
+    @Override
+    public void revokeAccess(DataObject dataObject, Host host, DataStore dataStore) {
+        try {
+            if (DataObjectType.VOLUME.equals(dataObject.getType())) {
+                final VolumeVO volume = volumeDao.findById(dataObject.getId());
+                LOGGER.debug("Revoking access for PowerFlex volume: " + volume.getPath());
+
+                final ScaleIOGatewayClient client = getScaleIOClient(dataStore.getId());
+                final Sdc sdc = client.getConnectedSdcByIp(host.getPrivateIpAddress());
+                if (sdc == null) {
+                    throw new CloudRuntimeException("Unable to revoke access for volume: " + dataObject.getId() + ", no Sdc connected with host ip: " + host.getPrivateIpAddress());
+                }
+
+                client.unmapVolumeFromSdc(ScaleIOUtil.getVolumePath(volume.getPath()), sdc.getId());
+            } else if (DataObjectType.TEMPLATE.equals(dataObject.getType())) {
+                final VMTemplateStoragePoolVO templatePoolRef = vmTemplatePoolDao.findByPoolTemplate(dataStore.getId(), dataObject.getId(), null);
+                LOGGER.debug("Revoking access for PowerFlex template volume: " + templatePoolRef.getInstallPath());
+
+                final ScaleIOGatewayClient client = getScaleIOClient(dataStore.getId());
+                final Sdc sdc = client.getConnectedSdcByIp(host.getPrivateIpAddress());
+                if (sdc == null) {
+                    throw new CloudRuntimeException("Unable to revoke access for template: " + dataObject.getId() + ", no Sdc connected with host ip: " + host.getPrivateIpAddress());
+                }
+
+                client.unmapVolumeFromSdc(ScaleIOUtil.getVolumePath(templatePoolRef.getInstallPath()), sdc.getId());
+            } else if (DataObjectType.SNAPSHOT.equals(dataObject.getType())) {
+                SnapshotInfo snapshot = (SnapshotInfo) dataObject;
+                LOGGER.debug("Revoking access for PowerFlex volume snapshot: " + snapshot.getPath());
+
+                final ScaleIOGatewayClient client = getScaleIOClient(dataStore.getId());
+                final Sdc sdc = client.getConnectedSdcByIp(host.getPrivateIpAddress());
+                if (sdc == null) {
+                    throw new CloudRuntimeException("Unable to revoke access for snapshot: " + dataObject.getId() + ", no Sdc connected with host ip: " + host.getPrivateIpAddress());
+                }
+
+                client.unmapVolumeFromSdc(ScaleIOUtil.getVolumePath(snapshot.getPath()), sdc.getId());
+            }
+        } catch (Exception e) {
+            LOGGER.warn("Failed to revoke access due to: " + e.getMessage(), e);
+        }
+    }
+
+    @Override
+    public long getUsedBytes(StoragePool storagePool) {
+        long usedSpaceBytes = 0;
+        // Volumes
+        List<VolumeVO> volumes = volumeDao.findByPoolIdAndState(storagePool.getId(), Volume.State.Ready);
+        if (volumes != null) {
+            for (VolumeVO volume : volumes) {
+                usedSpaceBytes += volume.getSize();
+
+                long vmSnapshotChainSize = volume.getVmSnapshotChainSize() == null ? 0 : volume.getVmSnapshotChainSize();
+                usedSpaceBytes += vmSnapshotChainSize;
+            }
+        }
+
+        //Snapshots
+        List<SnapshotDataStoreVO> snapshots = snapshotDataStoreDao.listByStoreIdAndState(storagePool.getId(), ObjectInDataStoreStateMachine.State.Ready);
+        if (snapshots != null) {
+            for (SnapshotDataStoreVO snapshot : snapshots) {
+                usedSpaceBytes += snapshot.getSize();
+            }
+        }
+
+        // Templates
+        List<VMTemplateStoragePoolVO> templates = vmTemplatePoolDao.listByPoolIdAndState(storagePool.getId(), ObjectInDataStoreStateMachine.State.Ready);
+        if (templates != null) {
+            for (VMTemplateStoragePoolVO template : templates) {
+                usedSpaceBytes += template.getTemplateSize();
+            }
+        }
+
+        LOGGER.debug("Used/Allocated storage space (in bytes): " + String.valueOf(usedSpaceBytes));
+
+        return usedSpaceBytes;
+    }
+
+    @Override
+    public long getUsedIops(StoragePool storagePool) {
+        return 0;
+    }
+
+    @Override
+    public long getDataObjectSizeIncludingHypervisorSnapshotReserve(DataObject dataObject, StoragePool pool) {
+        return ((dataObject != null && dataObject.getSize() != null) ? dataObject.getSize() : 0);
+    }
+
+    @Override
+    public long getBytesRequiredForTemplate(TemplateInfo templateInfo, StoragePool storagePool) {
+        if (templateInfo == null || storagePool == null) {
+            return 0;
+        }
+
+        VMTemplateStoragePoolVO templatePoolRef = vmTemplatePoolDao.findByPoolTemplate(storagePool.getId(), templateInfo.getId(), null);
+        if (templatePoolRef != null) {
+            // Template already exists on this primary storage; no additional space is required
+            return 0;
+        }
+
+        return getDataObjectSizeIncludingHypervisorSnapshotReserve(templateInfo, storagePool);
+    }
+
+    @Override
+    public Map<String, String> getCapabilities() {
+        Map<String, String> mapCapabilities = new HashMap<>();
+        mapCapabilities.put(DataStoreCapabilities.CAN_CREATE_VOLUME_FROM_VOLUME.toString(), Boolean.TRUE.toString());
+        mapCapabilities.put(DataStoreCapabilities.CAN_CREATE_VOLUME_FROM_SNAPSHOT.toString(), Boolean.TRUE.toString());
+        mapCapabilities.put(DataStoreCapabilities.CAN_REVERT_VOLUME_TO_SNAPSHOT.toString(), Boolean.TRUE.toString());
+        mapCapabilities.put(DataStoreCapabilities.STORAGE_SYSTEM_SNAPSHOT.toString(), Boolean.TRUE.toString());
+        return mapCapabilities;
+    }
+
+    @Override
+    public ChapInfo getChapInfo(DataObject dataObject) {
+        return null;
+    }
+
+    @Override
+    public DataTO getTO(DataObject data) {
+        return null;
+    }
+
+    @Override
+    public DataStoreTO getStoreTO(DataStore store) {
+        return null;
+    }
+
+    @Override
+    public void takeSnapshot(SnapshotInfo snapshotInfo, AsyncCompletionCallback<CreateCmdResult> callback) {
+        LOGGER.debug("Taking PowerFlex volume snapshot");
+
+        Preconditions.checkArgument(snapshotInfo != null, "snapshotInfo cannot be null");
+
+        VolumeInfo volumeInfo = snapshotInfo.getBaseVolume();
+        Preconditions.checkArgument(volumeInfo != null, "volumeInfo cannot be null");
+
+        VolumeVO volumeVO = volumeDao.findById(volumeInfo.getId());
+
+        long storagePoolId = volumeVO.getPoolId();
+        Preconditions.checkArgument(storagePoolId > 0, "storagePoolId should be > 0");
+
+        StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId);
+        Preconditions.checkArgument(storagePool != null && storagePool.getHostAddress() != null, "storagePool and host address should not be null");
+
+        CreateCmdResult result;
+
+        try {
+            SnapshotObjectTO snapshotObjectTo = (SnapshotObjectTO)snapshotInfo.getTO();
+
+            final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
+            final String scaleIOVolumeId = ScaleIOUtil.getVolumePath(volumeVO.getPath());
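+            // The volume path stored in the DB encodes both the PowerFlex volume id and name (see ScaleIOUtil.updatedPathWithVolumeName); getVolumePath() extracts the id expected by the gateway API.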
+            String snapshotName = String.format("%s-%s-%s-%s", ScaleIOUtil.SNAPSHOT_PREFIX, snapshotInfo.getId(),
+                    storagePool.getUuid().split("-")[0].substring(4), ManagementServerImpl.customCsIdentifier.value());
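+            // Illustrative resulting name: SNAPSHOT_PREFIX + "-" + <snapshot id> + "-" + <last 4 chars of the pool UUID's first segment> + "-" + <custom CloudStack identifier>.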
+
+            org.apache.cloudstack.storage.datastore.api.Volume scaleIOVolume = client.takeSnapshot(scaleIOVolumeId, snapshotName);
+
+            if (scaleIOVolume == null) {
+                throw new CloudRuntimeException("Failed to take snapshot on PowerFlex cluster");
+            }
+
+            snapshotObjectTo.setPath(ScaleIOUtil.updatedPathWithVolumeName(scaleIOVolume.getId(), snapshotName));
+            CreateObjectAnswer createObjectAnswer = new CreateObjectAnswer(snapshotObjectTo);
+            result = new CreateCmdResult(null, createObjectAnswer);
+            result.setResult(null);
+        } catch (Exception e) {
+            String errMsg = "Unable to take PowerFlex volume snapshot for volume: " + volumeInfo.getId() + " due to " + e.getMessage();
+            LOGGER.warn(errMsg);
+            result = new CreateCmdResult(null, new CreateObjectAnswer(e.toString()));
+            result.setResult(e.toString());
+        }
+
+        callback.complete(result);
+    }
+
+    @Override
+    public void revertSnapshot(SnapshotInfo snapshot, SnapshotInfo snapshotOnPrimaryStore, AsyncCompletionCallback<CommandResult> callback) {
+        LOGGER.debug("Reverting to PowerFlex volume snapshot");
+
+        Preconditions.checkArgument(snapshot != null, "snapshotInfo cannot be null");
+
+        VolumeInfo volumeInfo = snapshot.getBaseVolume();
+        Preconditions.checkArgument(volumeInfo != null, "volumeInfo cannot be null");
+
+        VolumeVO volumeVO = volumeDao.findById(volumeInfo.getId());
+
+        try {
+            if (volumeVO == null || volumeVO.getRemoved() != null) {
+                String errMsg = "The volume that the snapshot belongs to no longer exists.";
+                CommandResult commandResult = new CommandResult();
+                commandResult.setResult(errMsg);
+                callback.complete(commandResult);
+                return;
+            }
+
+            long storagePoolId = volumeVO.getPoolId();
+            final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
+            String snapshotVolumeId = ScaleIOUtil.getVolumePath(snapshot.getPath());
+            final String destVolumeId = ScaleIOUtil.getVolumePath(volumeVO.getPath());
+            client.revertSnapshot(snapshotVolumeId, destVolumeId);
+
+            CommandResult commandResult = new CommandResult();
+            callback.complete(commandResult);
+        } catch (Exception ex) {
+            LOGGER.debug("Unable to revert to PowerFlex snapshot: " + snapshot.getId(), ex);
+            throw new CloudRuntimeException(ex.getMessage(), ex);
+        }
+    }
+
+    private String createVolume(VolumeInfo volumeInfo, long storagePoolId) {
+        LOGGER.debug("Creating PowerFlex volume");
+
+        StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId);
+
+        Preconditions.checkArgument(volumeInfo != null, "volumeInfo cannot be null");
+        Preconditions.checkArgument(storagePoolId > 0, "storagePoolId should be > 0");
+        Preconditions.checkArgument(storagePool != null && storagePool.getHostAddress() != null, "storagePool and host address should not be null");
+
+        try {
+            final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
+            final String scaleIOStoragePoolId = storagePool.getPath();
+            final Long sizeInBytes = volumeInfo.getSize();
+            final long sizeInGb = (long) Math.ceil(sizeInBytes / (1024.0 * 1024.0 * 1024.0));
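+            // Requested size is rounded up to whole GiB, e.g. (illustrative) 5 GiB + 1 byte becomes 6 GiB.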
+            final String scaleIOVolumeName = String.format("%s-%s-%s-%s", ScaleIOUtil.VOLUME_PREFIX, volumeInfo.getId(),
+                    storagePool.getUuid().split("-")[0].substring(4), ManagementServerImpl.customCsIdentifier.value());
+
+            org.apache.cloudstack.storage.datastore.api.Volume scaleIOVolume = client.createVolume(scaleIOVolumeName, scaleIOStoragePoolId, (int) sizeInGb, volumeInfo.getProvisioningType());
+
+            if (scaleIOVolume == null) {
+                throw new CloudRuntimeException("Failed to create volume on PowerFlex cluster");
+            }
+
+            VolumeVO volume = volumeDao.findById(volumeInfo.getId());
+            String volumePath = ScaleIOUtil.updatedPathWithVolumeName(scaleIOVolume.getId(), scaleIOVolumeName);
+            volume.set_iScsiName(volumePath);
+            volume.setPath(volumePath);
+            volume.setFolder(scaleIOVolume.getVtreeId());
+            volume.setSize(scaleIOVolume.getSizeInKb() * 1024);
+            volume.setPoolType(Storage.StoragePoolType.PowerFlex);
+            volume.setFormat(Storage.ImageFormat.RAW);
+            volume.setPoolId(storagePoolId);
+            volumeDao.update(volume.getId(), volume);
+
+            long capacityBytes = storagePool.getCapacityBytes();
+            long usedBytes = storagePool.getUsedBytes();
+            usedBytes += volume.getSize();
+            storagePool.setUsedBytes(usedBytes > capacityBytes ? capacityBytes : usedBytes);
+            storagePoolDao.update(storagePoolId, storagePool);
+
+            return volumePath;
+        } catch (Exception e) {
+            String errMsg = "Unable to create PowerFlex Volume due to " + e.getMessage();
+            LOGGER.warn(errMsg);
+            throw new CloudRuntimeException(errMsg, e);
+        }
+    }
+
+    private String createTemplateVolume(TemplateInfo templateInfo, long storagePoolId) {
+        LOGGER.debug("Creating PowerFlex template volume");
+
+        StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId);
+        Preconditions.checkArgument(templateInfo != null, "templateInfo cannot be null");
+        Preconditions.checkArgument(storagePoolId > 0, "storagePoolId should be > 0");
+        Preconditions.checkArgument(storagePool != null && storagePool.getHostAddress() != null, "storagePool and host address should not be null");
+
+        try {
+            final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
+            final String scaleIOStoragePoolId = storagePool.getPath();
+            final Long sizeInBytes = templateInfo.getSize();
+            final long sizeInGb = (long) Math.ceil(sizeInBytes / (1024.0 * 1024.0 * 1024.0));
+            final String scaleIOVolumeName = String.format("%s-%s-%s-%s", ScaleIOUtil.TEMPLATE_PREFIX, templateInfo.getId(),
+                    storagePool.getUuid().split("-")[0].substring(4), ManagementServerImpl.customCsIdentifier.value());
+
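+            // Template-backing volumes are always created thin-provisioned here (unlike data volumes, which use the volume's own provisioning type).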
+            org.apache.cloudstack.storage.datastore.api.Volume scaleIOVolume = client.createVolume(scaleIOVolumeName, scaleIOStoragePoolId, (int) sizeInGb, Storage.ProvisioningType.THIN);
+
+            if (scaleIOVolume == null) {
+                throw new CloudRuntimeException("Failed to create template volume on PowerFlex cluster");
+            }
+
+            VMTemplateStoragePoolVO templatePoolRef = vmTemplatePoolDao.findByPoolTemplate(storagePoolId, templateInfo.getId(), null);
+            String templatePath = ScaleIOUtil.updatedPathWithVolumeName(scaleIOVolume.getId(), scaleIOVolumeName);
+            templatePoolRef.setInstallPath(templatePath);
+            templatePoolRef.setLocalDownloadPath(scaleIOVolume.getId());
+            templatePoolRef.setTemplateSize(scaleIOVolume.getSizeInKb() * 1024);
+            vmTemplatePoolDao.update(templatePoolRef.getId(), templatePoolRef);
+
+            long capacityBytes = storagePool.getCapacityBytes();
+            long usedBytes = storagePool.getUsedBytes();
+            usedBytes += templatePoolRef.getTemplateSize();
+            storagePool.setUsedBytes(usedBytes > capacityBytes ? capacityBytes : usedBytes);
+            storagePoolDao.update(storagePoolId, storagePool);
+
+            return templatePath;
+        } catch (Exception e) {
+            String errMsg = "Unable to create PowerFlex template volume due to " + e.getMessage();
+            LOGGER.warn(errMsg);
+            throw new CloudRuntimeException(errMsg, e);
+        }
+    }
+
+    @Override
+    public void createAsync(DataStore dataStore, DataObject dataObject, AsyncCompletionCallback<CreateCmdResult> callback) {
+        String scaleIOVolumePath = null;
+        String errMsg = null;
+        try {
+            if (dataObject.getType() == DataObjectType.VOLUME) {
+                LOGGER.debug("createAsync - creating volume");
+                scaleIOVolumePath = createVolume((VolumeInfo) dataObject, dataStore.getId());
+            } else if (dataObject.getType() == DataObjectType.TEMPLATE) {
+                LOGGER.debug("createAsync - creating template");
+                scaleIOVolumePath = createTemplateVolume((TemplateInfo)dataObject, dataStore.getId());
+            } else {
+                errMsg = "Invalid DataObjectType (" + dataObject.getType() + ") passed to createAsync";
+                LOGGER.error(errMsg);
+            }
+        } catch (Exception ex) {
+            errMsg = ex.getMessage();
+            LOGGER.error(errMsg);
+            if (callback == null) {
+                throw ex;
+            }
+        }
+
+        if (callback != null) {
+            CreateCmdResult result = new CreateCmdResult(scaleIOVolumePath, new Answer(null, errMsg == null, errMsg));
+            result.setResult(errMsg);
+            callback.complete(result);
+        }
+    }
+
+    @Override
+    public void deleteAsync(DataStore dataStore, DataObject dataObject, AsyncCompletionCallback<CommandResult> callback) {
+        Preconditions.checkArgument(dataObject != null, "dataObject cannot be null");
+
+        long storagePoolId = dataStore.getId();
+        StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId);
+        Preconditions.checkArgument(storagePoolId > 0, "storagePoolId should be > 0");
+        Preconditions.checkArgument(storagePool != null && storagePool.getHostAddress() != null, "storagePool and host address should not be null");
+
+        String errMsg = null;
+        String scaleIOVolumePath = null;
+        try {
+            boolean deleteResult = false;
+            if (dataObject.getType() == DataObjectType.VOLUME) {
+                LOGGER.debug("deleteAsync - deleting volume");
+                scaleIOVolumePath = ((VolumeInfo) dataObject).getPath();
+            } else if (dataObject.getType() == DataObjectType.SNAPSHOT) {
+                LOGGER.debug("deleteAsync - deleting snapshot");
+                scaleIOVolumePath = ((SnapshotInfo) dataObject).getPath();
+            } else if (dataObject.getType() == DataObjectType.TEMPLATE) {
+                LOGGER.debug("deleteAsync - deleting template");
+                scaleIOVolumePath = ((TemplateInfo) dataObject).getInstallPath();
+            } else {
+                errMsg = "Invalid DataObjectType (" + dataObject.getType() + ") passed to deleteAsync";
+                LOGGER.error(errMsg);
+                throw new CloudRuntimeException(errMsg);
+            }
+
+            try {
+                String scaleIOVolumeId = ScaleIOUtil.getVolumePath(scaleIOVolumePath);
+                final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
+                deleteResult = client.deleteVolume(scaleIOVolumeId);
+                if (!deleteResult) {
+                    errMsg = "Failed to delete PowerFlex volume with id: " + scaleIOVolumeId;
+                }
+
+                long usedBytes = storagePool.getUsedBytes();
+                usedBytes -= dataObject.getSize();
+                storagePool.setUsedBytes(usedBytes < 0 ? 0 : usedBytes);
+                storagePoolDao.update(storagePoolId, storagePool);
+            } catch (Exception e) {
+                errMsg = "Unable to delete PowerFlex volume: " + scaleIOVolumePath + " due to " + e.getMessage();
+                LOGGER.warn(errMsg);
+                throw new CloudRuntimeException(errMsg, e);
+            }
+        } catch (Exception ex) {
+            errMsg = ex.getMessage();
+            LOGGER.error(errMsg);
+            if (callback == null) {
+                throw ex;
+            }
+        }
+
+        if (callback != null) {
+            CommandResult result = new CommandResult();
+            result.setResult(errMsg);
+            callback.complete(result);
+        }
+    }
+
+    @Override
+    public void copyAsync(DataObject srcData, DataObject destData, AsyncCompletionCallback<CopyCommandResult> callback) {
+        copyAsync(srcData, destData, null, callback);
+    }
+
+    @Override
+    public void copyAsync(DataObject srcData, DataObject destData, Host destHost, AsyncCompletionCallback<CopyCommandResult> callback) {
+        Answer answer = null;
+        String errMsg = null;
+
+        try {
+            DataStore srcStore = srcData.getDataStore();
+            DataStore destStore = destData.getDataStore();
+            if (srcStore.getRole() == DataStoreRole.Primary && (destStore.getRole() == DataStoreRole.Primary && destData.getType() == DataObjectType.VOLUME)) {
+                if (srcData.getType() == DataObjectType.TEMPLATE) {
+                    answer = copyTemplateToVolume(srcData, destData, destHost);
+                    if (answer == null) {
+                        errMsg = "No answer for copying template to PowerFlex volume";
+                    } else if (!answer.getResult()) {
+                        errMsg = answer.getDetails();
+                    }
+                } else if (srcData.getType() == DataObjectType.VOLUME) {
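+                    // Same PowerFlex system (same System ID): migrate the volume natively on the storage side; different systems: copy the data through a host/SSVM endpoint using a CopyCommand.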
+                    if (isSameScaleIOStorageInstance(srcStore, destStore)) {
+                        answer = migrateVolume(srcData, destData);
+                    } else {
+                        answer = copyVolume(srcData, destData, destHost);
+                    }
+
+                    if (answer == null) {
+                        errMsg = "No answer for migrate PowerFlex volume";
+                    } else if (!answer.getResult()) {
+                        errMsg = answer.getDetails();
+                    }
+                } else {
+                    errMsg = "Unsupported copy operation from src object: (" + srcData.getType() + ", " + srcData.getDataStore() + "), dest object: ("
+                            + destData.getType() + ", " + destData.getDataStore() + ")";
+                    LOGGER.warn(errMsg);
+                }
+            } else {
+                errMsg = "Unsupported copy operation";
+            }
+        } catch (Exception e) {
+            LOGGER.debug("Failed to copy due to " + e.getMessage(), e);
+            errMsg = e.toString();
+        }
+
+        CopyCommandResult result = new CopyCommandResult(null, answer);
+        result.setResult(errMsg);
+        callback.complete(result);
+    }
+
+    private Answer copyTemplateToVolume(DataObject srcData, DataObject destData, Host destHost) {
+        // Copy PowerFlex/ScaleIO template to volume
+        LOGGER.debug("Initiating copy from PowerFlex template volume on host " + destHost != null ? destHost.getId() : "");
+        int primaryStorageDownloadWait = StorageManager.PRIMARY_STORAGE_DOWNLOAD_WAIT.value();
+        CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), primaryStorageDownloadWait, VirtualMachineManager.ExecuteInSequence.value());
+
+        Answer answer = null;
+        EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcData.getDataStore());
+        if (ep == null) {
+            String errorMsg = "No remote endpoint to send command, check if host or ssvm is down?";
+            LOGGER.error(errorMsg);
+            answer = new Answer(cmd, false, errorMsg);
+        } else {
+            answer = ep.sendMessage(cmd);
+        }
+
+        return answer;
+    }
+
+    private Answer copyVolume(DataObject srcData, DataObject destData, Host destHost) {
+        // Copy PowerFlex/ScaleIO volume
+        LOGGER.debug("Initiating copy from PowerFlex volume on host " + destHost != null ? destHost.getId() : "");
+        String value = configDao.getValue(Config.CopyVolumeWait.key());
+        int copyVolumeWait = NumbersUtil.parseInt(value, Integer.parseInt(Config.CopyVolumeWait.getDefaultValue()));
+
+        CopyCommand cmd = new CopyCommand(srcData.getTO(), destData.getTO(), copyVolumeWait, VirtualMachineManager.ExecuteInSequence.value());
+
+        Answer answer = null;
+        EndPoint ep = destHost != null ? RemoteHostEndPoint.getHypervisorHostEndPoint(destHost) : selector.select(srcData.getDataStore());
+        if (ep == null) {
+            String errorMsg = "No remote endpoint to send command, check if host or ssvm is down?";
+            LOGGER.error(errorMsg);
+            answer = new Answer(cmd, false, errorMsg);
+        } else {
+            answer = ep.sendMessage(cmd);
+        }
+
+        return answer;
+    }
+
+    private Answer migrateVolume(DataObject srcData, DataObject destData) {
+        // Volume migration within same PowerFlex/ScaleIO cluster (with same System ID)
+        DataStore srcStore = srcData.getDataStore();
+        DataStore destStore = destData.getDataStore();
+        Answer answer = null;
+        try {
+            long srcPoolId = srcStore.getId();
+            long destPoolId = destStore.getId();
+
+            final ScaleIOGatewayClient client = getScaleIOClient(srcPoolId);
+            final String srcVolumePath = ((VolumeInfo) srcData).getPath();
+            final String srcVolumeId = ScaleIOUtil.getVolumePath(srcVolumePath);
+            final StoragePoolVO destStoragePool = storagePoolDao.findById(destPoolId);
+            final String destStoragePoolId = destStoragePool.getPath();
+            int migrationTimeout = StorageManager.KvmStorageOfflineMigrationWait.value();
+            boolean migrateStatus = client.migrateVolume(srcVolumeId, destStoragePoolId, migrationTimeout);
+            if (migrateStatus) {
+                String newVolumeName = String.format("%s-%s-%s-%s", ScaleIOUtil.VOLUME_PREFIX, destData.getId(),
+                        destStoragePool.getUuid().split("-")[0].substring(4), ManagementServerImpl.customCsIdentifier.value());
+                boolean renamed = client.renameVolume(srcVolumeId, newVolumeName);
+
+                if (srcData.getId() != destData.getId()) {
+                    VolumeVO destVolume = volumeDao.findById(destData.getId());
+                    // Volume Id in the PowerFlex/ScaleIO pool remains the same after the migration
+                    // Update PowerFlex volume name only after it is renamed, to maintain the consistency
+                    if (renamed) {
+                        String newVolumePath = ScaleIOUtil.updatedPathWithVolumeName(srcVolumeId, newVolumeName);
+                        destVolume.set_iScsiName(newVolumePath);
+                        destVolume.setPath(newVolumePath);
+                    } else {
+                        destVolume.set_iScsiName(srcVolumePath);
+                        destVolume.setPath(srcVolumePath);
+                    }
+                    volumeDao.update(destData.getId(), destVolume);
+
+                    VolumeVO srcVolume = volumeDao.findById(srcData.getId());
+                    srcVolume.set_iScsiName(null);
+                    srcVolume.setPath(null);
+                    srcVolume.setFolder(null);
+                    volumeDao.update(srcData.getId(), srcVolume);
+                } else {
+                    // Live migrate volume
+                    VolumeVO volume = volumeDao.findById(srcData.getId());
+                    Long oldPoolId = volume.getPoolId();
+                    volume.setPoolId(destPoolId);
+                    volume.setLastPoolId(oldPoolId);
+                    volumeDao.update(srcData.getId(), volume);
+                }
+
+                List<SnapshotVO> snapshots = snapshotDao.listByVolumeId(srcData.getId());
+                if (CollectionUtils.isNotEmpty(snapshots)) {
+                    for (SnapshotVO snapshot : snapshots) {
+                        SnapshotDataStoreVO snapshotStore = snapshotDataStoreDao.findBySnapshot(snapshot.getId(), DataStoreRole.Primary);
+                        if (snapshotStore == null) {
+                            continue;
+                        }
+
+                        String snapshotVolumeId = ScaleIOUtil.getVolumePath(snapshotStore.getInstallPath());
+                        String newSnapshotName = String.format("%s-%s-%s-%s", ScaleIOUtil.SNAPSHOT_PREFIX, snapshot.getId(),
+                                destStoragePool.getUuid().split("-")[0].substring(4), ManagementServerImpl.customCsIdentifier.value());
+                        renamed = client.renameVolume(snapshotVolumeId, newSnapshotName);
+
+                        snapshotStore.setDataStoreId(destPoolId);
+                        // Snapshot Id in the PowerFlex/ScaleIO pool remains the same after the migration
+                        // Update PowerFlex snapshot name only after it is renamed, to maintain the consistency
+                        if (renamed) {
+                            snapshotStore.setInstallPath(ScaleIOUtil.updatedPathWithVolumeName(snapshotVolumeId, newSnapshotName));
+                        }
+                        snapshotDataStoreDao.update(snapshotStore.getId(), snapshotStore);
+                    }
+                }
+
+                answer = new Answer(null, true, null);
+            } else {
+                String errorMsg = "Failed to migrate PowerFlex volume: " + srcData.getId() + " to storage pool " + destPoolId;
+                LOGGER.debug(errorMsg);
+                answer = new Answer(null, false, errorMsg);
+            }
+        } catch (Exception e) {
+            LOGGER.error("Failed to migrate PowerFlex volume: " + srcData.getId() + " due to: " + e.getMessage());
+            answer = new Answer(null, false, e.getMessage());
+        }
+
+        return answer;
+    }
+
+    private boolean isSameScaleIOStorageInstance(DataStore srcStore, DataStore destStore) {
+        long srcPoolId = srcStore.getId();
+        String srcPoolSystemId = null;
+        StoragePoolDetailVO srcPoolSystemIdDetail = storagePoolDetailsDao.findDetail(srcPoolId, ScaleIOGatewayClient.STORAGE_POOL_SYSTEM_ID);
+        if (srcPoolSystemIdDetail != null) {
+            srcPoolSystemId = srcPoolSystemIdDetail.getValue();
+        }
+
+        long destPoolId = destStore.getId();
+        String destPoolSystemId = null;
+        StoragePoolDetailVO destPoolSystemIdDetail = storagePoolDetailsDao.findDetail(destPoolId, ScaleIOGatewayClient.STORAGE_POOL_SYSTEM_ID);
+        if (destPoolSystemIdDetail != null) {
+            destPoolSystemId = destPoolSystemIdDetail.getValue();
+        }
+
+        if (Strings.isNullOrEmpty(srcPoolSystemId) || Strings.isNullOrEmpty(destPoolSystemId)) {
+            throw new CloudRuntimeException("Failed to validate PowerFlex pools compatibility for migration as storage instance details are not available");
+        }
+
+        return srcPoolSystemId.equals(destPoolSystemId);
+    }
+
+    @Override
+    public boolean canCopy(DataObject srcData, DataObject destData) {
+        DataStore srcStore = srcData.getDataStore();
+        DataStore destStore = destData.getDataStore();
+        if ((srcStore.getRole() == DataStoreRole.Primary && (srcData.getType() == DataObjectType.TEMPLATE || srcData.getType() == DataObjectType.VOLUME))
+                && (destStore.getRole() == DataStoreRole.Primary && destData.getType() == DataObjectType.VOLUME)) {
+            StoragePoolVO srcPoolVO = storagePoolDao.findById(srcStore.getId());
+            StoragePoolVO destPoolVO = storagePoolDao.findById(destStore.getId());
+            if (srcPoolVO != null && srcPoolVO.getPoolType() == Storage.StoragePoolType.PowerFlex
+                    && destPoolVO != null && destPoolVO.getPoolType() == Storage.StoragePoolType.PowerFlex) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    private void resizeVolume(VolumeInfo volumeInfo) {
+        LOGGER.debug("Resizing PowerFlex volume");
+
+        Preconditions.checkArgument(volumeInfo != null, "volumeInfo cannot be null");
+
+        try {
+            String scaleIOVolumeId = ScaleIOUtil.getVolumePath(volumeInfo.getPath());
+            Long storagePoolId = volumeInfo.getPoolId();
+
+            ResizeVolumePayload payload = (ResizeVolumePayload)volumeInfo.getpayload();
+            long newSizeInBytes = payload.newSize != null ? payload.newSize : volumeInfo.getSize();
+            // Only size increases are allowed, and the size must be specified in multiples of 8 GB
+            if (newSizeInBytes <= volumeInfo.getSize()) {
+                throw new CloudRuntimeException("Only increase size is allowed for volume: " + volumeInfo.getName());
+            }
+
+            long newSizeInGB = newSizeInBytes / (1024 * 1024 * 1024);
+            long newSizeIn8gbBoundary = (long) (Math.ceil(newSizeInGB / 8.0) * 8.0);
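+            // Illustrative example: a 10 GiB request becomes ceil(10 / 8.0) * 8 = 16 GiB; note the GiB value is truncated from bytes before rounding up to the 8 GB boundary.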
+            final ScaleIOGatewayClient client = getScaleIOClient(storagePoolId);
+            org.apache.cloudstack.storage.datastore.api.Volume scaleIOVolume = client.resizeVolume(scaleIOVolumeId, (int) newSizeIn8gbBoundary);
+            if (scaleIOVolume == null) {
+                throw new CloudRuntimeException("Failed to resize volume: " + volumeInfo.getName());
+            }
+
+            VolumeVO volume = volumeDao.findById(volumeInfo.getId());
+            long oldVolumeSize = volume.getSize();
+            volume.setSize(scaleIOVolume.getSizeInKb() * 1024);
+            volumeDao.update(volume.getId(), volume);
+
+            StoragePoolVO storagePool = storagePoolDao.findById(storagePoolId);
+            long capacityBytes = storagePool.getCapacityBytes();
+            long usedBytes = storagePool.getUsedBytes();
+
+            long newVolumeSize = volume.getSize();
+            usedBytes += newVolumeSize - oldVolumeSize;
+            storagePool.setUsedBytes(usedBytes > capacityBytes ? capacityBytes : usedBytes);
+            storagePoolDao.update(storagePoolId, storagePool);
+        } catch (Exception e) {
+            String errMsg = "Unable to resize PowerFlex volume: " + volumeInfo.getId() + " due to " + e.getMessage();
+            LOGGER.warn(errMsg);
+            throw new CloudRuntimeException(errMsg, e);
+        }
+    }
+
+    @Override
+    public void resize(DataObject dataObject, AsyncCompletionCallback<CreateCmdResult> callback) {
+        String scaleIOVolumePath = null;
+        String errMsg = null;
+        try {
+            if (dataObject.getType() == DataObjectType.VOLUME) {
+                scaleIOVolumePath = ((VolumeInfo) dataObject).getPath();
+                resizeVolume((VolumeInfo) dataObject);
+            } else {
+                errMsg = "Invalid DataObjectType (" + dataObject.getType() + ") passed to resize";
+            }
+        } catch (Exception ex) {
+            errMsg = ex.getMessage();
+            LOGGER.error(errMsg);
+            if (callback == null) {
+                throw ex;
+            }
+        }
+
+        if (callback != null) {
+            CreateCmdResult result = new CreateCmdResult(scaleIOVolumePath, new Answer(null, errMsg == null, errMsg));
+            result.setResult(errMsg);
+            callback.complete(result);
+        }
+    }
+
+    @Override
+    public void handleQualityOfServiceForVolumeMigration(VolumeInfo volumeInfo, QualityOfServiceState qualityOfServiceState) {
+    }
+
+    @Override
+    public boolean canProvideStorageStats() {
+        return true;
+    }
+
+    @Override
+    public Pair<Long, Long> getStorageStats(StoragePool storagePool) {
+        Preconditions.checkArgument(storagePool != null, "storagePool cannot be null");
+
+        try {
+            final ScaleIOGatewayClient client = getScaleIOClient(storagePool.getId());
+            StoragePoolStatistics poolStatistics = client.getStoragePoolStatistics(storagePool.getPath());
+            if (poolStatistics != null && poolStatistics.getNetMaxCapacityInBytes() != null && poolStatistics.getNetUsedCapacityInBytes() != null) {
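+                // Pair semantics: first = net maximum capacity, second = net used capacity, both in bytes as reported by the PowerFlex gateway.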
+                Long capacityBytes = poolStatistics.getNetMaxCapacityInBytes();
+                Long usedBytes = poolStatistics.getNetUsedCapacityInBytes();
+                return new Pair<Long, Long>(capacityBytes, usedBytes);
+            }
+        }  catch (Exception e) {
+            String errMsg = "Unable to get storage stats for the pool: " + storagePool.getId() + " due to " + e.getMessage();
+            LOGGER.warn(errMsg);
+            throw new CloudRuntimeException(errMsg, e);
+        }
+
+        return null;
+    }
+
+    @Override
+    public boolean canProvideVolumeStats() {
+        return true;
+    }
+
+    @Override
+    public Pair<Long, Long> getVolumeStats(StoragePool storagePool, String volumePath) {
+        Preconditions.checkArgument(storagePool != null, "storagePool cannot be null");
+        Preconditions.checkArgument(!Strings.isNullOrEmpty(volumePath), "volumePath cannot be null");
+
+        try {
+            final ScaleIOGatewayClient client = getScaleIOClient(storagePool.getId());
+            VolumeStatistics volumeStatistics = client.getVolumeStatistics(ScaleIOUtil.getVolumePath(volumePath));
+            if (volumeStatistics != null) {
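+                // Pair semantics: first = net provisioned size, second = allocated size, both in bytes as reported by the PowerFlex gateway.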
+                Long provisionedSizeInBytes = volumeStatistics.getNetProvisionedAddressesInBytes();
+                Long allocatedSizeInBytes = volumeStatistics.getAllocatedSizeInBytes();
+                return new Pair<Long, Long>(provisionedSizeInBytes, allocatedSizeInBytes);
+            }
+        }  catch (Exception e) {
+            String errMsg = "Unable to get stats for the volume: " + volumePath + " in the pool: " + storagePool.getId() + " due to " + e.getMessage();
+            LOGGER.warn(errMsg);
+            throw new CloudRuntimeException(errMsg, e);
+        }
+
+        return null;
+    }
+
+    @Override
+    public boolean canHostAccessStoragePool(Host host, StoragePool pool) {
+        if (host == null || pool == null) {
+            return false;
+        }
+
+        try {
+            final ScaleIOGatewayClient client = getScaleIOClient(pool.getId());
... 5467 lines suppressed ...