Posted to commits@cloudstack.apache.org by ro...@apache.org on 2018/05/21 09:21:18 UTC

[cloudstack] branch master updated (2b7d6cf -> 7c6777b)

This is an automated email from the ASF dual-hosted git repository.

rohit pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/cloudstack.git.


    from 2b7d6cf  Merge branch '4.11': Add option on if to VM HA power-on a OOB-shut-off-VM (#2473)
     add acc5fdc  CLOUDSTACK-10290: allow config drives on primary storage for KVM (#2651)
     new 7c6777b  Merge branch '4.11': allow config drives on primary storage for KVM (#2651)

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../network/element/UserDataServiceProvider.java   |   2 +-
 client/pom.xml                                     |   5 +
 .../agent/api/HandleConfigDriveIsoCommand.java     |  40 +--
 .../java/com/cloud/vm/VirtualMachineManager.java   |  11 +-
 .../com/cloud/vm/VirtualMachineManagerImpl.java    |   5 +-
 engine/pom.xml                                     |   1 +
 .../storage/configdrive}/pom.xml                   |  12 +-
 .../storage/configdrive/ConfigDrive.java           |  43 +--
 .../storage/configdrive/ConfigDriveBuilder.java    | 222 +++++++++++++
 .../configdrive/ConfigDriveBuilderTest.java        |  65 ++++
 packaging/centos7/cloud.spec                       |   2 +-
 plugins/hypervisors/kvm/pom.xml                    |   5 +
 .../kvm/resource/LibvirtComputingResource.java     |   5 +-
 .../LibvirtHandleConfigDriveCommandWrapper.java    |  79 +++++
 .../kvm/storage/KVMStoragePoolManager.java         |   2 +-
 server/pom.xml                                     |   5 +
 .../network/element/ConfigDriveNetworkElement.java | 316 ++++++++++---------
 .../main/java/com/cloud/vm/UserVmManagerImpl.java  |  48 +--
 .../element/ConfigDriveNetworkElementTest.java     | 113 ++-----
 services/secondary-storage/server/pom.xml          |   5 +
 .../resource/NfsSecondaryStorageResource.java      | 350 ++++-----------------
 21 files changed, 699 insertions(+), 637 deletions(-)
 copy {plugins/outofbandmanagement-drivers/ipmitool => engine/storage/configdrive}/pom.xml (86%)
 copy api/src/main/java/com/cloud/vm/NicIpAlias.java => engine/storage/configdrive/src/main/java/org/apache/cloudstack/storage/configdrive/ConfigDrive.java (56%)
 create mode 100644 engine/storage/configdrive/src/main/java/org/apache/cloudstack/storage/configdrive/ConfigDriveBuilder.java
 create mode 100644 engine/storage/configdrive/src/test/java/org/apache/cloudstack/storage/configdrive/ConfigDriveBuilderTest.java
 create mode 100644 plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtHandleConfigDriveCommandWrapper.java
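
For context on the headline change: the new ConfigDriveBuilder added under
engine/storage/configdrive builds a cloud-init style config drive ISO from VM
metadata, which the KVM agent can then serve from primary storage. Below is a
minimal sketch of that general idea, assuming an OpenStack-style file layout
and the genisoimage tool; the class name, layout, and flags are illustrative
assumptions, not the actual CloudStack API:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Illustrative sketch only: stage metadata on disk, then pack it into an
    // ISO9660 image labeled "config-2" so cloud-init recognizes it as a
    // config drive. The real ConfigDriveBuilder in this commit differs in
    // layout and error handling.
    public class ConfigDriveIsoSketch {

        public static Path buildIso(Path workDir, String vmName, String userData)
                throws IOException, InterruptedException {
            // OpenStack-style directory layout expected by cloud-init.
            Path latest = workDir.resolve("openstack/latest");
            Files.createDirectories(latest);
            Files.write(latest.resolve("user_data"),
                    userData.getBytes(StandardCharsets.UTF_8));

            // Write the ISO next to (not inside) the staged tree.
            Path iso = workDir.resolveSibling(vmName + ".iso");
            Process p = new ProcessBuilder("genisoimage", "-o", iso.toString(),
                    "-J", "-R", "-V", "config-2", workDir.toString())
                    .inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("genisoimage failed for " + vmName);
            }
            return iso;
        }
    }

Guests detect the "config-2" volume label, mount the ISO read-only, and read
their user data from it.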

-- 
To stop receiving notification emails like this one, please contact
rohit@apache.org.

[cloudstack] 01/01: Merge branch '4.11': allow config drives on primary storage for KVM (#2651)

Posted by ro...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

rohit pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/cloudstack.git

commit 7c6777b8d33fb0dbb1ecacb1fbe96883fc34ca4f
Merge: 2b7d6cf acc5fdc
Author: Rohit Yadav <ro...@shapeblue.com>
AuthorDate: Mon May 21 14:50:44 2018 +0530

    Merge branch '4.11': allow config drives on primary storage for KVM (#2651)
    
    Signed-off-by: Rohit Yadav <ro...@shapeblue.com>

 .../network/element/UserDataServiceProvider.java   |   2 +-
 client/pom.xml                                     |   5 +
 .../agent/api/HandleConfigDriveIsoCommand.java     |  40 +--
 .../java/com/cloud/vm/VirtualMachineManager.java   |  11 +-
 .../com/cloud/vm/VirtualMachineManagerImpl.java    |   5 +-
 engine/pom.xml                                     |   1 +
 engine/storage/configdrive/pom.xml                 |  43 +++
 .../storage/configdrive/ConfigDrive.java           |  36 +++
 .../storage/configdrive/ConfigDriveBuilder.java    | 222 +++++++++++++
 .../configdrive/ConfigDriveBuilderTest.java        |  65 ++++
 packaging/centos7/cloud.spec                       |   2 +-
 plugins/hypervisors/kvm/pom.xml                    |   5 +
 .../kvm/resource/LibvirtComputingResource.java     |   5 +-
 .../LibvirtHandleConfigDriveCommandWrapper.java    |  79 +++++
 .../kvm/storage/KVMStoragePoolManager.java         |   2 +-
 server/pom.xml                                     |   5 +
 .../network/element/ConfigDriveNetworkElement.java | 316 ++++++++++---------
 .../main/java/com/cloud/vm/UserVmManagerImpl.java  |  48 +--
 .../element/ConfigDriveNetworkElementTest.java     | 113 ++-----
 services/secondary-storage/server/pom.xml          |   5 +
 .../resource/NfsSecondaryStorageResource.java      | 350 ++++-----------------
 21 files changed, 758 insertions(+), 602 deletions(-)
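
On the agent side, the new LibvirtHandleConfigDriveCommandWrapper is what lets
the generated ISO land on a primary storage pool rather than on the
secondary-storage NFS share. A hedged sketch of the general mechanism follows;
the class and method names here are hypothetical, not the wrapper's real API:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    // Hypothetical sketch: copy a built config drive ISO into the local mount
    // point of a primary storage pool so it can be attached to the VM as a
    // CD-ROM, avoiding the round trip through secondary storage.
    public class ConfigDriveToPrimarySketch {

        public static Path placeOnPrimary(Path isoFile, Path poolMountPoint,
                String vmName) throws IOException {
            Path target = poolMountPoint.resolve(vmName)
                    .resolve(isoFile.getFileName());
            Files.createDirectories(target.getParent());
            // Overwrite any stale drive left by a previous start of this VM.
            return Files.copy(isoFile, target,
                    StandardCopyOption.REPLACE_EXISTING);
        }
    }

Keeping the ISO next to the VM's volumes means hosts no longer need the
secondary storage mounted just to start a VM with a config drive.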

diff --cc engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java
index ed718e0,0000000..542cb4e
mode 100755,000000..100755
--- a/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java
+++ b/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java
@@@ -1,5079 -1,0 +1,5080 @@@
 +// Licensed to the Apache Software Foundation (ASF) under one
 +// or more contributor license agreements.  See the NOTICE file
 +// distributed with this work for additional information
 +// regarding copyright ownership.  The ASF licenses this file
 +// to you under the Apache License, Version 2.0 (the
 +// "License"); you may not use this file except in compliance
 +// with the License.  You may obtain a copy of the License at
 +//
 +//   http://www.apache.org/licenses/LICENSE-2.0
 +//
 +// Unless required by applicable law or agreed to in writing,
 +// software distributed under the License is distributed on an
 +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 +// KIND, either express or implied.  See the License for the
 +// specific language governing permissions and limitations
 +// under the License.
 +
 +package com.cloud.vm;
 +
 +import java.net.URI;
 +import java.sql.PreparedStatement;
 +import java.sql.ResultSet;
 +import java.sql.SQLException;
 +import java.util.ArrayList;
 +import java.util.Arrays;
 +import java.util.Collections;
 +import java.util.Date;
 +import java.util.HashMap;
 +import java.util.LinkedHashMap;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.TimeZone;
 +import java.util.UUID;
 +import java.util.concurrent.Executors;
 +import java.util.concurrent.ScheduledExecutorService;
 +import java.util.concurrent.TimeUnit;
 +
 +import javax.inject.Inject;
 +import javax.naming.ConfigurationException;
 +
 +import org.apache.cloudstack.affinity.dao.AffinityGroupVMMapDao;
 +import org.apache.cloudstack.ca.CAManager;
 +import org.apache.cloudstack.context.CallContext;
 +import org.apache.cloudstack.engine.orchestration.service.NetworkOrchestrationService;
 +import org.apache.cloudstack.engine.orchestration.service.VolumeOrchestrationService;
 +import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
 +import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStoreInfo;
 +import org.apache.cloudstack.engine.subsystem.api.storage.StoragePoolAllocator;
 +import org.apache.cloudstack.framework.ca.Certificate;
 +import org.apache.cloudstack.framework.config.ConfigDepot;
 +import org.apache.cloudstack.framework.config.ConfigKey;
 +import org.apache.cloudstack.framework.config.Configurable;
 +import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
 +import org.apache.cloudstack.framework.jobs.AsyncJob;
 +import org.apache.cloudstack.framework.jobs.AsyncJobExecutionContext;
 +import org.apache.cloudstack.framework.jobs.AsyncJobManager;
 +import org.apache.cloudstack.framework.jobs.Outcome;
 +import org.apache.cloudstack.framework.jobs.dao.VmWorkJobDao;
 +import org.apache.cloudstack.framework.jobs.impl.AsyncJobVO;
 +import org.apache.cloudstack.framework.jobs.impl.JobSerializerHelper;
 +import org.apache.cloudstack.framework.jobs.impl.OutcomeImpl;
 +import org.apache.cloudstack.framework.jobs.impl.VmWorkJobVO;
 +import org.apache.cloudstack.framework.messagebus.MessageBus;
 +import org.apache.cloudstack.framework.messagebus.MessageDispatcher;
 +import org.apache.cloudstack.framework.messagebus.MessageHandler;
 +import org.apache.cloudstack.jobs.JobInfo;
 +import org.apache.cloudstack.managed.context.ManagedContextRunnable;
 +import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
 +import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
 +import org.apache.cloudstack.storage.to.VolumeObjectTO;
 +import org.apache.cloudstack.utils.identity.ManagementServerNode;
 +import org.apache.commons.collections.CollectionUtils;
 +import org.apache.commons.collections.MapUtils;
 +import org.apache.log4j.Logger;
 +
 +import com.cloud.agent.AgentManager;
 +import com.cloud.agent.Listener;
 +import com.cloud.agent.api.AgentControlAnswer;
 +import com.cloud.agent.api.AgentControlCommand;
 +import com.cloud.agent.api.Answer;
 +import com.cloud.agent.api.AttachOrDettachConfigDriveCommand;
 +import com.cloud.agent.api.CheckVirtualMachineAnswer;
 +import com.cloud.agent.api.CheckVirtualMachineCommand;
 +import com.cloud.agent.api.ClusterVMMetaDataSyncAnswer;
 +import com.cloud.agent.api.ClusterVMMetaDataSyncCommand;
 +import com.cloud.agent.api.Command;
 +import com.cloud.agent.api.MigrateCommand;
 +import com.cloud.agent.api.ModifyTargetsCommand;
 +import com.cloud.agent.api.PingRoutingCommand;
 +import com.cloud.agent.api.PlugNicAnswer;
 +import com.cloud.agent.api.PlugNicCommand;
 +import com.cloud.agent.api.PrepareForMigrationCommand;
 +import com.cloud.agent.api.RebootAnswer;
 +import com.cloud.agent.api.RebootCommand;
 +import com.cloud.agent.api.ReplugNicAnswer;
 +import com.cloud.agent.api.ReplugNicCommand;
 +import com.cloud.agent.api.RestoreVMSnapshotAnswer;
 +import com.cloud.agent.api.RestoreVMSnapshotCommand;
 +import com.cloud.agent.api.ScaleVmCommand;
 +import com.cloud.agent.api.StartAnswer;
 +import com.cloud.agent.api.StartCommand;
 +import com.cloud.agent.api.StartupCommand;
 +import com.cloud.agent.api.StartupRoutingCommand;
 +import com.cloud.agent.api.StopAnswer;
 +import com.cloud.agent.api.StopCommand;
 +import com.cloud.agent.api.UnPlugNicAnswer;
 +import com.cloud.agent.api.UnPlugNicCommand;
 +import com.cloud.agent.api.UnregisterVMCommand;
 +import com.cloud.agent.api.routing.NetworkElementCommand;
 +import com.cloud.agent.api.to.DiskTO;
 +import com.cloud.agent.api.to.GPUDeviceTO;
 +import com.cloud.agent.api.to.NicTO;
 +import com.cloud.agent.api.to.VirtualMachineTO;
 +import com.cloud.agent.manager.Commands;
 +import com.cloud.agent.manager.allocator.HostAllocator;
 +import com.cloud.alert.AlertManager;
 +import com.cloud.capacity.CapacityManager;
 +import com.cloud.configuration.Config;
 +import com.cloud.dc.ClusterDetailsDao;
 +import com.cloud.dc.ClusterDetailsVO;
 +import com.cloud.dc.DataCenter;
 +import com.cloud.dc.DataCenterVO;
 +import com.cloud.dc.HostPodVO;
 +import com.cloud.dc.Pod;
 +import com.cloud.dc.dao.ClusterDao;
 +import com.cloud.dc.dao.DataCenterDao;
 +import com.cloud.dc.dao.HostPodDao;
 +import com.cloud.deploy.DataCenterDeployment;
 +import com.cloud.deploy.DeployDestination;
 +import com.cloud.deploy.DeploymentPlan;
 +import com.cloud.deploy.DeploymentPlanner;
 +import com.cloud.deploy.DeploymentPlanner.ExcludeList;
 +import com.cloud.deploy.DeploymentPlanningManager;
 +import com.cloud.domain.dao.DomainDao;
 +import com.cloud.event.EventTypes;
 +import com.cloud.event.UsageEventUtils;
 +import com.cloud.exception.AffinityConflictException;
 +import com.cloud.exception.AgentUnavailableException;
 +import com.cloud.exception.ConcurrentOperationException;
 +import com.cloud.exception.ConnectionException;
 +import com.cloud.exception.InsufficientAddressCapacityException;
 +import com.cloud.exception.InsufficientCapacityException;
 +import com.cloud.exception.InsufficientServerCapacityException;
 +import com.cloud.exception.InsufficientVirtualNetworkCapacityException;
 +import com.cloud.exception.InvalidParameterValueException;
 +import com.cloud.exception.OperationTimedoutException;
 +import com.cloud.exception.ResourceUnavailableException;
 +import com.cloud.exception.StorageUnavailableException;
 +import com.cloud.gpu.dao.VGPUTypesDao;
 +import com.cloud.ha.HighAvailabilityManager;
 +import com.cloud.ha.HighAvailabilityManager.WorkType;
 +import com.cloud.host.Host;
 +import com.cloud.host.HostVO;
 +import com.cloud.host.Status;
 +import com.cloud.host.dao.HostDao;
 +import com.cloud.hypervisor.Hypervisor.HypervisorType;
 +import com.cloud.hypervisor.HypervisorGuru;
 +import com.cloud.hypervisor.HypervisorGuruManager;
 +import com.cloud.network.Network;
 +import com.cloud.network.NetworkModel;
 +import com.cloud.network.dao.NetworkDao;
 +import com.cloud.network.dao.NetworkVO;
 +import com.cloud.network.router.VirtualRouter;
 +import com.cloud.network.rules.RulesManager;
 +import com.cloud.offering.DiskOffering;
 +import com.cloud.offering.DiskOfferingInfo;
 +import com.cloud.offering.ServiceOffering;
 +import com.cloud.org.Cluster;
 +import com.cloud.resource.ResourceManager;
 +import com.cloud.resource.ResourceState;
 +import com.cloud.service.ServiceOfferingVO;
 +import com.cloud.service.dao.ServiceOfferingDao;
 +import com.cloud.storage.DiskOfferingVO;
 +import com.cloud.storage.ScopeType;
 +import com.cloud.storage.Storage.ImageFormat;
 +import com.cloud.storage.StoragePool;
 +import com.cloud.storage.VMTemplateVO;
 +import com.cloud.storage.Volume;
 +import com.cloud.storage.Volume.Type;
 +import com.cloud.storage.VolumeVO;
 +import com.cloud.storage.dao.DiskOfferingDao;
 +import com.cloud.storage.dao.GuestOSCategoryDao;
 +import com.cloud.storage.dao.GuestOSDao;
 +import com.cloud.storage.dao.StoragePoolHostDao;
 +import com.cloud.storage.dao.VMTemplateDao;
 +import com.cloud.storage.dao.VolumeDao;
 +import com.cloud.template.VirtualMachineTemplate;
 +import com.cloud.user.Account;
 +import com.cloud.user.User;
 +import com.cloud.utils.DateUtil;
 +import com.cloud.utils.Journal;
 +import com.cloud.utils.Pair;
 +import com.cloud.utils.Predicate;
 +import com.cloud.utils.ReflectionUse;
 +import com.cloud.utils.StringUtils;
 +import com.cloud.utils.Ternary;
 +import com.cloud.utils.component.ManagerBase;
 +import com.cloud.utils.concurrency.NamedThreadFactory;
 +import com.cloud.utils.db.DB;
 +import com.cloud.utils.db.EntityManager;
 +import com.cloud.utils.db.GlobalLock;
 +import com.cloud.utils.db.Transaction;
 +import com.cloud.utils.db.TransactionCallbackWithException;
 +import com.cloud.utils.db.TransactionCallbackWithExceptionNoReturn;
 +import com.cloud.utils.db.TransactionLegacy;
 +import com.cloud.utils.db.TransactionStatus;
 +import com.cloud.utils.exception.CloudRuntimeException;
 +import com.cloud.utils.exception.ExecutionException;
 +import com.cloud.utils.fsm.NoTransitionException;
 +import com.cloud.utils.fsm.StateMachine2;
 +import com.cloud.vm.ItWorkVO.Step;
 +import com.cloud.vm.VirtualMachine.Event;
 +import com.cloud.vm.VirtualMachine.PowerState;
 +import com.cloud.vm.VirtualMachine.State;
 +import com.cloud.vm.dao.NicDao;
 +import com.cloud.vm.dao.UserVmDao;
 +import com.cloud.vm.dao.UserVmDetailsDao;
 +import com.cloud.vm.dao.VMInstanceDao;
 +import com.cloud.vm.snapshot.VMSnapshotManager;
 +import com.cloud.vm.snapshot.VMSnapshotVO;
 +import com.cloud.vm.snapshot.dao.VMSnapshotDao;
 +import com.google.common.base.Strings;
 +
 +public class VirtualMachineManagerImpl extends ManagerBase implements VirtualMachineManager, VmWorkJobHandler, Listener, Configurable {
 +    private static final Logger s_logger = Logger.getLogger(VirtualMachineManagerImpl.class);
 +
 +    public static final String VM_WORK_JOB_HANDLER = VirtualMachineManagerImpl.class.getSimpleName();
 +
 +    private static final String VM_SYNC_ALERT_SUBJECT = "VM state sync alert";
 +
 +    @Inject
 +    DataStoreManager dataStoreMgr;
 +    @Inject
 +    protected NetworkOrchestrationService _networkMgr;
 +    @Inject
 +    protected NetworkModel _networkModel;
 +    @Inject
 +    protected AgentManager _agentMgr;
 +    @Inject
 +    protected VMInstanceDao _vmDao;
 +    @Inject
 +    protected ServiceOfferingDao _offeringDao;
 +    @Inject
 +    protected DiskOfferingDao _diskOfferingDao;
 +    @Inject
 +    protected VMTemplateDao _templateDao;
 +    @Inject
 +    protected DomainDao _domainDao;
 +    @Inject
 +    protected ItWorkDao _workDao;
 +    @Inject
 +    protected UserVmDao _userVmDao;
 +    @Inject
 +    protected UserVmService _userVmService;
 +    @Inject
 +    protected CapacityManager _capacityMgr;
 +    @Inject
 +    protected NicDao _nicsDao;
 +    @Inject
 +    protected HostDao _hostDao;
 +    @Inject
 +    protected AlertManager _alertMgr;
 +    @Inject
 +    protected GuestOSCategoryDao _guestOsCategoryDao;
 +    @Inject
 +    protected GuestOSDao _guestOsDao;
 +    @Inject
 +    protected VolumeDao _volsDao;
 +    @Inject
 +    protected HighAvailabilityManager _haMgr;
 +    @Inject
 +    protected HostPodDao _podDao;
 +    @Inject
 +    protected DataCenterDao _dcDao;
 +    @Inject
 +    protected ClusterDao _clusterDao;
 +    @Inject
 +    protected PrimaryDataStoreDao _storagePoolDao;
 +    @Inject
 +    protected HypervisorGuruManager _hvGuruMgr;
 +    @Inject
 +    protected NetworkDao _networkDao;
 +    @Inject
 +    protected StoragePoolHostDao _poolHostDao;
 +    @Inject
 +    protected VMSnapshotDao _vmSnapshotDao;
 +    @Inject
 +    protected RulesManager rulesMgr;
 +    @Inject
 +    protected AffinityGroupVMMapDao _affinityGroupVMMapDao;
 +    @Inject
 +    protected VGPUTypesDao _vgpuTypesDao;
 +    @Inject
 +    protected EntityManager _entityMgr;
 +    @Inject
 +    protected GuestOSCategoryDao _guestOSCategoryDao;
 +    @Inject
 +    protected GuestOSDao _guestOSDao = null;
 +    @Inject
 +    protected UserVmDetailsDao _vmDetailsDao;
 +    @Inject
 +    protected ServiceOfferingDao _serviceOfferingDao = null;
 +    @Inject
 +    protected CAManager caManager;
 +
 +    @Inject
 +    ConfigDepot _configDepot;
 +
 +    protected List<HostAllocator> hostAllocators;
 +
 +    public List<HostAllocator> getHostAllocators() {
 +        return hostAllocators;
 +    }
 +
 +    public void setHostAllocators(final List<HostAllocator> hostAllocators) {
 +        this.hostAllocators = hostAllocators;
 +    }
 +
 +    protected List<StoragePoolAllocator> _storagePoolAllocators;
 +
 +    @Inject
 +    protected ResourceManager _resourceMgr;
 +
 +    @Inject
 +    protected VMSnapshotManager _vmSnapshotMgr = null;
 +    @Inject
 +    protected ClusterDetailsDao _clusterDetailsDao;
 +    @Inject
 +    protected UserVmDetailsDao _uservmDetailsDao;
 +
 +    @Inject
 +    protected ConfigurationDao _configDao;
 +    @Inject
 +    VolumeOrchestrationService volumeMgr;
 +
 +    @Inject
 +    DeploymentPlanningManager _dpMgr;
 +
 +    @Inject
 +    protected MessageBus _messageBus;
 +    @Inject
 +    protected VirtualMachinePowerStateSync _syncMgr;
 +    @Inject
 +    protected VmWorkJobDao _workJobDao;
 +    @Inject
 +    protected AsyncJobManager _jobMgr;
 +
 +    VmWorkJobHandlerProxy _jobHandlerProxy = new VmWorkJobHandlerProxy(this);
 +
 +    Map<VirtualMachine.Type, VirtualMachineGuru> _vmGurus = new HashMap<VirtualMachine.Type, VirtualMachineGuru>();
 +    protected StateMachine2<State, VirtualMachine.Event, VirtualMachine> _stateMachine;
 +
 +    static final ConfigKey<Integer> StartRetry = new ConfigKey<Integer>("Advanced", Integer.class, "start.retry", "10",
 +            "Number of times to retry create and start commands", true);
 +    static final ConfigKey<Integer> VmOpWaitInterval = new ConfigKey<Integer>("Advanced", Integer.class, "vm.op.wait.interval", "120",
 +            "Time (in seconds) to wait before checking if a previous operation has succeeded", true);
 +
 +    static final ConfigKey<Integer> VmOpLockStateRetry = new ConfigKey<Integer>("Advanced", Integer.class, "vm.op.lock.state.retry", "5",
 +            "Times to retry locking the state of a VM for operations, -1 means forever", true);
 +    static final ConfigKey<Long> VmOpCleanupInterval = new ConfigKey<Long>("Advanced", Long.class, "vm.op.cleanup.interval", "86400",
 +            "Interval to run the thread that cleans up the vm operations (in seconds)", false);
 +    static final ConfigKey<Long> VmOpCleanupWait = new ConfigKey<Long>("Advanced", Long.class, "vm.op.cleanup.wait", "3600",
 +            "Time (in seconds) to wait before cleaning up any vm work items", true);
 +    static final ConfigKey<Long> VmOpCancelInterval = new ConfigKey<Long>("Advanced", Long.class, "vm.op.cancel.interval", "3600",
 +            "Time (in seconds) to wait before cancelling an operation", false);
 +    static final ConfigKey<Boolean> VmDestroyForcestop = new ConfigKey<Boolean>("Advanced", Boolean.class, "vm.destroy.forcestop", "false",
 +            "On destroy, force-stop takes this value ", true);
 +    static final ConfigKey<Integer> ClusterDeltaSyncInterval = new ConfigKey<Integer>("Advanced", Integer.class, "sync.interval", "60",
 +            "Cluster Delta sync interval in seconds",
 +            false);
 +    static final ConfigKey<Integer> ClusterVMMetaDataSyncInterval = new ConfigKey<Integer>("Advanced", Integer.class, "vmmetadata.sync.interval", "180", "Cluster VM metadata sync interval in seconds",
 +            false);
 +
 +    static final ConfigKey<Long> VmJobCheckInterval = new ConfigKey<Long>("Advanced",
 +            Long.class, "vm.job.check.interval", "3000",
 +            "Interval in milliseconds to check if the job is complete", false);
 +    static final ConfigKey<Long> VmJobTimeout = new ConfigKey<Long>("Advanced",
 +            Long.class, "vm.job.timeout", "600000",
 +            "Time in milliseconds to wait before attempting to cancel a job", false);
 +    static final ConfigKey<Integer> VmJobStateReportInterval = new ConfigKey<Integer>("Advanced",
 +            Integer.class, "vm.job.report.interval", "60",
 +            "Interval to send application level pings to make sure the connection is still working", false);
 +
 +    static final ConfigKey<Boolean> HaVmRestartHostUp = new ConfigKey<Boolean>("Advanced", Boolean.class, "ha.vm.restart.hostup", "true",
 +            "If an out-of-band stop of a VM is detected and its host is up, then power on the VM", true);
 +
 +    ScheduledExecutorService _executor = null;
 +
 +    protected long _nodeId;
 +
 +    @Override
 +    public void registerGuru(final VirtualMachine.Type type, final VirtualMachineGuru guru) {
 +        synchronized (_vmGurus) {
 +            _vmGurus.put(type, guru);
 +        }
 +    }
 +
 +    @Override
 +    @DB
 +    public void allocate(final String vmInstanceName, final VirtualMachineTemplate template, final ServiceOffering serviceOffering,
 +            final DiskOfferingInfo rootDiskOfferingInfo, final List<DiskOfferingInfo> dataDiskOfferings,
 +            final LinkedHashMap<? extends Network, List<? extends NicProfile>> auxiliaryNetworks, final DeploymentPlan plan, final HypervisorType hyperType, final Map<String, Map<Integer, String>> extraDhcpOptions, final Map<Long, DiskOffering> datadiskTemplateToDiskOfferingMap)
 +                    throws InsufficientCapacityException {
 +
 +        final VMInstanceVO vm = _vmDao.findVMByInstanceName(vmInstanceName);
 +        final Account owner = _entityMgr.findById(Account.class, vm.getAccountId());
 +
 +        if (s_logger.isDebugEnabled()) {
 +            s_logger.debug("Allocating entries for VM: " + vm);
 +        }
 +
 +        vm.setDataCenterId(plan.getDataCenterId());
 +        if (plan.getPodId() != null) {
 +            vm.setPodIdToDeployIn(plan.getPodId());
 +        }
 +        assert plan.getClusterId() == null && plan.getPoolId() == null : "We currently don't support cluster and pool preset yet";
 +        final VMInstanceVO vmFinal = _vmDao.persist(vm);
 +
 +        final VirtualMachineProfileImpl vmProfile = new VirtualMachineProfileImpl(vmFinal, template, serviceOffering, null, null);
 +
 +        Transaction.execute(new TransactionCallbackWithExceptionNoReturn<InsufficientCapacityException>() {
 +            @Override
 +            public void doInTransactionWithoutResult(final TransactionStatus status) throws InsufficientCapacityException {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Allocating nics for " + vmFinal);
 +                }
 +
 +                try {
 +                    if (!vmProfile.getBootArgs().contains("ExternalLoadBalancerVm")) {
 +                        _networkMgr.allocate(vmProfile, auxiliaryNetworks, extraDhcpOptions);
 +                    }
 +                } catch (final ConcurrentOperationException e) {
 +                    throw new CloudRuntimeException("Concurrent operation while trying to allocate resources for the VM", e);
 +                }
 +
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Allocating disks for " + vmFinal);
 +                }
 +
 +                if (template.getFormat() == ImageFormat.ISO) {
 +                    volumeMgr.allocateRawVolume(Type.ROOT, "ROOT-" + vmFinal.getId(), rootDiskOfferingInfo.getDiskOffering(), rootDiskOfferingInfo.getSize(),
 +                            rootDiskOfferingInfo.getMinIops(), rootDiskOfferingInfo.getMaxIops(), vmFinal, template, owner, null);
 +                } else if (template.getFormat() == ImageFormat.BAREMETAL) {
 +                    // Do nothing
 +                } else {
 +                    volumeMgr.allocateTemplatedVolume(Type.ROOT, "ROOT-" + vmFinal.getId(), rootDiskOfferingInfo.getDiskOffering(), rootDiskOfferingInfo.getSize(),
 +                            rootDiskOfferingInfo.getMinIops(), rootDiskOfferingInfo.getMaxIops(), template, vmFinal, owner);
 +                }
 +
 +                if (dataDiskOfferings != null) {
 +                    for (final DiskOfferingInfo dataDiskOfferingInfo : dataDiskOfferings) {
 +                        volumeMgr.allocateRawVolume(Type.DATADISK, "DATA-" + vmFinal.getId(), dataDiskOfferingInfo.getDiskOffering(), dataDiskOfferingInfo.getSize(),
 +                                dataDiskOfferingInfo.getMinIops(), dataDiskOfferingInfo.getMaxIops(), vmFinal, template, owner, null);
 +                    }
 +                }
 +                if (datadiskTemplateToDiskOfferingMap != null && !datadiskTemplateToDiskOfferingMap.isEmpty()) {
 +                    int diskNumber = 1;
 +                    for (Entry<Long, DiskOffering> dataDiskTemplateToDiskOfferingMap : datadiskTemplateToDiskOfferingMap.entrySet()) {
 +                        DiskOffering diskOffering = dataDiskTemplateToDiskOfferingMap.getValue();
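 +                        // getDiskSize() is in bytes; dividing by 1024^3 yields GiB,
 +                        // the unit allocateRawVolume appears to expect (assumption).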
 +                        long diskOfferingSize = diskOffering.getDiskSize() / (1024 * 1024 * 1024);
 +                        VMTemplateVO dataDiskTemplate = _templateDao.findById(dataDiskTemplateToDiskOfferingMap.getKey());
 +                        volumeMgr.allocateRawVolume(Type.DATADISK, "DATA-" + vmFinal.getId() + "-" + String.valueOf(diskNumber), diskOffering, diskOfferingSize, null, null,
 +                                vmFinal, dataDiskTemplate, owner, Long.valueOf(diskNumber));
 +                        diskNumber++;
 +                    }
 +                }
 +            }
 +        });
 +
 +        if (s_logger.isDebugEnabled()) {
 +            s_logger.debug("Allocation completed for VM: " + vmFinal);
 +        }
 +    }
 +
 +    @Override
 +    public void allocate(final String vmInstanceName, final VirtualMachineTemplate template, final ServiceOffering serviceOffering,
 +            final LinkedHashMap<? extends Network, List<? extends NicProfile>> networks, final DeploymentPlan plan, final HypervisorType hyperType) throws InsufficientCapacityException {
 +        allocate(vmInstanceName, template, serviceOffering, new DiskOfferingInfo(serviceOffering), new ArrayList<DiskOfferingInfo>(), networks, plan, hyperType, null, null);
 +    }
 +
 +    private VirtualMachineGuru getVmGuru(final VirtualMachine vm) {
 +        if(vm != null) {
 +            return _vmGurus.get(vm.getType());
 +        }
 +        return null;
 +    }
 +
 +    @Override
 +    public void expunge(final String vmUuid) throws ResourceUnavailableException {
 +        try {
 +            advanceExpunge(vmUuid);
 +        } catch (final OperationTimedoutException e) {
 +            throw new CloudRuntimeException("Operation timed out", e);
 +        } catch (final ConcurrentOperationException e) {
 +            throw new CloudRuntimeException("Concurrent operation ", e);
 +        }
 +    }
 +
 +    @Override
 +    public void advanceExpunge(final String vmUuid) throws ResourceUnavailableException, OperationTimedoutException, ConcurrentOperationException {
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +        advanceExpunge(vm);
 +    }
 +
 +    protected void advanceExpunge(VMInstanceVO vm) throws ResourceUnavailableException, OperationTimedoutException, ConcurrentOperationException {
 +        if (vm == null || vm.getRemoved() != null) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("Unable to find vm or vm is destroyed: " + vm);
 +            }
 +            return;
 +        }
 +
 +        advanceStop(vm.getUuid(), false);
 +        vm = _vmDao.findByUuid(vm.getUuid());
 +
 +        try {
 +            if (!stateTransitTo(vm, VirtualMachine.Event.ExpungeOperation, vm.getHostId())) {
 +                s_logger.debug("Unable to destroy the vm because it is not in the correct state: " + vm);
 +                throw new CloudRuntimeException("Unable to destroy " + vm);
 +
 +            }
 +        } catch (final NoTransitionException e) {
 +            s_logger.debug("Unable to destroy the vm because it is not in the correct state: " + vm);
 +            throw new CloudRuntimeException("Unable to destroy " + vm, e);
 +        }
 +
 +        if (s_logger.isDebugEnabled()) {
 +            s_logger.debug("Destroying vm " + vm);
 +        }
 +
 +        final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
 +
 +        final HypervisorGuru hvGuru = _hvGuruMgr.getGuru(vm.getHypervisorType());
 +
 +        s_logger.debug("Cleaning up NICS");
 +        final List<Command> nicExpungeCommands = hvGuru.finalizeExpungeNics(vm, profile.getNics());
 +        _networkMgr.cleanupNics(profile);
 +
 +        s_logger.debug("Cleaning up hypervisor data structures (ex. SRs in XenServer) for managed storage");
 +
 +        final List<Command> volumeExpungeCommands = hvGuru.finalizeExpungeVolumes(vm);
 +
 +        final Long hostId = vm.getHostId() != null ? vm.getHostId() : vm.getLastHostId();
 +
 +        List<Map<String, String>> targets = getTargets(hostId, vm.getId());
 +
 +        if (volumeExpungeCommands != null && volumeExpungeCommands.size() > 0 && hostId != null) {
 +            final Commands cmds = new Commands(Command.OnError.Stop);
 +
 +            for (final Command volumeExpungeCommand : volumeExpungeCommands) {
 +                cmds.addCommand(volumeExpungeCommand);
 +            }
 +
 +            _agentMgr.send(hostId, cmds);
 +
 +            if (!cmds.isSuccessful()) {
 +                for (final Answer answer : cmds.getAnswers()) {
 +                    if (!answer.getResult()) {
 +                        s_logger.warn("Failed to expunge vm due to: " + answer.getDetails());
 +
 +                        throw new CloudRuntimeException("Unable to expunge " + vm + " due to " + answer.getDetails());
 +                    }
 +                }
 +            }
 +        }
 +
 +        if (hostId != null) {
 +            volumeMgr.revokeAccess(vm.getId(), hostId);
 +        }
 +
 +        // Clean up volumes based on the vm's instance id
 +        volumeMgr.cleanupVolumes(vm.getId());
 +
 +        if (hostId != null && CollectionUtils.isNotEmpty(targets)) {
 +            removeDynamicTargets(hostId, targets);
 +        }
 +
 +        final VirtualMachineGuru guru = getVmGuru(vm);
 +        guru.finalizeExpunge(vm);
 +        // remove the overcommit details from the uservm details
 +        _uservmDetailsDao.removeDetails(vm.getId());
 +
 +        // send hypervisor-dependent commands before removing
 +        final List<Command> finalizeExpungeCommands = hvGuru.finalizeExpunge(vm);
 +        if (finalizeExpungeCommands != null && finalizeExpungeCommands.size() > 0) {
 +            if (hostId != null) {
 +                final Commands cmds = new Commands(Command.OnError.Stop);
 +                for (final Command command : finalizeExpungeCommands) {
 +                    cmds.addCommand(command);
 +                }
 +                if (nicExpungeCommands != null) {
 +                    for (final Command command : nicExpungeCommands) {
 +                        cmds.addCommand(command);
 +                    }
 +                }
 +                _agentMgr.send(hostId, cmds);
 +                if (!cmds.isSuccessful()) {
 +                    for (final Answer answer : cmds.getAnswers()) {
 +                        if (!answer.getResult()) {
 +                            s_logger.warn("Failed to expunge vm due to: " + answer.getDetails());
 +                            throw new CloudRuntimeException("Unable to expunge " + vm + " due to " + answer.getDetails());
 +                        }
 +                    }
 +                }
 +            }
 +        }
 +
 +        if (s_logger.isDebugEnabled()) {
 +            s_logger.debug("Expunged " + vm);
 +        }
 +
 +    }
 +
 +    private List<Map<String, String>> getTargets(Long hostId, long vmId) {
 +        List<Map<String, String>> targets = new ArrayList<>();
 +
 +        HostVO hostVO = _hostDao.findById(hostId);
 +
 +        if (hostVO == null || hostVO.getHypervisorType() != HypervisorType.VMware) {
 +            return targets;
 +        }
 +
 +        List<VolumeVO> volumes = _volsDao.findByInstance(vmId);
 +
 +        if (CollectionUtils.isEmpty(volumes)) {
 +            return targets;
 +        }
 +
 +        for (VolumeVO volume : volumes) {
 +            StoragePoolVO storagePoolVO = _storagePoolDao.findById(volume.getPoolId());
 +
 +            if (storagePoolVO != null && storagePoolVO.isManaged()) {
 +                Map<String, String> target = new HashMap<>();
 +
 +                target.put(ModifyTargetsCommand.STORAGE_HOST, storagePoolVO.getHostAddress());
 +                target.put(ModifyTargetsCommand.STORAGE_PORT, String.valueOf(storagePoolVO.getPort()));
 +                target.put(ModifyTargetsCommand.IQN, volume.get_iScsiName());
 +
 +                targets.add(target);
 +            }
 +        }
 +
 +        return targets;
 +    }
 +
 +    private void removeDynamicTargets(long hostId, List<Map<String, String>> targets) {
 +        ModifyTargetsCommand cmd = new ModifyTargetsCommand();
 +
 +        cmd.setTargets(targets);
 +        cmd.setApplyToAllHostsInCluster(true);
 +        cmd.setAdd(false);
 +        cmd.setTargetTypeToRemove(ModifyTargetsCommand.TargetTypeToRemove.DYNAMIC);
 +
 +        sendModifyTargetsCommand(cmd, hostId);
 +    }
 +
 +    private void sendModifyTargetsCommand(ModifyTargetsCommand cmd, long hostId) {
 +        Answer answer = _agentMgr.easySend(hostId, cmd);
 +
 +        if (answer == null) {
 +            String msg = "Unable to get an answer to the modify targets command";
 +
 +            s_logger.warn(msg);
 +        }
 +        else if (!answer.getResult()) {
 +            String msg = "Unable to modify target on the following host: " + hostId;
 +
 +            s_logger.warn(msg);
 +        }
 +    }
 +
 +    @Override
 +    public boolean start() {
 +        // TODO, initial delay is hardcoded
 +        _executor.scheduleAtFixedRate(new CleanupTask(), 5, VmJobStateReportInterval.value(), TimeUnit.SECONDS);
 +        _executor.scheduleAtFixedRate(new TransitionTask(),  VmOpCleanupInterval.value(), VmOpCleanupInterval.value(), TimeUnit.SECONDS);
 +        cancelWorkItems(_nodeId);
 +
 +        volumeMgr.cleanupStorageJobs();
 +        // cleanup left over place holder works
 +        _workJobDao.expungeLeftoverWorkJobs(ManagementServerNode.getManagementServerId());
 +        return true;
 +    }
 +
 +    @Override
 +    public boolean stop() {
 +        return true;
 +    }
 +
 +    @Override
 +    public boolean configure(final String name, final Map<String, Object> xmlParams) throws ConfigurationException {
 +        ReservationContextImpl.init(_entityMgr);
 +        VirtualMachineProfileImpl.init(_entityMgr);
 +        VmWorkMigrate.init(_entityMgr);
 +
 +        _executor = Executors.newScheduledThreadPool(1, new NamedThreadFactory("Vm-Operations-Cleanup"));
 +        _nodeId = ManagementServerNode.getManagementServerId();
 +
 +        _agentMgr.registerForHostEvents(this, true, true, true);
 +
 +        _messageBus.subscribe(VirtualMachineManager.Topics.VM_POWER_STATE, MessageDispatcher.getDispatcher(this));
 +
 +        return true;
 +    }
 +
 +    protected VirtualMachineManagerImpl() {
 +        setStateMachine();
 +    }
 +
 +    @Override
 +    public void start(final String vmUuid, final Map<VirtualMachineProfile.Param, Object> params) {
 +        start(vmUuid, params, null, null);
 +    }
 +
 +    @Override
 +    public void start(final String vmUuid, final Map<VirtualMachineProfile.Param, Object> params, final DeploymentPlan planToDeploy, final DeploymentPlanner planner) {
 +        try {
 +            advanceStart(vmUuid, params, planToDeploy, planner);
 +        } catch (final ConcurrentOperationException e) {
 +            throw new CloudRuntimeException("Unable to start a VM due to concurrent operation", e).add(VirtualMachine.class, vmUuid);
 +        } catch (final InsufficientCapacityException e) {
 +            throw new CloudRuntimeException("Unable to start a VM due to insufficient capacity", e).add(VirtualMachine.class, vmUuid);
 +        } catch (final ResourceUnavailableException e) {
 +            if(e.getScope() != null && e.getScope().equals(VirtualRouter.class)){
 +                throw new CloudRuntimeException("Network is unavailable. Please contact administrator", e).add(VirtualMachine.class, vmUuid);
 +            }
 +            throw new CloudRuntimeException("Unable to start a VM due to unavailable resources", e).add(VirtualMachine.class, vmUuid);
 +        }
 +
 +    }
 +
 +    protected boolean checkWorkItems(final VMInstanceVO vm, final State state) throws ConcurrentOperationException {
 +        while (true) {
 +            final ItWorkVO vo = _workDao.findByOutstandingWork(vm.getId(), state);
 +            if (vo == null) {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Unable to find work for VM: " + vm + " and state: " + state);
 +                }
 +                return true;
 +            }
 +
 +            if (vo.getStep() == Step.Done) {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Work for " + vm + " is " + vo.getStep());
 +                }
 +                return true;
 +            }
 +
 +            // Also check the DB for the latest VM state, so an update from a concurrent process is detected before idle-waiting and we can exit early
 +            final VMInstanceVO instance = _vmDao.findById(vm.getId());
 +            if (instance != null && instance.getState() == State.Running) {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("VM is already started in DB: " + vm);
 +                }
 +                return true;
 +            }
 +
 +            if (vo.getSecondsTaskIsInactive() > VmOpCancelInterval.value()) {
 +                s_logger.warn("The task item for vm " + vm + " has been inactive for " + vo.getSecondsTaskIsInactive() + " seconds");
 +                return false;
 +            }
 +
 +            try {
 +                Thread.sleep(VmOpWaitInterval.value()*1000);
 +            } catch (final InterruptedException e) {
 +                s_logger.info("Waiting for " + vm + " but was interrupted");
 +                throw new ConcurrentOperationException("Waiting for " + vm + " but was interrupted");
 +            }
 +            s_logger.debug("Waiting some more to make sure there's no activity on " + vm);
 +        }
 +
 +    }
 +
 +    @DB
 +    protected Ternary<VMInstanceVO, ReservationContext, ItWorkVO> changeToStartState(final VirtualMachineGuru vmGuru, final VMInstanceVO vm, final User caller,
 +            final Account account) throws ConcurrentOperationException {
 +        final long vmId = vm.getId();
 +
 +        ItWorkVO work = new ItWorkVO(UUID.randomUUID().toString(), _nodeId, State.Starting, vm.getType(), vm.getId());
 +        int retry = VmOpLockStateRetry.value();
 +        while (retry-- != 0) {
 +            try {
 +                final ItWorkVO workFinal = work;
 +                final Ternary<VMInstanceVO, ReservationContext, ItWorkVO> result =
 +                        Transaction.execute(new TransactionCallbackWithException<Ternary<VMInstanceVO, ReservationContext, ItWorkVO>, NoTransitionException>() {
 +                            @Override
 +                            public Ternary<VMInstanceVO, ReservationContext, ItWorkVO> doInTransaction(final TransactionStatus status) throws NoTransitionException {
 +                                final Journal journal = new Journal.LogJournal("Creating " + vm, s_logger);
 +                                final ItWorkVO work = _workDao.persist(workFinal);
 +                                final ReservationContextImpl context = new ReservationContextImpl(work.getId(), journal, caller, account);
 +
 +                                if (stateTransitTo(vm, Event.StartRequested, null, work.getId())) {
 +                                    if (s_logger.isDebugEnabled()) {
 +                                        s_logger.debug("Successfully transitioned to start state for " + vm + " reservation id = " + work.getId());
 +                                    }
 +                                    return new Ternary<VMInstanceVO, ReservationContext, ItWorkVO>(vm, context, work);
 +                                }
 +
 +                                return new Ternary<VMInstanceVO, ReservationContext, ItWorkVO>(null, null, work);
 +                            }
 +                        });
 +
 +                work = result.third();
 +                if (result.first() != null) {
 +                    return result;
 +                }
 +            } catch (final NoTransitionException e) {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Unable to transition into Starting state due to " + e.getMessage());
 +                }
 +            }
 +
 +            final VMInstanceVO instance = _vmDao.findById(vmId);
 +            if (instance == null) {
 +                throw new ConcurrentOperationException("Unable to acquire lock on " + vm);
 +            }
 +
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("Determining why we're unable to update the state to Starting for " + instance + ".  Retry=" + retry);
 +            }
 +
 +            final State state = instance.getState();
 +            if (state == State.Running) {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("VM is already started: " + vm);
 +                }
 +                return null;
 +            }
 +
 +            if (state.isTransitional()) {
 +                if (!checkWorkItems(vm, state)) {
 +                    throw new ConcurrentOperationException("There are concurrent operations on " + vm);
 +                } else {
 +                    continue;
 +                }
 +            }
 +
 +            if (state != State.Stopped) {
 +                s_logger.debug("VM " + vm + " is not in a state to be started: " + state);
 +                return null;
 +            }
 +        }
 +
 +        throw new ConcurrentOperationException("Unable to change the state of " + vm);
 +    }
 +
 +    protected <T extends VMInstanceVO> boolean changeState(final T vm, final Event event, final Long hostId, final ItWorkVO work, final Step step) throws NoTransitionException {
 +        // FIXME: We should do this better.
 +        Step previousStep = null;
 +        if (work != null) {
 +            previousStep = work.getStep();
 +            _workDao.updateStep(work, step);
 +        }
 +        boolean result = false;
 +        try {
 +            result = stateTransitTo(vm, event, hostId);
 +            return result;
 +        } finally {
 +            if (!result && work != null) {
 +                _workDao.updateStep(work, previousStep);
 +            }
 +        }
 +    }
 +
 +    protected boolean areAffinityGroupsAssociated(final VirtualMachineProfile vmProfile) {
 +        final VirtualMachine vm = vmProfile.getVirtualMachine();
 +        final long vmGroupCount = _affinityGroupVMMapDao.countAffinityGroupsForVm(vm.getId());
 +
 +        if (vmGroupCount > 0) {
 +            return true;
 +        }
 +        return false;
 +    }
 +
 +    @Override
 +    public void advanceStart(final String vmUuid, final Map<VirtualMachineProfile.Param, Object> params, final DeploymentPlanner planner)
 +            throws InsufficientCapacityException, ConcurrentOperationException, ResourceUnavailableException {
 +        advanceStart(vmUuid, params, null, planner);
 +    }
 +
 +    @Override
 +    public void advanceStart(final String vmUuid, final Map<VirtualMachineProfile.Param, Object> params, final DeploymentPlan planToDeploy, final DeploymentPlanner planner)
 +            throws InsufficientCapacityException, ConcurrentOperationException, ResourceUnavailableException {
 +
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if ( jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +            VmWorkJobVO placeHolder = null;
 +            final VirtualMachine vm = _vmDao.findByUuid(vmUuid);
 +            placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                orchestrateStart(vmUuid, params, planToDeploy, planner);
 +            } finally {
 +                if (placeHolder != null) {
 +                    _workJobDao.expunge(placeHolder.getId());
 +                }
 +            }
 +        } else {
 +            final Outcome<VirtualMachine> outcome = startVmThroughJobQueue(vmUuid, params, planToDeploy, planner);
 +
 +            try {
 +                final VirtualMachine vm = outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution exception", e);
 +            }
 +
 +            final Object jobResult = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobResult != null) {
 +                if (jobResult instanceof ConcurrentOperationException) {
 +                    throw (ConcurrentOperationException)jobResult;
 +                } else if (jobResult instanceof ResourceUnavailableException) {
 +                    throw (ResourceUnavailableException)jobResult;
 +                } else if (jobResult instanceof InsufficientCapacityException) {
 +                    throw (InsufficientCapacityException)jobResult;
 +                } else if (jobResult instanceof RuntimeException) {
 +                    throw (RuntimeException)jobResult;
 +                } else if (jobResult instanceof Throwable) {
 +                    throw new RuntimeException("Unexpected exception", (Throwable)jobResult);
 +                }
 +            }
 +        }
 +    }
 +
 +    private void setupAgentSecurity(final Host vmHost, final Map<String, String> sshAccessDetails, final VirtualMachine vm) throws AgentUnavailableException, OperationTimedoutException {
 +        final String csr = caManager.generateKeyStoreAndCsr(vmHost, sshAccessDetails);
 +        if (!Strings.isNullOrEmpty(csr)) {
 +            final Map<String, String> ipAddressDetails = new HashMap<>(sshAccessDetails);
 +            ipAddressDetails.remove(NetworkElementCommand.ROUTER_NAME);
 +            final Certificate certificate = caManager.issueCertificate(csr, Arrays.asList(vm.getHostName(), vm.getInstanceName()),
 +                    new ArrayList<>(ipAddressDetails.values()), CAManager.CertValidityPeriod.value(), null);
 +            final boolean result = caManager.deployCertificate(vmHost, certificate, false, sshAccessDetails);
 +            if (!result) {
 +                s_logger.error("Failed to setup certificate for system vm: " + vm.getInstanceName());
 +            }
 +        } else {
 +            s_logger.error("Failed to setup keystore and generate CSR for system vm: " + vm.getInstanceName());
 +        }
 +    }
 +
 +    @Override
 +    public void orchestrateStart(final String vmUuid, final Map<VirtualMachineProfile.Param, Object> params, final DeploymentPlan planToDeploy, final DeploymentPlanner planner)
 +            throws InsufficientCapacityException, ConcurrentOperationException, ResourceUnavailableException {
 +
 +        final CallContext cctxt = CallContext.current();
 +        final Account account = cctxt.getCallingAccount();
 +        final User caller = cctxt.getCallingUser();
 +
 +        VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final VirtualMachineGuru vmGuru = getVmGuru(vm);
 +
 +        final Ternary<VMInstanceVO, ReservationContext, ItWorkVO> start = changeToStartState(vmGuru, vm, caller, account);
 +        if (start == null) {
 +            return;
 +        }
 +
 +        vm = start.first();
 +        final ReservationContext ctx = start.second();
 +        ItWorkVO work = start.third();
 +
 +        VMInstanceVO startedVm = null;
 +        final ServiceOfferingVO offering = _offeringDao.findById(vm.getId(), vm.getServiceOfferingId());
 +        final VirtualMachineTemplate template = _entityMgr.findByIdIncludingRemoved(VirtualMachineTemplate.class, vm.getTemplateId());
 +
 +        DataCenterDeployment plan = new DataCenterDeployment(vm.getDataCenterId(), vm.getPodIdToDeployIn(), null, null, null, null, ctx);
 +        if (planToDeploy != null && planToDeploy.getDataCenterId() != 0) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("advanceStart: DeploymentPlan is provided, using dcId:" + planToDeploy.getDataCenterId() + ", podId: " + planToDeploy.getPodId() +
 +                        ", clusterId: " + planToDeploy.getClusterId() + ", hostId: " + planToDeploy.getHostId() + ", poolId: " + planToDeploy.getPoolId());
 +            }
 +            plan =
 +                    new DataCenterDeployment(planToDeploy.getDataCenterId(), planToDeploy.getPodId(), planToDeploy.getClusterId(), planToDeploy.getHostId(),
 +                            planToDeploy.getPoolId(), planToDeploy.getPhysicalNetworkId(), ctx);
 +        }
 +
 +        final HypervisorGuru hvGuru = _hvGuruMgr.getGuru(vm.getHypervisorType());
 +
 +        boolean canRetry = true;
 +        ExcludeList avoids = null;
 +        try {
 +            final Journal journal = start.second().getJournal();
 +
 +            if (planToDeploy != null) {
 +                avoids = planToDeploy.getAvoids();
 +            }
 +            if (avoids == null) {
 +                avoids = new ExcludeList();
 +            }
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("Deploy avoids pods: " + avoids.getPodsToAvoid() + ", clusters: " + avoids.getClustersToAvoid() + ", hosts: " + avoids.getHostsToAvoid());
 +            }
 +
 +            boolean planChangedByVolume = false;
 +            boolean reuseVolume = true;
 +            final DataCenterDeployment originalPlan = plan;
 +
 +            int retry = StartRetry.value();
 +            while (retry-- != 0) { // It's != so that it can match -1.
 +
 +                if (reuseVolume) {
 +                    // edit plan if this vm's ROOT volume is in READY state already
 +                    final List<VolumeVO> vols = _volsDao.findReadyRootVolumesByInstance(vm.getId());
 +                    for (final VolumeVO vol : vols) {
 +                        // make sure the templateId is unchanged; if it has changed,
 +                        // let the planner reassign a pool for the volume even if it is ready.
 +                        final Long volTemplateId = vol.getTemplateId();
 +                        if (volTemplateId != null && volTemplateId.longValue() != template.getId()) {
 +                            if (s_logger.isDebugEnabled()) {
 +                                s_logger.debug(vol + " of " + vm + " is READY, but template ids don't match, let the planner reassign a new pool");
 +                            }
 +                            continue;
 +                        }
 +
 +                        final StoragePool pool = (StoragePool)dataStoreMgr.getPrimaryDataStore(vol.getPoolId());
 +                        if (!pool.isInMaintenance()) {
 +                            if (s_logger.isDebugEnabled()) {
 +                                s_logger.debug("Root volume is ready, need to place VM in volume's cluster");
 +                            }
 +                            final long rootVolDcId = pool.getDataCenterId();
 +                            final Long rootVolPodId = pool.getPodId();
 +                            final Long rootVolClusterId = pool.getClusterId();
 +                            if (planToDeploy != null && planToDeploy.getDataCenterId() != 0) {
 +                                final Long clusterIdSpecified = planToDeploy.getClusterId();
 +                                if (clusterIdSpecified != null && rootVolClusterId != null) {
 +                                    if (rootVolClusterId.longValue() != clusterIdSpecified.longValue()) {
 +                                        // cannot satisfy the plan passed in to the
 +                                        // planner
 +                                        if (s_logger.isDebugEnabled()) {
 +                                            s_logger.debug("Cannot satisfy the deployment plan passed in since the ready Root volume is in different cluster. volume's cluster: " +
 +                                                    rootVolClusterId + ", cluster specified: " + clusterIdSpecified);
 +                                        }
 +                                        throw new ResourceUnavailableException(
 +                                                "Root volume is ready in different cluster, Deployment plan provided cannot be satisfied, unable to create a deployment for " +
 +                                                        vm, Cluster.class, clusterIdSpecified);
 +                                    }
 +                                }
 +                                plan =
 +                                        new DataCenterDeployment(planToDeploy.getDataCenterId(), planToDeploy.getPodId(), planToDeploy.getClusterId(),
 +                                                planToDeploy.getHostId(), vol.getPoolId(), null, ctx);
 +                            } else {
 +                                plan = new DataCenterDeployment(rootVolDcId, rootVolPodId, rootVolClusterId, null, vol.getPoolId(), null, ctx);
 +                                if (s_logger.isDebugEnabled()) {
 +                                    s_logger.debug(vol + " is READY, changing deployment plan to use this pool's dcId: " + rootVolDcId + " , podId: " + rootVolPodId +
 +                                            " , and clusterId: " + rootVolClusterId);
 +                                }
 +                                planChangedByVolume = true;
 +                            }
 +                        }
 +                    }
 +                }
 +
 +                final Account owner = _entityMgr.findById(Account.class, vm.getAccountId());
 +                final VirtualMachineProfileImpl vmProfile = new VirtualMachineProfileImpl(vm, template, offering, owner, params);
 +                DeployDestination dest = null;
 +                try {
 +                    dest = _dpMgr.planDeployment(vmProfile, plan, avoids, planner);
 +                } catch (final AffinityConflictException e2) {
 +                    s_logger.warn("Unable to create deployment, affinity rules associted to the VM conflict", e2);
 +                    throw new CloudRuntimeException("Unable to create deployment, affinity rules associted to the VM conflict");
 +
 +                }
 +
 +                if (dest == null) {
 +                    if (planChangedByVolume) {
 +                        plan = originalPlan;
 +                        planChangedByVolume = false;
 +                        //do not enter volume reuse for next retry, since we want to look for resources outside the volume's cluster
 +                        reuseVolume = false;
 +                        continue;
 +                    }
 +                    throw new InsufficientServerCapacityException("Unable to create a deployment for " + vmProfile, DataCenter.class, plan.getDataCenterId(),
 +                            areAffinityGroupsAssociated(vmProfile));
 +                }
 +
 +                // dest is guaranteed non-null here: the null case above either retries or throws
 +                avoids.addHost(dest.getHost().getId());
 +                journal.record("Deployment found ", vmProfile, dest);
 +
 +                long destHostId = dest.getHost().getId();
 +                vm.setPodIdToDeployIn(dest.getPod().getId());
 +                final Long cluster_id = dest.getCluster().getId();
 +                final ClusterDetailsVO cluster_detail_cpu = _clusterDetailsDao.findDetail(cluster_id, "cpuOvercommitRatio");
 +                final ClusterDetailsVO cluster_detail_ram = _clusterDetailsDao.findDetail(cluster_id, "memoryOvercommitRatio");
 +                //storing the value of overcommit in the vm_details table for doing a capacity check in case the cluster overcommit ratio is changed.
 +                if (_uservmDetailsDao.findDetail(vm.getId(), "cpuOvercommitRatio") == null &&
 +                        (Float.parseFloat(cluster_detail_cpu.getValue()) > 1f || Float.parseFloat(cluster_detail_ram.getValue()) > 1f)) {
 +                    _uservmDetailsDao.addDetail(vm.getId(), "cpuOvercommitRatio", cluster_detail_cpu.getValue(), true);
 +                    _uservmDetailsDao.addDetail(vm.getId(), "memoryOvercommitRatio", cluster_detail_ram.getValue(), true);
 +                } else if (_uservmDetailsDao.findDetail(vm.getId(), "cpuOvercommitRatio") != null) {
 +                    _uservmDetailsDao.addDetail(vm.getId(), "cpuOvercommitRatio", cluster_detail_cpu.getValue(), true);
 +                    _uservmDetailsDao.addDetail(vm.getId(), "memoryOvercommitRatio", cluster_detail_ram.getValue(), true);
 +                }
 +
 +                vmProfile.setCpuOvercommitRatio(Float.parseFloat(cluster_detail_cpu.getValue()));
 +                vmProfile.setMemoryOvercommitRatio(Float.parseFloat(cluster_detail_ram.getValue()));
 +                StartAnswer startAnswer = null;
 +
 +                try {
 +                    if (!changeState(vm, Event.OperationRetry, destHostId, work, Step.Prepare)) {
 +                        throw new ConcurrentOperationException("Unable to update the state of the Virtual Machine "+vm.getUuid()+" oldstate: "+vm.getState()+ "Event :"+Event.OperationRetry);
 +                    }
 +                } catch (final NoTransitionException e1) {
 +                    throw new ConcurrentOperationException(e1.getMessage());
 +                }
 +
 +                try {
-                     _networkMgr.prepare(vmProfile, new DeployDestination(dest.getDataCenter(), dest.getPod(), null, null), ctx);
++                    _networkMgr.prepare(vmProfile, new DeployDestination(dest.getDataCenter(), dest.getPod(), null, null, dest.getStorageForDisks()), ctx);
 +                    if (vm.getHypervisorType() != HypervisorType.BareMetal) {
 +                        volumeMgr.prepare(vmProfile, dest);
 +                    }
++
 +                    //since StorageMgr succeeded in volume creation, reuse Volume for further tries until current cluster has capacity
 +                    if (!reuseVolume) {
 +                        reuseVolume = true;
 +                    }
 +
 +                    Commands cmds = null;
 +                    vmGuru.finalizeVirtualMachineProfile(vmProfile, dest, ctx);
 +
 +                    final VirtualMachineTO vmTO = hvGuru.implement(vmProfile);
 +
 +                    handlePath(vmTO.getDisks(), vm.getHypervisorType());
 +
 +                    cmds = new Commands(Command.OnError.Stop);
 +
 +                    cmds.addCommand(new StartCommand(vmTO, dest.getHost(), getExecuteInSequence(vm.getHypervisorType())));
 +
 +                    vmGuru.finalizeDeployment(cmds, vmProfile, dest, ctx);
 +
 +                    work = _workDao.findById(work.getId());
 +                    if (work == null || work.getStep() != Step.Prepare) {
 +                        throw new ConcurrentOperationException("Work steps have been changed: " + work);
 +                    }
 +
 +                    _workDao.updateStep(work, Step.Starting);
 +
 +                    _agentMgr.send(destHostId, cmds);
 +
 +                    _workDao.updateStep(work, Step.Started);
 +
 +                    startAnswer = cmds.getAnswer(StartAnswer.class);
 +                    if (startAnswer != null && startAnswer.getResult()) {
 +                        handlePath(vmTO.getDisks(), startAnswer.getIqnToData());
 +
 +                        final String host_guid = startAnswer.getHost_guid();
 +
 +                        if (host_guid != null) {
 +                            final HostVO finalHost = _resourceMgr.findHostByGuid(host_guid);
 +                            if (finalHost == null) {
 +                                throw new CloudRuntimeException("Host Guid " + host_guid + " doesn't exist in DB, something went wrong while processing start answer: "+startAnswer);
 +                            }
 +                            destHostId = finalHost.getId();
 +                        }
 +                        if (vmGuru.finalizeStart(vmProfile, destHostId, cmds, ctx)) {
 +                            syncDiskChainChange(startAnswer);
 +
 +                            if (!changeState(vm, Event.OperationSucceeded, destHostId, work, Step.Done)) {
 +                                s_logger.error("Unable to transition to a new state. VM uuid: " + vm.getUuid() + ", VM old state: " + vm.getState() + ", event: " + Event.OperationSucceeded);
 +                                throw new ConcurrentOperationException("Failed to deploy VM " + vm.getUuid());
 +                            }
 +
 +                            // Update GPU device capacity
 +                            final GPUDeviceTO gpuDevice = startAnswer.getVirtualMachine().getGpuDevice();
 +                            if (gpuDevice != null) {
 +                                _resourceMgr.updateGPUDetails(destHostId, gpuDevice.getGroupDetails());
 +                            }
 +
 +                            // Remove the information on whether it was a deploy vm request. The deployvm=true information
 +                            // is set only when the vm is being deployed. When a vm is started from a stop state the
 +                            // information isn't set.
 +                            if (_uservmDetailsDao.findDetail(vm.getId(), "deployvm") != null) {
 +                                _uservmDetailsDao.removeDetail(vm.getId(), "deployvm");
 +                            }
 +
 +                            startedVm = vm;
 +                            if (s_logger.isDebugEnabled()) {
 +                                s_logger.debug("Start completed for VM " + vm);
 +                            }
 +                            final Host vmHost = _hostDao.findById(destHostId);
 +                            if (vmHost != null && (VirtualMachine.Type.ConsoleProxy.equals(vm.getType()) ||
 +                                    VirtualMachine.Type.SecondaryStorageVm.equals(vm.getType())) && caManager.canProvisionCertificates()) {
 +                                final Map<String, String> sshAccessDetails = _networkMgr.getSystemVMAccessDetails(vm);
 +                                for (int retries = 3; retries > 0; retries--) {
 +                                    try {
 +                                        setupAgentSecurity(vmHost, sshAccessDetails, vm);
 +                                        return;
 +                                    } catch (final Exception e) {
 +                                        s_logger.error("Retrying after catching exception while trying to secure agent for systemvm id=" + vm.getId(), e);
 +                                    }
 +                                }
 +                                throw new CloudRuntimeException("Failed to setup and secure agent for systemvm id=" + vm.getId());
 +                            }
 +                            return;
 +                        } else {
 +                            if (s_logger.isDebugEnabled()) {
 +                                s_logger.debug("The guru did not like the answers so stopping " + vm);
 +                            }
 +                            StopCommand stopCmd = new StopCommand(vm, getExecuteInSequence(vm.getHypervisorType()), false);
 +                            stopCmd.setControlIp(getControlNicIpForVM(vm));
 +                            final StopCommand cmd = stopCmd;
 +                            final Answer answer = _agentMgr.easySend(destHostId, cmd);
 +                            if (answer != null && answer instanceof StopAnswer) {
 +                                final StopAnswer stopAns = (StopAnswer)answer;
 +                                if (vm.getType() == VirtualMachine.Type.User) {
 +                                    final String platform = stopAns.getPlatform();
 +                                    if (platform != null) {
 +                                        final Map<String,String> vmmetadata = new HashMap<String,String>();
 +                                        vmmetadata.put(vm.getInstanceName(), platform);
 +                                        syncVMMetaData(vmmetadata);
 +                                    }
 +                                }
 +                            }
 +
 +                            if (answer == null || !answer.getResult()) {
 +                                s_logger.warn("Unable to stop " + vm + " due to " + (answer != null ? answer.getDetails() : "no answers"));
 +                                _haMgr.scheduleStop(vm, destHostId, WorkType.ForceStop);
 +                                throw new ExecutionException("Unable to stop this VM, " + vm.getUuid() + ", so we are unable to retry the start operation");
 +                            }
 +                            throw new ExecutionException("Unable to start VM: " + vm.getUuid() + " due to error in finalizeStart, not retrying");
 +                        }
 +                    }
 +                    s_logger.info("Unable to start VM on " + dest.getHost() + " due to " + (startAnswer == null ? " no start answer" : startAnswer.getDetails()));
 +                    if (startAnswer != null && startAnswer.getContextParam("stopRetry") != null) {
 +                        break;
 +                    }
 +
 +                } catch (OperationTimedoutException e) {
 +                    s_logger.debug("Unable to send the start command to host " + dest.getHost()+" failed to start VM: "+vm.getUuid());
 +                    if (e.isActive()) {
 +                        _haMgr.scheduleStop(vm, destHostId, WorkType.CheckStop);
 +                    }
 +                    canRetry = false;
 +                    throw new AgentUnavailableException("Unable to start " + vm.getHostName(), destHostId, e);
 +                } catch (final ResourceUnavailableException e) {
 +                    s_logger.info("Unable to contact resource.", e);
 +                    if (!avoids.add(e)) {
 +                        if (e.getScope() == Volume.class || e.getScope() == Nic.class) {
 +                            throw e;
 +                        } else {
 +                            s_logger.warn("unexpected ResourceUnavailableException : " + e.getScope().getName(), e);
 +                            throw e;
 +                        }
 +                    }
 +                } catch (final InsufficientCapacityException e) {
 +                    s_logger.info("Insufficient capacity ", e);
 +                    if (!avoids.add(e)) {
 +                        if (e.getScope() == Volume.class || e.getScope() == Nic.class) {
 +                            throw e;
 +                        } else {
 +                            s_logger.warn("unexpected InsufficientCapacityException : " + e.getScope().getName(), e);
 +                        }
 +                    }
 +                } catch (final ExecutionException e) {
 +                    s_logger.error("Failed to start instance " + vm, e);
 +                    throw new AgentUnavailableException("Unable to start instance due to " + e.getMessage(), destHostId, e);
 +                } catch (final NoTransitionException e) {
 +                    s_logger.error("Failed to start instance " + vm, e);
 +                    throw new AgentUnavailableException("Unable to start instance due to " + e.getMessage(), destHostId, e);
 +                } finally {
 +                    if (startedVm == null && canRetry) {
 +                        final Step prevStep = work.getStep();
 +                        _workDao.updateStep(work, Step.Release);
 +                        // If the previous step was Started or Starting and we got a valid answer
 +                        if ((prevStep == Step.Started || prevStep == Step.Starting) && startAnswer != null && startAnswer.getResult()) {  //TODO check the response of cleanup and record it in DB for retry
 +                            cleanup(vmGuru, vmProfile, work, Event.OperationFailed, false);
 +                        } else {
 +                            //if step is not starting/started, send cleanup command with force=true
 +                            cleanup(vmGuru, vmProfile, work, Event.OperationFailed, true);
 +                        }
 +                    }
 +                }
 +            }
 +        } finally {
 +            if (startedVm == null) {
 +                if (canRetry) {
 +                    try {
 +                        changeState(vm, Event.OperationFailed, null, work, Step.Done);
 +                    } catch (final NoTransitionException e) {
 +                        throw new ConcurrentOperationException(e.getMessage());
 +                    }
 +                }
 +            }
 +
 +            if (planToDeploy != null) {
 +                planToDeploy.setAvoids(avoids);
 +            }
 +        }
 +
 +        if (startedVm == null) {
 +            throw new CloudRuntimeException("Unable to start instance '" + vm.getHostName() + "' (" + vm.getUuid() + "), see management server log for details");
 +        }
 +    }
 +
 +    // for managed storage on KVM, need to make sure the path field of the volume in question is populated with the IQN
 +    private void handlePath(final DiskTO[] disks, final HypervisorType hypervisorType) {
 +        if (hypervisorType != HypervisorType.KVM) {
 +            return;
 +        }
 +
 +        if (disks != null) {
 +            for (final DiskTO disk : disks) {
 +                final Map<String, String> details = disk.getDetails();
 +                final boolean isManaged = details != null && Boolean.parseBoolean(details.get(DiskTO.MANAGED));
 +
 +                if (isManaged && disk.getPath() == null) {
 +                    final Long volumeId = disk.getData().getId();
 +                    final VolumeVO volume = _volsDao.findById(volumeId);
 +
 +                    disk.setPath(volume.get_iScsiName());
 +
 +                    if (disk.getData() instanceof VolumeObjectTO) {
 +                        final VolumeObjectTO volTo = (VolumeObjectTO)disk.getData();
 +
 +                        volTo.setPath(volume.get_iScsiName());
 +                    }
 +
 +                    volume.setPath(volume.get_iScsiName());
 +
 +                    _volsDao.update(volumeId, volume);
 +                }
 +            }
 +        }
 +    }
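For illustration, the KVM rule in the hunk above reduces to: when a disk is flagged managed and has no path yet, use the volume's IQN as its path. A minimal standalone sketch of that rule, with Disk as a simplified, hypothetical stand-in for DiskTO/VolumeVO (not the CloudStack classes):

    import java.util.HashMap;
    import java.util.Map;

    // Simplified stand-in for DiskTO plus the bits of VolumeVO used above.
    class Disk {
        Map<String, String> details = new HashMap<>();
        String path;
    }

    class IqnPathSketch {
        static final String MANAGED = "managed"; // assumed key name, mirroring DiskTO.MANAGED

        // Populate the path from the volume's IQN only for managed disks that lack one.
        static void handlePath(Disk disk, String iScsiName) {
            final boolean isManaged = disk.details != null && Boolean.parseBoolean(disk.details.get(MANAGED));
            if (isManaged && disk.path == null) {
                disk.path = iScsiName;
            }
        }

        public static void main(String[] args) {
            final Disk disk = new Disk();
            disk.details.put(MANAGED, "true");
            handlePath(disk, "iqn.2018-05.org.example:target0");
            System.out.println(disk.path); // prints the IQN
        }
    }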
 +
 +    // for managed storage on XenServer and VMware, need to update the DB with a path if the VDI/VMDK file was newly created
 +    private void handlePath(final DiskTO[] disks, final Map<String, Map<String, String>> iqnToData) {
 +        if (disks != null && iqnToData != null) {
 +            for (final DiskTO disk : disks) {
 +                final Map<String, String> details = disk.getDetails();
 +                final boolean isManaged = details != null && Boolean.parseBoolean(details.get(DiskTO.MANAGED));
 +
 +                if (isManaged) {
 +                    final Long volumeId = disk.getData().getId();
 +                    final VolumeVO volume = _volsDao.findById(volumeId);
 +                    final String iScsiName = volume.get_iScsiName();
 +
 +                    boolean update = false;
 +
 +                    final Map<String, String> data = iqnToData.get(iScsiName);
 +
 +                    if (data != null) {
 +                        final String path = data.get(StartAnswer.PATH);
 +
 +                        if (path != null) {
 +                            volume.setPath(path);
 +
 +                            update = true;
 +                        }
 +
 +                        final String imageFormat = data.get(StartAnswer.IMAGE_FORMAT);
 +
 +                        if (imageFormat != null) {
 +                            volume.setFormat(ImageFormat.valueOf(imageFormat));
 +
 +                            update = true;
 +                        }
 +
 +                        if (update) {
 +                            _volsDao.update(volumeId, volume);
 +                        }
 +                    }
 +                }
 +            }
 +        }
 +    }
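The XenServer/VMware variant above is driven by a per-IQN map from the start answer, whose entries may carry a new path and/or image format (StartAnswer.PATH and StartAnswer.IMAGE_FORMAT in the code). A sketch of that lookup-and-update step, assuming literal "path"/"imageFormat" keys in place of the StartAnswer constants and a plain Map in place of the volume row:

    import java.util.Map;

    class IqnDataSketch {
        // Returns true when the caller should persist the volume (something changed).
        static boolean applyAnswerData(Map<String, Map<String, String>> iqnToData,
                                       String iScsiName, Map<String, String> volumeRow) {
            final Map<String, String> data = iqnToData.get(iScsiName);
            if (data == null) {
                return false; // the hypervisor reported nothing for this volume
            }
            boolean update = false;
            if (data.get("path") != null) {        // stand-in for StartAnswer.PATH
                volumeRow.put("path", data.get("path"));
                update = true;
            }
            if (data.get("imageFormat") != null) { // stand-in for StartAnswer.IMAGE_FORMAT
                volumeRow.put("format", data.get("imageFormat"));
                update = true;
            }
            return update;
        }
    }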
 +
 +    private void syncDiskChainChange(final StartAnswer answer) {
 +        final VirtualMachineTO vmSpec = answer.getVirtualMachine();
 +
 +        for (final DiskTO disk : vmSpec.getDisks()) {
 +            if (disk.getType() != Volume.Type.ISO) {
 +                final VolumeObjectTO vol = (VolumeObjectTO)disk.getData();
 +                final VolumeVO volume = _volsDao.findById(vol.getId());
 +
 +                // Use getPath() from VolumeVO to get a fresh copy of what's in the DB.
 +                // Before doing this, in a certain situation, getPath() from VolumeObjectTO
 +                // returned null instead of an actual path (because it was out of date with the DB).
 +                if(vol.getPath() != null) {
 +                    volumeMgr.updateVolumeDiskChain(vol.getId(), vol.getPath(), vol.getChainInfo());
 +                } else {
 +                    volumeMgr.updateVolumeDiskChain(vol.getId(), volume.getPath(), vol.getChainInfo());
 +                }
 +            }
 +        }
 +    }
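The comment in syncDiskChainChange describes a staleness guard: prefer the path carried back in the answer's VolumeObjectTO, but fall back to the fresh DB copy when the TO's path is null. Reduced to its essence (a sketch, not the CloudStack API):

    class PathFallbackSketch {
        // answerPath: path from the returned VolumeObjectTO; dbPath: fresh copy from the DB
        static String effectivePath(String answerPath, String dbPath) {
            return answerPath != null ? answerPath : dbPath;
        }

        public static void main(String[] args) {
            // the DB copy wins when the TO is stale
            System.out.println(effectivePath(null, "/var/lib/libvirt/images/vol-1"));
            System.out.println(effectivePath("/new/chain/vol-1", "/var/lib/libvirt/images/vol-1"));
        }
    }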
 +
 +    @Override
 +    public void stop(final String vmUuid) throws ResourceUnavailableException {
 +        try {
 +            advanceStop(vmUuid, false);
 +        } catch (final OperationTimedoutException e) {
 +            throw new AgentUnavailableException("Unable to stop vm because the operation to stop timed out", e.getAgentId(), e);
 +        } catch (final ConcurrentOperationException e) {
 +            throw new CloudRuntimeException("Unable to stop vm because of a concurrent operation", e);
 +        }
 +
 +    }
 +
 +    @Override
 +    public void stopForced(String vmUuid) throws ResourceUnavailableException {
 +        try {
 +            advanceStop(vmUuid, true);
 +        } catch (final OperationTimedoutException e) {
 +            throw new AgentUnavailableException("Unable to stop vm because the operation to stop timed out", e.getAgentId(), e);
 +        } catch (final ConcurrentOperationException e) {
 +            throw new CloudRuntimeException("Unable to stop vm because of a concurrent operation", e);
 +        }
 +    }
 +
 +    @Override
 +    public boolean getExecuteInSequence(final HypervisorType hypervisorType) {
 +        if (HypervisorType.KVM == hypervisorType || HypervisorType.XenServer == hypervisorType || HypervisorType.Hyperv == hypervisorType || HypervisorType.LXC == hypervisorType) {
 +            return false;
 +        } else if (HypervisorType.VMware == hypervisorType) {
 +            final Boolean fullClone = HypervisorGuru.VmwareFullClone.value();
 +            return fullClone;
 +        } else {
 +            return ExecuteInSequence.value();
 +        }
 +    }
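getExecuteInSequence is effectively a small decision table. Restated as a pure function for clarity — the two booleans stand in for the config lookups (HypervisorGuru.VmwareFullClone and the global ExecuteInSequence setting), and the enum is a hypothetical stand-in for HypervisorType:

    class ExecuteInSequenceSketch {
        enum Hv { KVM, XenServer, Hyperv, LXC, VMware, Other }

        static boolean executeInSequence(Hv hv, boolean vmwareFullClone, boolean globalDefault) {
            switch (hv) {
                case KVM:
                case XenServer:
                case Hyperv:
                case LXC:
                    return false;           // these hypervisors can start VMs concurrently
                case VMware:
                    return vmwareFullClone; // full clones force sequential starts
                default:
                    return globalDefault;   // everything else follows the global setting
            }
        }

        public static void main(String[] args) {
            System.out.println(executeInSequence(Hv.KVM, true, true));     // false
            System.out.println(executeInSequence(Hv.VMware, true, false)); // true
        }
    }

Keeping the decision free of side effects like this also makes it trivially unit-testable.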
 +
 +    private List<Map<String, String>> getVolumesToDisconnect(VirtualMachine vm) {
 +        List<Map<String, String>> volumesToDisconnect = new ArrayList<>();
 +
 +        List<VolumeVO> volumes = _volsDao.findByInstance(vm.getId());
 +
 +        if (CollectionUtils.isEmpty(volumes)) {
 +            return volumesToDisconnect;
 +        }
 +
 +        for (VolumeVO volume : volumes) {
 +            StoragePoolVO storagePool = _storagePoolDao.findById(volume.getPoolId());
 +
 +            if (storagePool != null && storagePool.isManaged()) {
 +                Map<String, String> info = new HashMap<>(3);
 +
 +                info.put(DiskTO.STORAGE_HOST, storagePool.getHostAddress());
 +                info.put(DiskTO.STORAGE_PORT, String.valueOf(storagePool.getPort()));
 +                info.put(DiskTO.IQN, volume.get_iScsiName());
 +
 +                volumesToDisconnect.add(info);
 +            }
 +        }
 +
 +        return volumesToDisconnect;
 +    }
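Each entry built in getVolumesToDisconnect is a small map describing how the host reaches one managed volume; the keys mirror DiskTO.STORAGE_HOST, DiskTO.STORAGE_PORT and DiskTO.IQN. A sketch of that shape, with literal key names as placeholders for the constants:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class DisconnectInfoSketch {
        // One map per managed volume, mirroring the three DiskTO keys used above.
        static Map<String, String> info(String host, int port, String iqn) {
            final Map<String, String> m = new HashMap<>(3);
            m.put("storageHost", host);
            m.put("storagePort", String.valueOf(port));
            m.put("iqn", iqn);
            return m;
        }

        public static void main(String[] args) {
            final List<Map<String, String>> volumesToDisconnect = new ArrayList<>();
            volumesToDisconnect.add(info("10.1.1.10", 3260, "iqn.2018-05.org.example:vol-1"));
            System.out.println(volumesToDisconnect);
        }
    }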
 +
 +    protected boolean sendStop(final VirtualMachineGuru guru, final VirtualMachineProfile profile, final boolean force, final boolean checkBeforeCleanup) {
 +        final VirtualMachine vm = profile.getVirtualMachine();
 +        StopCommand stpCmd = new StopCommand(vm, getExecuteInSequence(vm.getHypervisorType()), checkBeforeCleanup);
 +        stpCmd.setControlIp(getControlNicIpForVM(vm));
 +        stpCmd.setVolumesToDisconnect(getVolumesToDisconnect(vm));
 +        final StopCommand stop = stpCmd;
 +        try {
 +            Answer answer = null;
 +            if(vm.getHostId() != null) {
 +                answer = _agentMgr.send(vm.getHostId(), stop);
 +            }
 +            if (answer != null && answer instanceof StopAnswer) {
 +                final StopAnswer stopAns = (StopAnswer)answer;
 +                if (vm.getType() == VirtualMachine.Type.User) {
 +                    final String platform = stopAns.getPlatform();
 +                    if (platform != null) {
 +                        final UserVmVO userVm = _userVmDao.findById(vm.getId());
 +                        _userVmDao.loadDetails(userVm);
 +                        userVm.setDetail("platform", platform);
 +                        _userVmDao.saveDetails(userVm);
 +                    }
 +                }
 +
 +                final GPUDeviceTO gpuDevice = stop.getGpuDevice();
 +                if (gpuDevice != null) {
 +                    _resourceMgr.updateGPUDetails(vm.getHostId(), gpuDevice.getGroupDetails());
 +                }
 +                if (!answer.getResult()) {
 +                    final String details = answer.getDetails();
 +                    s_logger.debug("Unable to stop VM due to " + details);
 +                    return false;
 +                }
 +
 +                guru.finalizeStop(profile, answer);
 +            } else {
 +                s_logger.error("Invalid answer received in response to a StopCommand for " + vm.getInstanceName());
 +                return false;
 +            }
 +
 +        } catch (final AgentUnavailableException e) {
 +            if (!force) {
 +                return false;
 +            }
 +        } catch (final OperationTimedoutException e) {
 +            if (!force) {
 +                return false;
 +            }
 +        }
 +
 +        return true;
 +    }
 +
 +    protected boolean cleanup(final VirtualMachineGuru guru, final VirtualMachineProfile profile, final ItWorkVO work, final Event event, final boolean cleanUpEvenIfUnableToStop) {
 +        final VirtualMachine vm = profile.getVirtualMachine();
 +        final State state = vm.getState();
 +        s_logger.debug("Cleaning up resources for the vm " + vm + " in " + state + " state");
 +        try {
 +            if (state == State.Starting) {
 +                if (work != null) {
 +                    final Step step = work.getStep();
 +                    if (step == Step.Starting && !cleanUpEvenIfUnableToStop) {
 +                        s_logger.warn("Unable to cleanup vm " + vm + "; work state is incorrect: " + step);
 +                        return false;
 +                    }
 +
 +                    if (step == Step.Started || step == Step.Starting || step == Step.Release) {
 +                        if (vm.getHostId() != null) {
 +                            if (!sendStop(guru, profile, cleanUpEvenIfUnableToStop, false)) {
 +                                s_logger.warn("Failed to stop vm " + vm + " in " + State.Starting + " state as a part of cleanup process");
 +                                return false;
 +                            }
 +                        }
 +                    }
 +
 +                    if (step != Step.Release && step != Step.Prepare && step != Step.Started && step != Step.Starting) {
 +                        s_logger.debug("Cleanup is not needed for vm " + vm + "; work state is incorrect: " + step);
 +                        return true;
 +                    }
 +                } else {
 +                    if (vm.getHostId() != null) {
 +                        if (!sendStop(guru, profile, cleanUpEvenIfUnableToStop, false)) {
 +                            s_logger.warn("Failed to stop vm " + vm + " in " + State.Starting + " state as a part of cleanup process");
 +                            return false;
 +                        }
 +                    }
 +                }
 +
 +            } else if (state == State.Stopping) {
 +                if (vm.getHostId() != null) {
 +                    if (!sendStop(guru, profile, cleanUpEvenIfUnableToStop, false)) {
 +                        s_logger.warn("Failed to stop vm " + vm + " in " + State.Stopping + " state as a part of cleanup process");
 +                        return false;
 +                    }
 +                }
 +            } else if (state == State.Migrating) {
 +                if (vm.getHostId() != null) {
 +                    if (!sendStop(guru, profile, cleanUpEvenIfUnableToStop, false)) {
 +                        s_logger.warn("Failed to stop vm " + vm + " in " + State.Migrating + " state as a part of cleanup process");
 +                        return false;
 +                    }
 +                }
 +                if (vm.getLastHostId() != null) {
 +                    if (!sendStop(guru, profile, cleanUpEvenIfUnableToStop, false)) {
 +                        s_logger.warn("Failed to stop vm " + vm + " in " + State.Migrating + " state as a part of cleanup process");
 +                        return false;
 +                    }
 +                }
 +            } else if (state == State.Running) {
 +                if (!sendStop(guru, profile, cleanUpEvenIfUnableToStop, false)) {
 +                    s_logger.warn("Failed to stop vm " + vm + " in " + State.Running + " state as a part of cleanup process");
 +                    return false;
 +                }
 +            }
 +        } finally {
 +            try {
 +                _networkMgr.release(profile, cleanUpEvenIfUnableToStop);
 +                s_logger.debug("Successfully released network resources for the vm " + vm);
 +            } catch (final Exception e) {
 +                s_logger.warn("Unable to release some network resources.", e);
 +            }
 +
 +            volumeMgr.release(profile);
 +            s_logger.debug("Successfully cleanued up resources for the vm " + vm + " in " + state + " state");
 +        }
 +
 +        return true;
 +    }
 +
 +    @Override
 +    public void advanceStop(final String vmUuid, final boolean cleanUpEvenIfUnableToStop)
 +            throws AgentUnavailableException, OperationTimedoutException, ConcurrentOperationException {
 +
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if (jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +
 +            VmWorkJobVO placeHolder = null;
 +            final VirtualMachine vm = _vmDao.findByUuid(vmUuid);
 +            placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                orchestrateStop(vmUuid, cleanUpEvenIfUnableToStop);
 +            } finally {
 +                if (placeHolder != null) {
 +                    _workJobDao.expunge(placeHolder.getId());
 +                }
 +            }
 +
 +        } else {
 +            final Outcome<VirtualMachine> outcome = stopVmThroughJobQueue(vmUuid, cleanUpEvenIfUnableToStop);
 +
 +            try {
 +                final VirtualMachine vm = outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution excetion", e);
 +            }
 +
 +            final Object jobResult = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobResult != null) {
 +                if (jobResult instanceof AgentUnavailableException) {
 +                    throw (AgentUnavailableException)jobResult;
 +                } else if (jobResult instanceof ConcurrentOperationException) {
 +                    throw (ConcurrentOperationException)jobResult;
 +                } else if (jobResult instanceof OperationTimedoutException) {
 +                    throw (OperationTimedoutException)jobResult;
 +                } else if (jobResult instanceof RuntimeException) {
 +                    throw (RuntimeException)jobResult;
 +                } else if (jobResult instanceof Throwable) {
 +                    throw new RuntimeException("Unexpected exception", (Throwable)jobResult);
 +                }
 +            }
 +        }
 +    }
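advanceStop shows the dispatch pattern that storageMigration and migrate repeat below: run the orchestration inline when already on a VM work-job thread (guarded by a placeholder work item), otherwise funnel the call through the job queue and rethrow the serialized result with its original type. A sketch of the rethrow step alone, with Object standing in for the unmarshalled job result; the real code enumerates the concrete checked types instead of a blanket Exception:

    class JobResultSketch {
        static void rethrow(Object jobResult) throws Exception {
            if (jobResult == null) {
                return; // success: nothing to propagate
            }
            if (jobResult instanceof RuntimeException) {
                throw (RuntimeException) jobResult; // unchecked, rethrown as-is
            }
            if (jobResult instanceof Exception) {
                throw (Exception) jobResult; // checked exceptions keep their type
            }
            if (jobResult instanceof Throwable) {
                throw new RuntimeException("Unexpected exception", (Throwable) jobResult);
            }
        }
    }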
 +
 +    private void orchestrateStop(final String vmUuid, final boolean cleanUpEvenIfUnableToStop) throws AgentUnavailableException, OperationTimedoutException, ConcurrentOperationException {
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        advanceStop(vm, cleanUpEvenIfUnableToStop);
 +    }
 +
 +    private void advanceStop(final VMInstanceVO vm, final boolean cleanUpEvenIfUnableToStop) throws AgentUnavailableException, OperationTimedoutException,
 +    ConcurrentOperationException {
 +        final State state = vm.getState();
 +        if (state == State.Stopped) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("VM is already stopped: " + vm);
 +            }
 +            return;
 +        }
 +
 +        if (state == State.Destroyed || state == State.Expunging || state == State.Error) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("Stopped called on " + vm + " but the state is " + state);
 +            }
 +            return;
 +        }
 +        // grab outstanding work item if any
 +        final ItWorkVO work = _workDao.findByOutstandingWork(vm.getId(), vm.getState());
 +        if (work != null) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("Found an outstanding work item for this vm " + vm + " with state:" + vm.getState() + ", work id:" + work.getId());
 +            }
 +        }
 +        final Long hostId = vm.getHostId();
 +        if (hostId == null) {
 +            if (!cleanUpEvenIfUnableToStop) {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("HostId is null but this is not a forced stop, cannot stop vm " + vm + " with state:" + vm.getState());
 +                }
 +                throw new CloudRuntimeException("Unable to stop " + vm);
 +            }
 +            try {
 +                stateTransitTo(vm, Event.AgentReportStopped, null, null);
 +            } catch (final NoTransitionException e) {
 +                s_logger.warn(e.getMessage());
 +            }
 +            // mark outstanding work item if any as done
 +            if (work != null) {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Updating work item to Done, id:" + work.getId());
 +                }
 +                work.setStep(Step.Done);
 +                _workDao.update(work.getId(), work);
 +            }
 +            return;
 +        } else {
 +            HostVO host = _hostDao.findById(hostId);
 +            if (!cleanUpEvenIfUnableToStop && vm.getState() == State.Running && host.getResourceState() == ResourceState.PrepareForMaintenance) {
 +                s_logger.debug("Host is in PrepareForMaintenance state - Stop VM operation on the VM id: " + vm.getId() + " is not allowed");
 +                throw new CloudRuntimeException("Stop VM operation on the VM id: " + vm.getId() + " is not allowed as host is preparing for maintenance mode");
 +            }
 +        }
 +
 +        final VirtualMachineGuru vmGuru = getVmGuru(vm);
 +        final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
 +
 +        try {
 +            if (!stateTransitTo(vm, Event.StopRequested, vm.getHostId())) {
 +                throw new ConcurrentOperationException("VM is being operated on.");
 +            }
 +        } catch (final NoTransitionException e1) {
 +            if (!cleanUpEvenIfUnableToStop) {
 +                throw new CloudRuntimeException("We cannot stop " + vm + " when it is in state " + vm.getState());
 +            }
 +            final boolean doCleanup = true;
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("Unable to transition the state but we're moving on because it's forced stop");
 +            }
 +
 +            if (doCleanup) {
 +                if (cleanup(vmGuru, new VirtualMachineProfileImpl(vm), work, Event.StopRequested, cleanUpEvenIfUnableToStop)) {
 +                    try {
 +                        if (s_logger.isDebugEnabled() && work != null) {
 +                            s_logger.debug("Updating work item to Done, id:" + work.getId());
 +                        }
 +                        if (!changeState(vm, Event.AgentReportStopped, null, work, Step.Done)) {
 +                            throw new CloudRuntimeException("Unable to stop " + vm);
 +                        }
 +
 +                    } catch (final NoTransitionException e) {
 +                        s_logger.warn("Unable to cleanup " + vm);
 +                        throw new CloudRuntimeException("Unable to stop " + vm, e);
 +                    }
 +                } else {
 +                    if (s_logger.isDebugEnabled()) {
 +                        s_logger.debug("Failed to cleanup VM: " + vm);
 +                    }
 +                    throw new CloudRuntimeException("Failed to cleanup " + vm + " , current state " + vm.getState());
 +                }
 +            }
 +        }
 +
 +        if (vm.getState() != State.Stopping) {
 +            throw new CloudRuntimeException("We cannot proceed with stop VM " + vm + " since it is not in 'Stopping' state, current state: " + vm.getState());
 +        }
 +
 +        vmGuru.prepareStop(profile);
 +
 +        final StopCommand stop = new StopCommand(vm, getExecuteInSequence(vm.getHypervisorType()), false, cleanUpEvenIfUnableToStop);
 +        stop.setControlIp(getControlNicIpForVM(vm));
 +
 +        boolean stopped = false;
 +        Answer answer = null;
 +        try {
 +            answer = _agentMgr.send(vm.getHostId(), stop);
 +            if (answer != null) {
 +                if (answer instanceof StopAnswer) {
 +                    final StopAnswer stopAns = (StopAnswer)answer;
 +                    if (vm.getType() == VirtualMachine.Type.User) {
 +                        final String platform = stopAns.getPlatform();
 +                        if (platform != null) {
 +                            final UserVmVO userVm = _userVmDao.findById(vm.getId());
 +                            _userVmDao.loadDetails(userVm);
 +                            userVm.setDetail("platform", platform);
 +                            _userVmDao.saveDetails(userVm);
 +                        }
 +                    }
 +                }
 +                stopped = answer.getResult();
 +                if (!stopped) {
 +                    throw new CloudRuntimeException("Unable to stop the virtual machine due to " + answer.getDetails());
 +                }
 +                vmGuru.finalizeStop(profile, answer);
 +                final GPUDeviceTO gpuDevice = stop.getGpuDevice();
 +                if (gpuDevice != null) {
 +                    _resourceMgr.updateGPUDetails(vm.getHostId(), gpuDevice.getGroupDetails());
 +                }
 +            } else {
 +                throw new CloudRuntimeException("Invalid answer received in response to a StopCommand on " + vm.instanceName);
 +            }
 +
 +        } catch (final AgentUnavailableException e) {
 +            s_logger.warn("Unable to stop vm, agent unavailable: " + e.toString());
 +        } catch (final OperationTimedoutException e) {
 +            s_logger.warn("Unable to stop vm, operation timed out: " + e.toString());
 +        } finally {
 +            if (!stopped) {
 +                if (!cleanUpEvenIfUnableToStop) {
 +                    s_logger.warn("Unable to stop vm " + vm);
 +                    try {
 +                        stateTransitTo(vm, Event.OperationFailed, vm.getHostId());
 +                    } catch (final NoTransitionException e) {
 +                        s_logger.warn("Unable to transition the state " + vm);
 +                    }
 +                    throw new CloudRuntimeException("Unable to stop " + vm);
 +                } else {
 +                    s_logger.warn("Unable to actually stop " + vm + " but continue with release because it's a force stop");
 +                    vmGuru.finalizeStop(profile, answer);
 +                }
 +            }
 +        }
 +
 +        if (s_logger.isDebugEnabled()) {
 +            s_logger.debug(vm + " is stopped on the host.  Proceeding to release resource held.");
 +        }
 +
 +        try {
 +            _networkMgr.release(profile, cleanUpEvenIfUnableToStop);
 +            s_logger.debug("Successfully released network resources for the vm " + vm);
 +        } catch (final Exception e) {
 +            s_logger.warn("Unable to release some network resources.", e);
 +        }
 +
 +        try {
 +            if (vm.getHypervisorType() != HypervisorType.BareMetal) {
 +                volumeMgr.release(profile);
 +                s_logger.debug("Successfully released storage resources for the vm " + vm);
 +            }
 +        } catch (final Exception e) {
 +            s_logger.warn("Unable to release storage resources.", e);
 +        }
 +
 +        try {
 +            if (work != null) {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Updating the outstanding work item to Done, id:" + work.getId());
 +                }
 +                work.setStep(Step.Done);
 +                _workDao.update(work.getId(), work);
 +            }
 +
 +            if (!stateTransitTo(vm, Event.OperationSucceeded, null)) {
 +                throw new CloudRuntimeException("unable to stop " + vm);
 +            }
 +        } catch (final NoTransitionException e) {
 +            s_logger.warn(e.getMessage());
 +            throw new CloudRuntimeException("Unable to stop " + vm);
 +        }
 +    }
 +
 +    private void setStateMachine() {
 +        _stateMachine = VirtualMachine.State.getStateMachine();
 +    }
 +
 +    protected boolean stateTransitTo(final VMInstanceVO vm, final VirtualMachine.Event e, final Long hostId, final String reservationId) throws NoTransitionException {
 +        // if there are active vm snapshots task, state change is not allowed
 +
 +        // Disable this hacking thing; VM snapshot tasks need to be managed by their own orchestration flow instead of
 +        // being hacked here in the general VM manager
 +        /*
 +                if (_vmSnapshotMgr.hasActiveVMSnapshotTasks(vm.getId())) {
 +                    s_logger.error("State transit with event: " + e + " failed due to: " + vm.getInstanceName() + " has active VM snapshots tasks");
 +                    return false;
 +                }
 +         */
 +        vm.setReservationId(reservationId);
 +        return _stateMachine.transitTo(vm, e, new Pair<Long, Long>(vm.getHostId(), hostId), _vmDao);
 +    }
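stateTransitTo ultimately delegates to a guarded state machine: a (state, event) pair either yields the next state or fails with NoTransitionException. A toy, table-driven sketch of that contract — not the actual com.cloud.utils.fsm.StateMachine2 API, and with IllegalStateException in place of NoTransitionException:

    import java.util.HashMap;
    import java.util.Map;

    class FsmSketch {
        enum St { Stopped, Starting, Running, Stopping }
        enum Ev { StartRequested, OperationSucceeded, StopRequested }

        // (state, event) -> next state; an unmapped pair is the NoTransitionException analogue.
        static final Map<String, St> TABLE = new HashMap<>();
        static {
            TABLE.put(St.Stopped + ":" + Ev.StartRequested, St.Starting);
            TABLE.put(St.Starting + ":" + Ev.OperationSucceeded, St.Running);
            TABLE.put(St.Running + ":" + Ev.StopRequested, St.Stopping);
            TABLE.put(St.Stopping + ":" + Ev.OperationSucceeded, St.Stopped);
        }

        static St transitTo(St current, Ev event) {
            final St next = TABLE.get(current + ":" + event);
            if (next == null) {
                throw new IllegalStateException("No transition from " + current + " on " + event);
            }
            return next;
        }

        public static void main(String[] args) {
            final St s = transitTo(St.Stopped, Ev.StartRequested);    // Starting
            System.out.println(transitTo(s, Ev.OperationSucceeded));  // Running
        }
    }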
 +
 +    @Override
 +    public boolean stateTransitTo(final VirtualMachine vm1, final VirtualMachine.Event e, final Long hostId) throws NoTransitionException {
 +        final VMInstanceVO vm = (VMInstanceVO)vm1;
 +
 +        /*
 +         *  Remove the hacking logic here.
 +                // if there are active vm snapshots task, state change is not allowed
 +                if (_vmSnapshotMgr.hasActiveVMSnapshotTasks(vm.getId())) {
 +                    s_logger.error("State transit with event: " + e + " failed due to: " + vm.getInstanceName() + " has active VM snapshots tasks");
 +                    return false;
 +                }
 +         */
 +
 +        final State oldState = vm.getState();
 +        if (oldState == State.Starting) {
 +            if (e == Event.OperationSucceeded) {
 +                vm.setLastHostId(hostId);
 +            }
 +        } else if (oldState == State.Stopping) {
 +            if (e == Event.OperationSucceeded) {
 +                vm.setLastHostId(vm.getHostId());
 +            }
 +        }
 +        return _stateMachine.transitTo(vm, e, new Pair<Long, Long>(vm.getHostId(), hostId), _vmDao);
 +    }
 +
 +    @Override
 +    public void destroy(final String vmUuid, final boolean expunge) throws AgentUnavailableException, OperationTimedoutException, ConcurrentOperationException {
 +        VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +        if (vm == null || vm.getState() == State.Destroyed || vm.getState() == State.Expunging || vm.getRemoved() != null) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("Unable to find vm or vm is destroyed: " + vm);
 +            }
 +            return;
 +        }
 +
 +        if (s_logger.isDebugEnabled()) {
 +            s_logger.debug("Destroying vm " + vm + ", expunge flag " + (expunge ? "on" : "off"));
 +        }
 +
 +        advanceStop(vmUuid, VmDestroyForcestop.value());
 +
 +        deleteVMSnapshots(vm, expunge);
 +
 +        Transaction.execute(new TransactionCallbackWithExceptionNoReturn<CloudRuntimeException>() {
 +            @Override
 +            public void doInTransactionWithoutResult(final TransactionStatus status) throws CloudRuntimeException {
 +                VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +                try {
 +                    if (!stateTransitTo(vm, VirtualMachine.Event.DestroyRequested, vm.getHostId())) {
 +                        s_logger.debug("Unable to destroy the vm because it is not in the correct state: " + vm);
 +                        throw new CloudRuntimeException("Unable to destroy " + vm);
 +                    }
 +                } catch (final NoTransitionException e) {
 +                    s_logger.debug(e.getMessage());
 +                    throw new CloudRuntimeException("Unable to destroy " + vm, e);
 +                }
 +            }
 +        });
 +    }
 +
 +    /**
 +     * Delete vm snapshots depending on vm's hypervisor type. For VMware, vm snapshot removal is delegated to the vm cleanup thread
 +     * to reduce the number of tasks sent to the hypervisor (one task to delete the vm snapshots and the vm itself
 +     * instead of one task for each vm snapshot plus another for the vm).
 +     * @param vm vm
 +     * @param expunge indicates if vm should be expunged
 +     */
 +    private void deleteVMSnapshots(VMInstanceVO vm, boolean expunge) {
 +        if (!vm.getHypervisorType().equals(HypervisorType.VMware)) {
 +            if (!_vmSnapshotMgr.deleteAllVMSnapshots(vm.getId(), null)) {
 +                s_logger.debug("Unable to delete all snapshots for " + vm);
 +                throw new CloudRuntimeException("Unable to delete vm snapshots for " + vm);
 +            }
 +        } else if (expunge) {
 +            _vmSnapshotMgr.deleteVMSnapshotsFromDB(vm.getId());
 +        }
 +    }
 +
 +    protected boolean checkVmOnHost(final VirtualMachine vm, final long hostId) throws AgentUnavailableException, OperationTimedoutException {
 +        final Answer answer = _agentMgr.send(hostId, new CheckVirtualMachineCommand(vm.getInstanceName()));
 +        if (answer == null || !answer.getResult()) {
 +            return false;
 +        }
 +        if (answer instanceof CheckVirtualMachineAnswer) {
 +            final CheckVirtualMachineAnswer vmAnswer = (CheckVirtualMachineAnswer)answer;
 +            if (vmAnswer.getState() == PowerState.PowerOff) {
 +                return false;
 +            }
 +        }
 +
 +        UserVmVO userVm = _userVmDao.findById(vm.getId());
 +        if (userVm != null) {
 +            List<VMSnapshotVO> vmSnapshots = _vmSnapshotDao.findByVm(vm.getId());
 +            RestoreVMSnapshotCommand command = _vmSnapshotMgr.createRestoreCommand(userVm, vmSnapshots);
 +            if (command != null) {
 +                RestoreVMSnapshotAnswer restoreVMSnapshotAnswer = (RestoreVMSnapshotAnswer) _agentMgr.send(hostId, command);
 +                if (restoreVMSnapshotAnswer == null || !restoreVMSnapshotAnswer.getResult()) {
 +                    // guard against an NPE when no answer came back at all
 +                    s_logger.warn("Unable to restore the vm snapshot from image file after live migration of vm with vmsnapshots: " + (restoreVMSnapshotAnswer == null ? "no answer" : restoreVMSnapshotAnswer.getDetails()));
 +                }
 +            }
 +        }
 +
 +        return true;
 +    }
 +
 +    @Override
 +    public void storageMigration(final String vmUuid, final StoragePool destPool) {
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if (jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +            VmWorkJobVO placeHolder = null;
 +            final VirtualMachine vm = _vmDao.findByUuid(vmUuid);
 +            placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                orchestrateStorageMigration(vmUuid, destPool);
 +            } finally {
 +                if (placeHolder != null) {
 +                    _workJobDao.expunge(placeHolder.getId());
 +                }
 +            }
 +        } else {
 +            final Outcome<VirtualMachine> outcome = migrateVmStorageThroughJobQueue(vmUuid, destPool);
 +
 +            try {
 +                final VirtualMachine vm = outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution excetion", e);
 +            }
 +
 +            final Object jobResult = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobResult != null) {
 +                if (jobResult instanceof RuntimeException) {
 +                    throw (RuntimeException)jobResult;
 +                } else if (jobResult instanceof Throwable) {
 +                    throw new RuntimeException("Unexpected exception", (Throwable)jobResult);
 +                }
 +            }
 +        }
 +    }
 +
 +    private void orchestrateStorageMigration(final String vmUuid, final StoragePool destPool) {
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        if (destPool == null) {
 +            throw new CloudRuntimeException("Unable to migrate vm: missing destination storage pool");
 +        }
 +
 +        try {
 +            stateTransitTo(vm, VirtualMachine.Event.StorageMigrationRequested, null);
 +        } catch (final NoTransitionException e) {
 +            s_logger.debug("Unable to migrate vm: " + e.toString());
 +            throw new CloudRuntimeException("Unable to migrate vm: " + e.toString());
 +        }
 +
 +        final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
 +        boolean migrationResult = false;
 +        try {
 +            migrationResult = volumeMgr.storageMigration(profile, destPool);
 +
 +            if (migrationResult) {
 +                //if the vm is migrated to a different pod in basic mode, we need to reallocate the ip
 +
 +                if (destPool.getPodId() != null && !destPool.getPodId().equals(vm.getPodIdToDeployIn())) {
 +                    final DataCenterDeployment plan = new DataCenterDeployment(vm.getDataCenterId(), destPool.getPodId(), null, null, null, null);
 +                    final VirtualMachineProfileImpl vmProfile = new VirtualMachineProfileImpl(vm, null, null, null, null);
 +                    _networkMgr.reallocate(vmProfile, plan);
 +                }
 +
 +                //when the vm is started next time, don't look at last_host_id, only choose the host based on volume/storage pool
 +                vm.setLastHostId(null);
 +                vm.setPodIdToDeployIn(destPool.getPodId());
 +
 +                // If VM was cold migrated between clusters belonging to two different VMware DCs,
 +                // unregister the VM from the source host and cleanup the associated VM files.
 +                if (vm.getHypervisorType().equals(HypervisorType.VMware)) {
 +                    Long srcClusterId = null;
 +                    Long srcHostId = vm.getHostId() != null ? vm.getHostId() : vm.getLastHostId();
 +                    if (srcHostId != null) {
 +                        HostVO srcHost = _hostDao.findById(srcHostId);
 +                        srcClusterId = srcHost.getClusterId();
 +                    }
 +
 +                    final Long destClusterId = destPool.getClusterId();
 +                    if (srcClusterId != null && destClusterId != null && ! srcClusterId.equals(destClusterId)) {
 +                        final String srcDcName = _clusterDetailsDao.getVmwareDcName(srcClusterId);
 +                        final String destDcName = _clusterDetailsDao.getVmwareDcName(destClusterId);
 +                        if (srcDcName != null && destDcName != null && !srcDcName.equals(destDcName)) {
 +                            s_logger.debug("Since VM's storage was successfully migrated across VMware Datacenters, unregistering VM: " + vm.getInstanceName() +
 +                                    " from source host: " + srcHostId);
 +                            final UnregisterVMCommand uvc = new UnregisterVMCommand(vm.getInstanceName());
 +                            uvc.setCleanupVmFiles(true);
 +                            try {
 +                                _agentMgr.send(srcHostId, uvc);
 +                            } catch (final AgentUnavailableException | OperationTimedoutException e) {
 +                                throw new CloudRuntimeException("Failed to unregister VM: " + vm.getInstanceName() + " from source host: " + srcHostId +
 +                                        " after successfully migrating VM's storage across VMware Datacenters");
 +                            }
 +                        }
 +                    }
 +                }
 +
 +            } else {
 +                s_logger.debug("Storage migration failed");
 +            }
 +        } catch (final ConcurrentOperationException | InsufficientCapacityException | StorageUnavailableException e) {
 +            // the network- and address-capacity exceptions are subtypes of InsufficientCapacityException,
 +            // so this multi-catch covers all the cases handled identically before
 +            s_logger.debug("Failed to migrate: " + e.toString());
 +            throw new CloudRuntimeException("Failed to migrate: " + e.toString());
 +        } finally {
 +            try {
 +                stateTransitTo(vm, VirtualMachine.Event.AgentReportStopped, null);
 +            } catch (final NoTransitionException e) {
 +                s_logger.debug("Failed to change vm state: " + e.toString());
 +                throw new CloudRuntimeException("Failed to change vm state: " + e.toString());
 +            }
 +        }
 +    }
 +
 +    @Override
 +    public void migrate(final String vmUuid, final long srcHostId, final DeployDestination dest)
 +            throws ResourceUnavailableException, ConcurrentOperationException {
 +
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if (jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +            VmWorkJobVO placeHolder = null;
 +            final VirtualMachine vm = _vmDao.findByUuid(vmUuid);
 +            placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                orchestrateMigrate(vmUuid, srcHostId, dest);
 +            } finally {
 +                if (placeHolder != null) {
 +                    _workJobDao.expunge(placeHolder.getId());
 +                }
 +            }
 +        } else {
 +            final Outcome<VirtualMachine> outcome = migrateVmThroughJobQueue(vmUuid, srcHostId, dest);
 +
 +            try {
 +                final VirtualMachine vm = outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution excetion", e);
 +            }
 +
 +            final Object jobResult = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobResult != null) {
 +                if (jobResult instanceof ResourceUnavailableException) {
 +                    throw (ResourceUnavailableException)jobResult;
 +                } else if (jobResult instanceof ConcurrentOperationException) {
 +                    throw (ConcurrentOperationException)jobResult;
 +                } else if (jobResult instanceof RuntimeException) {
 +                    throw (RuntimeException)jobResult;
 +                } else if (jobResult instanceof Throwable) {
 +                    throw new RuntimeException("Unexpected exception", (Throwable)jobResult);
 +                }
 +
 +            }
 +        }
 +    }
 +
 +    private void orchestrateMigrate(final String vmUuid, final long srcHostId, final DeployDestination dest) throws ResourceUnavailableException, ConcurrentOperationException {
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +        if (vm == null) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("Unable to find the vm " + vmUuid);
 +            }
 +            throw new CloudRuntimeException("Unable to find a virtual machine with id " + vmUuid);
 +        }
 +        migrate(vm, srcHostId, dest);
 +    }
 +
 +    protected void migrate(final VMInstanceVO vm, final long srcHostId, final DeployDestination dest) throws ResourceUnavailableException, ConcurrentOperationException {
 +        s_logger.info("Migrating " + vm + " to " + dest);
 +
 +        final long dstHostId = dest.getHost().getId();
 +        final Host fromHost = _hostDao.findById(srcHostId);
 +        if (fromHost == null) {
 +            s_logger.info("Unable to find the host to migrate from: " + srcHostId);
 +            throw new CloudRuntimeException("Unable to find the host to migrate from: " + srcHostId);
 +        }
 +
 +        if (fromHost.getClusterId().longValue() != dest.getCluster().getId()) {
 +            final List<VolumeVO> volumes = _volsDao.findCreatedByInstance(vm.getId());
 +            for (final VolumeVO volume : volumes) {
 +                if (!_storagePoolDao.findById(volume.getPoolId()).getScope().equals(ScopeType.ZONE)) {
 +                    s_logger.info("Source and destination host are not in same cluster and all volumes are not on zone wide primary store, unable to migrate to host: "
 +                            + dest.getHost().getId());
 +                    throw new CloudRuntimeException(
 +                            "Source and destination host are not in same cluster and all volumes are not on zone wide primary store, unable to migrate to host: "
 +                                    + dest.getHost().getId());
 +                }
 +            }
 +        }
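 +        // In other words (editor's note): a cross-cluster migrate without storage
 +        // migration is only allowed when every volume already lives on zone-wide
 +        // primary storage.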
 +
 +        final VirtualMachineGuru vmGuru = getVmGuru(vm);
 +
 +        if (vm.getState() != State.Running) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("VM is not Running, unable to migrate the vm " + vm);
 +            }
 +            throw new CloudRuntimeException("VM is not Running, unable to migrate the vm currently " + vm + " , current state: " + vm.getState().toString());
 +        }
 +
 +        AlertManager.AlertType alertType = AlertManager.AlertType.ALERT_TYPE_USERVM_MIGRATE;
 +        if (VirtualMachine.Type.DomainRouter.equals(vm.getType())) {
 +            alertType = AlertManager.AlertType.ALERT_TYPE_DOMAIN_ROUTER_MIGRATE;
 +        } else if (VirtualMachine.Type.ConsoleProxy.equals(vm.getType())) {
 +            alertType = AlertManager.AlertType.ALERT_TYPE_CONSOLE_PROXY_MIGRATE;
 +        }
 +
 +        final VirtualMachineProfile vmSrc = new VirtualMachineProfileImpl(vm);
 +        for (final NicProfile nic : _networkMgr.getNicProfiles(vm)) {
 +            vmSrc.addNic(nic);
 +        }
 +
 +        final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm, null, _offeringDao.findById(vm.getId(), vm.getServiceOfferingId()), null, null);
 +        _networkMgr.prepareNicForMigration(profile, dest);
 +        volumeMgr.prepareForMigration(profile, dest);
 +        profile.setConfigDriveLabel(VmConfigDriveLabel.value());
 +
 +        final VirtualMachineTO to = toVmTO(profile);
 +        final PrepareForMigrationCommand pfmc = new PrepareForMigrationCommand(to);
 +
 +        ItWorkVO work = new ItWorkVO(UUID.randomUUID().toString(), _nodeId, State.Migrating, vm.getType(), vm.getId());
 +        work.setStep(Step.Prepare);
 +        work.setResourceType(ItWorkVO.ResourceType.Host);
 +        work.setResourceId(dstHostId);
 +        work = _workDao.persist(work);
 +
 +        Answer pfma = null;
 +        try {
 +            pfma = _agentMgr.send(dstHostId, pfmc);
 +            if (pfma == null || !pfma.getResult()) {
 +                final String details = pfma != null ? pfma.getDetails() : "null answer returned";
 +                final String msg = "Unable to prepare for migration due to " + details;
 +                pfma = null;
 +                throw new AgentUnavailableException(msg, dstHostId);
 +            }
 +        } catch (final OperationTimedoutException e1) {
 +            throw new AgentUnavailableException("Operation timed out", dstHostId);
 +        } finally {
 +            if (pfma == null) {
 +                _networkMgr.rollbackNicForMigration(vmSrc, profile);
 +                work.setStep(Step.Done);
 +                _workDao.update(work.getId(), work);
 +            }
 +        }
 +
 +        vm.setLastHostId(srcHostId);
 +        try {
 +            if (vm == null || vm.getHostId() == null || vm.getHostId() != srcHostId || !changeState(vm, Event.MigrationRequested, dstHostId, work, Step.Migrating)) {
 +                _networkMgr.rollbackNicForMigration(vmSrc, profile);
 +                s_logger.info("Migration cancelled because state has changed: " + vm);
 +                throw new ConcurrentOperationException("Migration cancelled because state has changed: " + vm);
 +            }
 +        } catch (final NoTransitionException e1) {
 +            _networkMgr.rollbackNicForMigration(vmSrc, profile);
 +            s_logger.info("Migration cancelled because " + e1.getMessage());
 +            throw new ConcurrentOperationException("Migration cancelled because " + e1.getMessage());
 +        }
 +
 +        boolean migrated = false;
 +        try {
 +            final boolean isWindows = _guestOsCategoryDao.findById(_guestOsDao.findById(vm.getGuestOSId()).getCategoryId()).getName().equalsIgnoreCase("Windows");
 +            final MigrateCommand mc = new MigrateCommand(vm.getInstanceName(), dest.getHost().getPrivateIpAddress(), isWindows, to, getExecuteInSequence(vm.getHypervisorType()));
 +
 +            String autoConvergence = _configDao.getValue(Config.KvmAutoConvergence.toString());
 +            boolean kvmAutoConvergence = Boolean.parseBoolean(autoConvergence);
 +
 +            mc.setAutoConvergence(kvmAutoConvergence);
 +
 +            mc.setHostGuid(dest.getHost().getGuid());
 +
 +            try {
 +                final Answer ma = _agentMgr.send(vm.getLastHostId(), mc);
 +                if (ma == null || !ma.getResult()) {
 +                    final String details = ma != null ? ma.getDetails() : "null answer returned";
 +                    throw new CloudRuntimeException(details);
 +                }
 +            } catch (final OperationTimedoutException e) {
 +                if (e.isActive()) {
 +                    s_logger.warn("Active migration command so scheduling a restart for " + vm);
 +                    _haMgr.scheduleRestart(vm, true);
 +                }
 +                throw new AgentUnavailableException("Operation timed out on migrating " + vm, dstHostId);
 +            }
 +
 +            try {
 +                if (!changeState(vm, VirtualMachine.Event.OperationSucceeded, dstHostId, work, Step.Started)) {
 +                    throw new ConcurrentOperationException("Unable to change the state for " + vm);
 +                }
 +            } catch (final NoTransitionException e1) {
 +                throw new ConcurrentOperationException("Unable to change state due to " + e1.getMessage());
 +            }
 +
 +            try {
 +                if (!checkVmOnHost(vm, dstHostId)) {
 +                    s_logger.error("Unable to complete migration for " + vm);
 +                    try {
 +                        _agentMgr.send(srcHostId, new Commands(cleanup(vm)), null);
 +                    } catch (final AgentUnavailableException e) {
 +                        s_logger.error("AgentUnavailableException while cleanup on source host: " + srcHostId);
 +                    }
 +                    cleanup(vmGuru, new VirtualMachineProfileImpl(vm), work, Event.AgentReportStopped, true);
 +                    throw new CloudRuntimeException("Unable to complete migration for " + vm);
 +                }
 +            } catch (final OperationTimedoutException e) {
 +                s_logger.debug("Error while checking the vm " + vm + " on host " + dstHostId, e);
 +            }
 +
 +            migrated = true;
 +        } finally {
 +            if (!migrated) {
 +                s_logger.info("Migration was unsuccessful.  Cleaning up: " + vm);
 +                _networkMgr.rollbackNicForMigration(vmSrc, profile);
 +
 +                _alertMgr.sendAlert(alertType, fromHost.getDataCenterId(), fromHost.getPodId(),
 +                        "Unable to migrate vm " + vm.getInstanceName() + " from host " + fromHost.getName() + " in zone " + dest.getDataCenter().getName() + " and pod " +
 +                                dest.getPod().getName(), "Migrate Command failed.  Please check logs.");
 +                try {
 +                    _agentMgr.send(dstHostId, new Commands(cleanup(vm)), null);
 +                } catch (final AgentUnavailableException ae) {
 +                    s_logger.info("Looks like the destination Host is unavailable for cleanup");
 +                }
 +
 +                try {
 +                    stateTransitTo(vm, Event.OperationFailed, srcHostId);
 +                } catch (final NoTransitionException e) {
 +                    s_logger.warn(e.getMessage());
 +                }
 +            } else {
 +                _networkMgr.commitNicForMigration(vmSrc, profile);
 +            }
 +
 +            work.setStep(Step.Done);
 +            _workDao.update(work.getId(), work);
 +        }
 +    }
 +
 +    /**
 +     * Create the mapping of volumes and storage pools. If the user did not enter a mapping on her/his own, we create one using {@link #getDefaultMappingOfVolumesAndStoragePoolForMigration(VirtualMachineProfile, Host)}.
 +     * If the user provided a mapping, we use whatever the user has provided (check the method {@link #createMappingVolumeAndStoragePoolEnteredByUser(VirtualMachineProfile, Host, Map)}).
 +     */
 +    private Map<Volume, StoragePool> getPoolListForVolumesForMigration(VirtualMachineProfile profile, Host targetHost, Map<Long, Long> volumeToPool) {
 +        if (MapUtils.isEmpty(volumeToPool)) {
 +            return getDefaultMappingOfVolumesAndStoragePoolForMigration(profile, targetHost);
 +        }
 +
 +        return createMappingVolumeAndStoragePoolEnteredByUser(profile, targetHost, volumeToPool);
 +    }
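 +    // Illustrative example (editor's note, hypothetical ids): a user-supplied mapping
 +    // is a Map<Long, Long> of volumeId -> storagePoolId, e.g.
 +    //
 +    //     Map<Long, Long> volumeToPool = new HashMap<>();
 +    //     volumeToPool.put(rootVolumeId, targetPoolId);
 +    //
 +    // A null or empty map falls back to the default mapping computed by
 +    // getDefaultMappingOfVolumesAndStoragePoolForMigration().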
 +
 +    /**
 +     * We create the mapping of volumes to storage pools for the migration of the VM according to the information sent by the user.
 +     */
 +    private Map<Volume, StoragePool> createMappingVolumeAndStoragePoolEnteredByUser(VirtualMachineProfile profile, Host host, Map<Long, Long> volumeToPool) {
 +        Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<Volume, StoragePool>();
 +        for (Long volumeId : volumeToPool.keySet()) {
 +            VolumeVO volume = _volsDao.findById(volumeId);
 +
 +            Long poolId = volumeToPool.get(volumeId);
 +            StoragePoolVO targetPool = _storagePoolDao.findById(poolId);
 +            StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());
 +
 +            if (_poolHostDao.findByPoolHost(targetPool.getId(), host.getId()) == null) {
 +                throw new CloudRuntimeException(String.format("Cannot migrate the volume [%s] to the storage pool [%s] while migrating VM [%s] to target host [%s]. The host does not have access to the storage pool entered.", volume.getUuid(), targetPool.getUuid(), profile.getUuid(), host.getUuid()));
 +            }
 +            if (currentPool.getId() == targetPool.getId()) {
 +                s_logger.info(String.format("The volume [%s] is already allocated in storage pool [%s].", volume.getUuid(), targetPool.getUuid()));
 +            }
 +            volumeToPoolObjectMap.put(volume, targetPool);
 +        }
 +        return volumeToPoolObjectMap;
 +    }
 +
 +    /**
 +     * We create the default mapping of volumes and storage pools for the migration of the VM to the target host.
 +     * If the current storage pool of one of the volumes uses local storage on the source host, the volume needs to be migrated to a local storage pool on the target host.
 +     * Otherwise, we do not need to migrate, and the volume can be kept in its current storage pool.
 +     */
 +    private Map<Volume, StoragePool> getDefaultMappingOfVolumesAndStoragePoolForMigration(VirtualMachineProfile profile, Host targetHost) {
 +        Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<Volume, StoragePool>();
 +        List<VolumeVO> allVolumes = _volsDao.findUsableVolumesForInstance(profile.getId());
 +        for (VolumeVO volume : allVolumes) {
 +            StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());
 +            if (ScopeType.HOST.equals(currentPool.getScope())) {
 +                createVolumeToStoragePoolMappingIfNeeded(profile, targetHost, volumeToPoolObjectMap, volume, currentPool);
 +            } else {
 +                volumeToPoolObjectMap.put(volume, currentPool);
 +            }
 +        }
 +        return volumeToPoolObjectMap;
 +    }
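 +    // Editor's note: only volumes on HOST-scoped (local) pools are candidates for
 +    // re-mapping here; cluster-wide and zone-wide pools are carried over unchanged.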
 +
 +    /**
 +     * We will add a mapping of volume to storage pool if needed. The conditions to add a mapping are the following:
 +     * <ul>
 +     *  <li> The current storage pool where the volume is allocated can be accessed by the target host
 +     *  <li> If not storage pool is found to allocate the volume we throw an exception.
 +     * </ul>
 +     *
 +     */
 +    private void createVolumeToStoragePoolMappingIfNeeded(VirtualMachineProfile profile, Host targetHost, Map<Volume, StoragePool> volumeToPoolObjectMap, VolumeVO volume, StoragePoolVO currentPool) {
 +        List<StoragePool> poolList = getCandidateStoragePoolsToMigrateLocalVolume(profile, targetHost, volume);
 +
 +        Collections.shuffle(poolList);
 +        boolean canTargetHostAccessVolumeStoragePool = false;
 +        for (StoragePool storagePool : poolList) {
 +            if (storagePool.getId() == currentPool.getId()) {
 +                canTargetHostAccessVolumeStoragePool = true;
 +                break;
 +            }
 +        }
 +        if (CollectionUtils.isEmpty(poolList)) {
 +            throw new CloudRuntimeException(String.format("There are no storage pools available on the target host [%s] to migrate volume [%s]", targetHost.getUuid(), volume.getUuid()));
 +        }
 +        if (!canTargetHostAccessVolumeStoragePool) {
 +            volumeToPoolObjectMap.put(volume, _storagePoolDao.findByUuid(poolList.get(0).getUuid()));
 +        }
 +        if (!canTargetHostAccessVolumeStoragePool && !volumeToPoolObjectMap.containsKey(volume)) {
 +            throw new CloudRuntimeException(String.format("Cannot find a storage pool which is available for volume [%s] while migrating virtual machine [%s] to host [%s]", volume.getUuid(),
 +                    profile.getUuid(), targetHost.getUuid()));
 +        }
 +    }
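 +    // Editor's note: the candidate list is shuffled before the first accessible pool
 +    // is picked, which gives a cheap, stateless spread of migrated local volumes
 +    // across the target host's local storage pools.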
 +
 +    /**
 +     * We use {@link StoragePoolAllocator} objects to find local storage pools connected to the targetHost where we would be able to allocate the given volume.
 +     */
 +    private List<StoragePool> getCandidateStoragePoolsToMigrateLocalVolume(VirtualMachineProfile profile, Host targetHost, VolumeVO volume) {
 +        List<StoragePool> poolList = new ArrayList<>();
 +
 +        DiskOfferingVO diskOffering = _diskOfferingDao.findById(volume.getDiskOfferingId());
 +        DiskProfile diskProfile = new DiskProfile(volume, diskOffering, profile.getHypervisorType());
 +        DataCenterDeployment plan = new DataCenterDeployment(targetHost.getDataCenterId(), targetHost.getPodId(), targetHost.getClusterId(), targetHost.getId(), null, null);
 +        ExcludeList avoid = new ExcludeList();
 +
 +        StoragePoolVO volumeStoragePool = _storagePoolDao.findById(volume.getPoolId());
 +        if (volumeStoragePool.isLocal()) {
 +            diskProfile.setUseLocalStorage(true);
 +        }
 +        for (StoragePoolAllocator allocator : _storagePoolAllocators) {
 +            List<StoragePool> poolListFromAllocator = allocator.allocateToPool(diskProfile, profile, plan, avoid, StoragePoolAllocator.RETURN_UPTO_ALL);
 +            if (CollectionUtils.isEmpty(poolListFromAllocator)) {
 +                continue;
 +            }
 +            for (StoragePool pool : poolListFromAllocator) {
 +                if (pool.isLocal()) {
 +                    poolList.add(pool);
 +                }
 +            }
 +        }
 +        return poolList;
 +    }
 +
 +    private <T extends VMInstanceVO> void moveVmToMigratingState(final T vm, final Long hostId, final ItWorkVO work) throws ConcurrentOperationException {
 +        // Put the vm in migrating state.
 +        try {
 +            if (!changeState(vm, Event.MigrationRequested, hostId, work, Step.Migrating)) {
 +                s_logger.info("Migration cancelled because state has changed: " + vm);
 +                throw new ConcurrentOperationException("Migration cancelled because state has changed: " + vm);
 +            }
 +        } catch (final NoTransitionException e) {
 +            s_logger.info("Migration cancelled because " + e.getMessage());
 +            throw new ConcurrentOperationException("Migration cancelled because " + e.getMessage());
 +        }
 +    }
 +
 +    private <T extends VMInstanceVO> void moveVmOutofMigratingStateOnSuccess(final T vm, final Long hostId, final ItWorkVO work) throws ConcurrentOperationException {
 +        // Put the vm in running state.
 +        try {
 +            if (!changeState(vm, Event.OperationSucceeded, hostId, work, Step.Started)) {
 +                s_logger.error("Unable to change the state for " + vm);
 +                throw new ConcurrentOperationException("Unable to change the state for " + vm);
 +            }
 +        } catch (final NoTransitionException e) {
 +            s_logger.error("Unable to change state due to " + e.getMessage());
 +            throw new ConcurrentOperationException("Unable to change state due to " + e.getMessage());
 +        }
 +    }
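 +    // State bookkeeping summary (editor's note): Event.MigrationRequested moves the VM
 +    // into State.Migrating with the work item at Step.Migrating, and
 +    // Event.OperationSucceeded moves it back out of Migrating with the work item at
 +    // Step.Started. Both helpers translate a failed or illegal transition into a
 +    // ConcurrentOperationException for their callers.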
 +
 +    @Override
 +    public void migrateWithStorage(final String vmUuid, final long srcHostId, final long destHostId, final Map<Long, Long> volumeToPool)
 +            throws ResourceUnavailableException, ConcurrentOperationException {
 +
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if (jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +
 +            VmWorkJobVO placeHolder = null;
 +            final VirtualMachine vm = _vmDao.findByUuid(vmUuid);
 +            placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                orchestrateMigrateWithStorage(vmUuid, srcHostId, destHostId, volumeToPool);
 +            } finally {
 +                if (placeHolder != null) {
 +                    _workJobDao.expunge(placeHolder.getId());
 +                }
 +            }
 +
 +        } else {
 +            final Outcome<VirtualMachine> outcome = migrateVmWithStorageThroughJobQueue(vmUuid, srcHostId, destHostId, volumeToPool);
 +
 +            try {
 +                final VirtualMachine vm = outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution excetion", e);
 +            }
 +
 +            final Object jobException = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobException != null) {
 +                if (jobException instanceof ResourceUnavailableException) {
 +                    throw (ResourceUnavailableException)jobException;
 +                } else if (jobException instanceof ConcurrentOperationException) {
 +                    throw (ConcurrentOperationException)jobException;
 +                } else if (jobException instanceof RuntimeException) {
 +                    throw (RuntimeException)jobException;
 +                } else if (jobException instanceof Throwable) {
 +                    throw new RuntimeException("Unexpected exception", (Throwable)jobException);
 +                }
 +            }
 +        }
 +    }
 +
 +    private void orchestrateMigrateWithStorage(final String vmUuid, final long srcHostId, final long destHostId, final Map<Long, Long> volumeToPool) throws ResourceUnavailableException,
 +    ConcurrentOperationException {
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final HostVO srcHost = _hostDao.findById(srcHostId);
 +        final HostVO destHost = _hostDao.findById(destHostId);
 +        final VirtualMachineGuru vmGuru = getVmGuru(vm);
 +
 +        final DataCenterVO dc = _dcDao.findById(destHost.getDataCenterId());
 +        final HostPodVO pod = _podDao.findById(destHost.getPodId());
 +        final Cluster cluster = _clusterDao.findById(destHost.getClusterId());
 +        final DeployDestination destination = new DeployDestination(dc, pod, cluster, destHost);
 +
 +        // Create a map of which volume should go in which storage pool.
 +        final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
 +        final Map<Volume, StoragePool> volumeToPoolMap = getPoolListForVolumesForMigration(profile, destHost, volumeToPool);
 +
 +        // If none of the volumes have to be migrated, fail the call: the administrator should
 +        // invoke a plain VM migration instead of a migration with storage.
 +        if (volumeToPoolMap == null || volumeToPoolMap.isEmpty()) {
 +            throw new InvalidParameterValueException("Migration of the vm " + vm + "from host " + srcHost + " to destination host " + destHost +
 +                    " doesn't involve migrating the volumes.");
 +        }
 +
 +        AlertManager.AlertType alertType = AlertManager.AlertType.ALERT_TYPE_USERVM_MIGRATE;
 +        if (VirtualMachine.Type.DomainRouter.equals(vm.getType())) {
 +            alertType = AlertManager.AlertType.ALERT_TYPE_DOMAIN_ROUTER_MIGRATE;
 +        } else if (VirtualMachine.Type.ConsoleProxy.equals(vm.getType())) {
 +            alertType = AlertManager.AlertType.ALERT_TYPE_CONSOLE_PROXY_MIGRATE;
 +        }
 +
 +        _networkMgr.prepareNicForMigration(profile, destination);
 +        volumeMgr.prepareForMigration(profile, destination);
 +        final HypervisorGuru hvGuru = _hvGuruMgr.getGuru(vm.getHypervisorType());
 +        final VirtualMachineTO to = hvGuru.implement(profile);
 +
 +        ItWorkVO work = new ItWorkVO(UUID.randomUUID().toString(), _nodeId, State.Migrating, vm.getType(), vm.getId());
 +        work.setStep(Step.Prepare);
 +        work.setResourceType(ItWorkVO.ResourceType.Host);
 +        work.setResourceId(destHostId);
 +        work = _workDao.persist(work);
 +
 +        // Put the vm in migrating state.
 +        vm.setLastHostId(srcHostId);
 +        vm.setPodIdToDeployIn(destHost.getPodId());
 +        moveVmToMigratingState(vm, destHostId, work);
 +
 +        boolean migrated = false;
 +        try {
 +
 +            // Config drive: detach the config drive at the source host.
 +            // After a successful migration, attach the config drive at the destination host.
 +            // On migration failure the VM will be stopped, so the config ISO will be deleted.
 +
 +            Nic defaultNic = _networkModel.getDefaultNic(vm.getId());
 +
 +            List<String[]> vmData = null;
 +            if (defaultNic != null) {
 +                UserVmVO userVm = _userVmDao.findById(vm.getId());
 +                Map<String, String> details = _vmDetailsDao.listDetailsKeyPairs(vm.getId());
 +                userVm.setDetails(details);
 +
 +                Network network = _networkModel.getNetwork(defaultNic.getNetworkId());
 +                if (_networkModel.isSharedNetworkWithoutServices(network.getId())) {
 +                    final String serviceOffering = _serviceOfferingDao.findByIdIncludingRemoved(vm.getId(), vm.getServiceOfferingId()).getDisplayText();
 +                    boolean isWindows = _guestOSCategoryDao.findById(_guestOSDao.findById(vm.getGuestOSId()).getCategoryId()).getName().equalsIgnoreCase("Windows");
 +
 +                    vmData = _networkModel.generateVmData(userVm.getUserData(), serviceOffering, vm.getDataCenterId(), vm.getInstanceName(), vm.getHostName(), vm.getId(),
 +                            vm.getUuid(), defaultNic.getMacAddress(), userVm.getDetail("SSH.PublicKey"), (String) profile.getParameter(VirtualMachineProfile.Param.VmPassword), isWindows);
 +                    String vmName = vm.getInstanceName();
 +                    String configDriveIsoRootFolder = "/tmp";
 +                    String isoFile = configDriveIsoRootFolder + "/" + vmName + "/configDrive/" + vmName + ".iso";
 +                    profile.setVmData(vmData);
 +                    profile.setConfigDriveLabel(VmConfigDriveLabel.value());
 +                    profile.setConfigDriveIsoRootFolder(configDriveIsoRootFolder);
 +                    profile.setConfigDriveIsoFile(isoFile);
 +
 +                    // At source host detach the config drive iso.
 +                    AttachOrDettachConfigDriveCommand dettachCommand = new AttachOrDettachConfigDriveCommand(vm.getInstanceName(), vmData, VmConfigDriveLabel.value(), false);
 +                    try {
 +                        _agentMgr.send(srcHost.getId(), dettachCommand);
 +                        s_logger.debug("Deleted config drive ISO for  vm " + vm.getInstanceName() + " In host " + srcHost);
 +                    } catch (OperationTimedoutException e) {
 +                        s_logger.debug("TIme out occured while exeuting command AttachOrDettachConfigDrive " + e.getMessage());
 +
 +                    }
 +
 +                }
 +            }
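 +            // Editor's note: the ISO path built above follows the convention
 +            // <rootFolder>/<vmName>/configDrive/<vmName>.iso with /tmp as the root
 +            // folder; the command sent to the source host only detaches the ISO, the
 +            // file itself is removed when the VM is stopped on failure (see the
 +            // comment at the top of this try block).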
 +
 +            // Migrate the vm and its volume.
 +            volumeMgr.migrateVolumes(vm, to, srcHost, destHost, volumeToPoolMap);
 +
 +            // Put the vm back to running state.
 +            moveVmOutofMigratingStateOnSuccess(vm, destHost.getId(), work);
 +
 +            try {
 +                if (!checkVmOnHost(vm, destHostId)) {
 +                    s_logger.error("Vm not found on destination host. Unable to complete migration for " + vm);
 +                    try {
 +                        _agentMgr.send(srcHostId, new Commands(cleanup(vm.getInstanceName())), null);
 +                    } catch (final AgentUnavailableException e) {
 +                        s_logger.error("AgentUnavailableException while cleanup on source host: " + srcHostId);
 +                    }
 +                    cleanup(vmGuru, new VirtualMachineProfileImpl(vm), work, Event.AgentReportStopped, true);
 +                    throw new CloudRuntimeException("VM not found on desintation host. Unable to complete migration for " + vm);
 +                }
 +            } catch (final OperationTimedoutException e) {
 +                s_logger.warn("Error while checking the vm " + vm + " is on host " + destHost, e);
 +            }
 +
 +            migrated = true;
 +        } finally {
 +            if (!migrated) {
 +                s_logger.info("Migration was unsuccessful.  Cleaning up: " + vm);
 +                _alertMgr.sendAlert(alertType, srcHost.getDataCenterId(), srcHost.getPodId(),
 +                        "Unable to migrate vm " + vm.getInstanceName() + " from host " + srcHost.getName() + " in zone " + dc.getName() + " and pod " + dc.getName(),
 +                        "Migrate Command failed.  Please check logs.");
 +                try {
 +                    _agentMgr.send(destHostId, new Commands(cleanup(vm.getInstanceName())), null);
 +                    vm.setPodIdToDeployIn(srcHost.getPodId());
 +                    stateTransitTo(vm, Event.OperationFailed, srcHostId);
 +                } catch (final AgentUnavailableException e) {
 +                    s_logger.warn("Looks like the destination Host is unavailable for cleanup.", e);
 +                } catch (final NoTransitionException e) {
 +                    s_logger.error("Error while transitioning vm from migrating to running state.", e);
 +                }
 +            }
 +
 +            work.setStep(Step.Done);
 +            _workDao.update(work.getId(), work);
 +        }
 +    }
 +
 +    @Override
 +    public VirtualMachineTO toVmTO(final VirtualMachineProfile profile) {
 +        final HypervisorGuru hvGuru = _hvGuruMgr.getGuru(profile.getVirtualMachine().getHypervisorType());
 +        final VirtualMachineTO to = hvGuru.implement(profile);
 +        return to;
 +    }
 +
 +    protected void cancelWorkItems(final long nodeId) {
 +        final GlobalLock scanLock = GlobalLock.getInternLock("vmmgr.cancel.workitem");
 +
 +        try {
 +            if (scanLock.lock(3)) {
 +                try {
 +                    final List<ItWorkVO> works = _workDao.listWorkInProgressFor(nodeId);
 +                    for (final ItWorkVO work : works) {
 +                        s_logger.info("Handling unfinished work item: " + work);
 +                        try {
 +                            final VMInstanceVO vm = _vmDao.findById(work.getInstanceId());
 +                            if (vm != null) {
 +                                if (work.getType() == State.Starting) {
 +                                    _haMgr.scheduleRestart(vm, true);
 +                                    work.setManagementServerId(_nodeId);
 +                                    work.setStep(Step.Done);
 +                                    _workDao.update(work.getId(), work);
 +                                } else if (work.getType() == State.Stopping) {
 +                                    _haMgr.scheduleStop(vm, vm.getHostId(), WorkType.CheckStop);
 +                                    work.setManagementServerId(_nodeId);
 +                                    work.setStep(Step.Done);
 +                                    _workDao.update(work.getId(), work);
 +                                } else if (work.getType() == State.Migrating) {
 +                                    _haMgr.scheduleMigration(vm);
 +                                    work.setStep(Step.Done);
 +                                    _workDao.update(work.getId(), work);
 +                                }
 +                            }
 +                        } catch (final Exception e) {
 +                            s_logger.error("Error while handling " + work, e);
 +                        }
 +                    }
 +                } finally {
 +                    scanLock.unlock();
 +                }
 +            }
 +        } finally {
 +            scanLock.releaseRef();
 +        }
 +    }
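 +    // Editor's note: cancelWorkItems() serializes the scan through an interned
 +    // GlobalLock, so only one management server processes unfinished work items at a
 +    // time; lock(3) waits up to three seconds (assuming the GlobalLock timeout is in
 +    // seconds, as elsewhere in this codebase) and the scan is simply skipped when
 +    // the lock is held by another node.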
 +
 +    @Override
 +    public void migrateAway(final String vmUuid, final long srcHostId) throws InsufficientServerCapacityException {
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if (jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +
 +            VmWorkJobVO placeHolder = null;
 +            final VirtualMachine vm = _vmDao.findByUuid(vmUuid);
 +            placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                try {
 +                    orchestrateMigrateAway(vmUuid, srcHostId, null);
 +                } catch (final InsufficientServerCapacityException e) {
 +                    s_logger.warn("Failed to deploy vm " + vmUuid + " with original planner, sending HAPlanner");
 +                    orchestrateMigrateAway(vmUuid, srcHostId, _haMgr.getHAPlanner());
 +                }
 +            } finally {
 +                _workJobDao.expunge(placeHolder.getId());
 +            }
 +        } else {
 +            final Outcome<VirtualMachine> outcome = migrateVmAwayThroughJobQueue(vmUuid, srcHostId);
 +
 +            try {
 +                final VirtualMachine vm = outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution excetion", e);
 +            }
 +
 +            final Object jobException = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobException != null) {
 +                if (jobException instanceof InsufficientServerCapacityException) {
 +                    throw (InsufficientServerCapacityException)jobException;
 +                } else if (jobException instanceof ConcurrentOperationException) {
 +                    throw (ConcurrentOperationException)jobException;
 +                } else if (jobException instanceof RuntimeException) {
 +                    throw (RuntimeException)jobException;
 +                } else if (jobException instanceof Throwable) {
 +                    throw new RuntimeException("Unexpected exception", (Throwable)jobException);
 +                }
 +            }
 +        }
 +    }
 +
 +    private void orchestrateMigrateAway(final String vmUuid, final long srcHostId, final DeploymentPlanner planner) throws InsufficientServerCapacityException {
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +        if (vm == null) {
 +            s_logger.debug("Unable to find a VM for " + vmUuid);
 +            throw new CloudRuntimeException("Unable to find " + vmUuid);
 +        }
 +
 +        ServiceOfferingVO offeringVO = _offeringDao.findById(vm.getId(), vm.getServiceOfferingId());
 +        final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm, null, offeringVO, null, null);
 +
 +        final Long hostId = vm.getHostId();
 +        if (hostId == null) {
 +            s_logger.debug("Unable to migrate because the VM doesn't have a host id: " + vm);
 +            throw new CloudRuntimeException("Unable to migrate " + vmUuid);
 +        }
 +
 +        final Host host = _hostDao.findById(hostId);
 +        Long poolId = null;
 +        final List<VolumeVO> vols = _volsDao.findReadyRootVolumesByInstance(vm.getId());
 +        for (final VolumeVO rootVolumeOfVm : vols) {
 +            final StoragePoolVO rootDiskPool = _storagePoolDao.findById(rootVolumeOfVm.getPoolId());
 +            if (rootDiskPool != null) {
 +                poolId = rootDiskPool.getId();
 +            }
 +        }
 +
 +        final DataCenterDeployment plan = new DataCenterDeployment(host.getDataCenterId(), host.getPodId(), host.getClusterId(), null, poolId, null);
 +        final ExcludeList excludes = new ExcludeList();
 +        excludes.addHost(hostId);
 +
 +        DeployDestination dest = null;
 +        while (true) {
 +
 +            try {
 +                dest = _dpMgr.planDeployment(profile, plan, excludes, planner);
 +            } catch (final AffinityConflictException e2) {
 +                s_logger.warn("Unable to create deployment, affinity rules associted to the VM conflict", e2);
 +                throw new CloudRuntimeException("Unable to create deployment, affinity rules associted to the VM conflict");
 +            }
 +
 +            if (dest != null) {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Found destination " + dest + " for migrating to.");
 +                }
 +            } else {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Unable to find destination for migrating the vm " + profile);
 +                }
 +                throw new InsufficientServerCapacityException("Unable to find a server to migrate to.", host.getClusterId());
 +            }
 +
 +            excludes.addHost(dest.getHost().getId());
 +            try {
 +                migrate(vm, srcHostId, dest);
 +                return;
 +            } catch (final ResourceUnavailableException e) {
 +                s_logger.debug("Unable to migrate to unavailable " + dest);
 +            } catch (final ConcurrentOperationException e) {
 +                s_logger.debug("Unable to migrate VM due to: " + e.getMessage());
 +            }
 +
 +            try {
 +                advanceStop(vmUuid, true);
 +                throw new CloudRuntimeException("Unable to migrate " + vm);
 +            } catch (final ResourceUnavailableException e) {
 +                s_logger.debug("Unable to stop VM due to " + e.getMessage());
 +                throw new CloudRuntimeException("Unable to migrate " + vm);
 +            } catch (final ConcurrentOperationException e) {
 +                s_logger.debug("Unable to stop VM due to " + e.getMessage());
 +                throw new CloudRuntimeException("Unable to migrate " + vm);
 +            } catch (final OperationTimedoutException e) {
 +                s_logger.debug("Unable to stop VM due to " + e.getMessage());
 +                throw new CloudRuntimeException("Unable to migrate " + vm);
 +            }
 +        }
 +    }
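 +    // Editor's note: the while (true) loop above re-plans after every failed attempt,
 +    // adding each tried destination host to the ExcludeList. A failed migration stops
 +    // the VM via advanceStop() rather than leaving it in an undefined state, and an
 +    // InsufficientServerCapacityException from planning propagates so the caller can
 +    // retry with the HA planner.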
 +
 +    protected class CleanupTask extends ManagedContextRunnable {
 +        @Override
 +        protected void runInContext() {
 +            s_logger.trace("VM Operation Thread Running");
 +            try {
 +                _workDao.cleanup(VmOpCleanupWait.value());
 +                final Date cutDate = new Date(DateUtil.currentGMTTime().getTime() - VmOpCleanupInterval.value() * 1000);
 +                _workJobDao.expungeCompletedWorkJobs(cutDate);
 +            } catch (final Exception e) {
 +                s_logger.error("VM Operations failed due to ", e);
 +            }
 +        }
 +    }
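 +    // Editor's note: CleanupTask runs on a recurring schedule; VmOpCleanupWait bounds
 +    // how long in-flight work items may linger, and completed work jobs older than
 +    // VmOpCleanupInterval (seconds, multiplied by 1000 above) are expunged in bulk.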
 +
 +    @Override
 +    public boolean isVirtualMachineUpgradable(final VirtualMachine vm, final ServiceOffering offering) {
 +        boolean isMachineUpgradable = true;
 +        for (final HostAllocator allocator : hostAllocators) {
 +            isMachineUpgradable = allocator.isVirtualMachineUpgradable(vm, offering);
 +            if (!isMachineUpgradable) {
 +                break;
 +            }
 +        }
 +
 +        return isMachineUpgradable;
 +    }
 +
 +    @Override
 +    public void reboot(final String vmUuid, final Map<VirtualMachineProfile.Param, Object> params) throws InsufficientCapacityException, ResourceUnavailableException {
 +        try {
 +            advanceReboot(vmUuid, params);
 +        } catch (final ConcurrentOperationException e) {
 +            throw new CloudRuntimeException("Unable to reboot a VM due to concurrent operation", e);
 +        }
 +    }
 +
 +    @Override
 +    public void advanceReboot(final String vmUuid, final Map<VirtualMachineProfile.Param, Object> params)
 +            throws InsufficientCapacityException, ConcurrentOperationException, ResourceUnavailableException {
 +
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if ( jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +            VmWorkJobVO placeHolder = null;
 +            final VirtualMachine vm = _vmDao.findByUuid(vmUuid);
 +            placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                orchestrateReboot(vmUuid, params);
 +            } finally {
 +                if (placeHolder != null) {
 +                    _workJobDao.expunge(placeHolder.getId());
 +                }
 +            }
 +        } else {
 +            final Outcome<VirtualMachine> outcome = rebootVmThroughJobQueue(vmUuid, params);
 +
 +            try {
 +                final VirtualMachine vm = outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution excetion", e);
 +            }
 +
 +            final Object jobResult = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobResult != null) {
 +                if (jobResult instanceof ResourceUnavailableException) {
 +                    throw (ResourceUnavailableException)jobResult;
 +                } else if (jobResult instanceof ConcurrentOperationException) {
 +                    throw (ConcurrentOperationException)jobResult;
 +                } else if (jobResult instanceof InsufficientCapacityException) {
 +                    throw (InsufficientCapacityException)jobResult;
 +                } else if (jobResult instanceof RuntimeException) {
 +                    throw (RuntimeException)jobResult;
 +                } else if (jobResult instanceof Throwable) {
 +                    throw new RuntimeException("Unexpected exception", (Throwable)jobResult);
 +                }
 +            }
 +        }
 +    }
 +
 +    private void orchestrateReboot(final String vmUuid, final Map<VirtualMachineProfile.Param, Object> params) throws InsufficientCapacityException, ConcurrentOperationException,
 +    ResourceUnavailableException {
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +        // if there are active vm snapshots task, state change is not allowed
 +        if(_vmSnapshotMgr.hasActiveVMSnapshotTasks(vm.getId())){
 +            s_logger.error("Unable to reboot VM " + vm + " due to: " + vm.getInstanceName() + " has active VM snapshots tasks");
 +            throw new CloudRuntimeException("Unable to reboot VM " + vm + " due to: " + vm.getInstanceName() + " has active VM snapshots tasks");
 +        }
 +        final DataCenter dc = _entityMgr.findById(DataCenter.class, vm.getDataCenterId());
 +        final Host host = _hostDao.findById(vm.getHostId());
 +        if (host == null) {
 +            // Should findById throw an Exception if the host is not found?
 +            throw new CloudRuntimeException("Unable to retrieve host with id " + vm.getHostId());
 +        }
 +        final Cluster cluster = _entityMgr.findById(Cluster.class, host.getClusterId());
 +        final Pod pod = _entityMgr.findById(Pod.class, host.getPodId());
 +        final DeployDestination dest = new DeployDestination(dc, pod, cluster, host);
 +
 +        try {
 +
 +            final Commands cmds = new Commands(Command.OnError.Stop);
 +            cmds.addCommand(new RebootCommand(vm.getInstanceName(), getExecuteInSequence(vm.getHypervisorType())));
 +            _agentMgr.send(host.getId(), cmds);
 +
 +            final Answer rebootAnswer = cmds.getAnswer(RebootAnswer.class);
 +            if (rebootAnswer != null && rebootAnswer.getResult()) {
 +                return;
 +            }
 +            s_logger.info("Unable to reboot VM " + vm + " on " + dest.getHost() + " due to " + (rebootAnswer == null ? " no reboot answer" : rebootAnswer.getDetails()));
 +        } catch (final OperationTimedoutException e) {
 +            s_logger.warn("Unable to send the reboot command to host " + dest.getHost() + " for the vm " + vm + " due to operation timeout", e);
 +            throw new CloudRuntimeException("Failed to reboot the vm on host " + dest.getHost());
 +        }
 +    }
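 +    // Editor's note: Command.OnError.Stop appears, as the name suggests, to instruct
 +    // the agent to stop processing the command batch when the reboot fails; a null or
 +    // negative RebootAnswer is only logged, while a timeout is escalated to the
 +    // caller as a CloudRuntimeException.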
 +
 +    public Command cleanup(final VirtualMachine vm) {
 +        StopCommand cmd = new StopCommand(vm, getExecuteInSequence(vm.getHypervisorType()), false);
 +        cmd.setControlIp(getControlNicIpForVM(vm));
 +        return cmd;
 +    }
 +
 +    private String getControlNicIpForVM(VirtualMachine vm) {
 +        if (vm.getType() == VirtualMachine.Type.ConsoleProxy || vm.getType() == VirtualMachine.Type.SecondaryStorageVm) {
 +            NicVO nic = _nicsDao.getControlNicForVM(vm.getId());
 +            return nic.getIPv4Address();
 +        } else if (vm.getType() == VirtualMachine.Type.DomainRouter) {
 +            return vm.getPrivateIpAddress();
 +        } else {
 +            return null;
 +        }
 +    }
 +
 +    public Command cleanup(final String vmName) {
 +        VirtualMachine vm = _vmDao.findVMByInstanceName(vmName);
 +
 +        StopCommand cmd = new StopCommand(vmName, getExecuteInSequence(null), false);
 +        cmd.setControlIp(getControlNicIpForVM(vm));
 +        return cmd;
 +    }
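 +    // Editor's note: both cleanup() overloads build a StopCommand meant to clear a
 +    // stale VM from a host; the control NIC IP is attached so system VMs (console
 +    // proxy, secondary storage VM, domain router) can be reached for the shutdown.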
 +
 +    // this is XenServer specific
 +    public void syncVMMetaData(final Map<String, String> vmMetadatum) {
 +        if (vmMetadatum == null || vmMetadatum.isEmpty()) {
 +            return;
 +        }
 +        List<Pair<Pair<String, VirtualMachine.Type>, Pair<Long, String>>> vmDetails = _userVmDao.getVmsDetailByNames(vmMetadatum.keySet(), "platform");
 +        for (final Map.Entry<String, String> entry : vmMetadatum.entrySet()) {
 +            final String name = entry.getKey();
 +            final String platform = entry.getValue();
 +            if (platform == null || platform.isEmpty()) {
 +                continue;
 +            }
 +
 +            boolean found = false;
 +            for(Pair<Pair<String, VirtualMachine.Type>, Pair<Long, String>> vmDetail : vmDetails ) {
 +                Pair<String, VirtualMachine.Type> vmNameTypePair = vmDetail.first();
 +                if(vmNameTypePair.first().equals(name)) {
 +                    found = true;
 +                    if(vmNameTypePair.second() == VirtualMachine.Type.User) {
 +                        Pair<Long, String> detailPair = vmDetail.second();
 +                        String platformDetail = detailPair.second();
 +
 +                        if (platformDetail != null && platformDetail.equals(platform)) {
 +                            break;
 +                        }
 +                        updateVmMetaData(detailPair.first(), platform);
 +                    }
 +                    break;
 +                }
 +            }
 +
 +            if(!found) {
 +                VMInstanceVO vm = _vmDao.findVMByInstanceName(name);
 +                if(vm != null && vm.getType() == VirtualMachine.Type.User) {
 +                    updateVmMetaData(vm.getId(), platform);
 +                }
 +            }
 +        }
 +    }
 +
 +    // this is XenServer specific
 +    private void updateVmMetaData(Long vmId, String platform) {
 +        UserVmVO userVm = _userVmDao.findById(vmId);
 +        _userVmDao.loadDetails(userVm);
 +        if ( userVm.details.containsKey("timeoffset")) {
 +            userVm.details.remove("timeoffset");
 +        }
 +        userVm.setDetail("platform",  platform);
 +        String pvdriver = "xenserver56";
 +        if ( platform.contains("device_id")) {
 +            pvdriver = "xenserver61";
 +        }
 +        if (!userVm.details.containsKey("hypervisortoolsversion") || !userVm.details.get("hypervisortoolsversion").equals(pvdriver)) {
 +            userVm.setDetail("hypervisortoolsversion", pvdriver);
 +        }
 +        _userVmDao.saveDetails(userVm);
 +    }
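 +    // Editor's note (XenServer-specific, as flagged above): the "platform" string
 +    // reported by the host is persisted as a VM detail, and the presence of
 +    // "device_id" in it is taken as the signal that the newer xenserver61 PV driver
 +    // profile applies instead of the xenserver56 default.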
 +
 +    @Override
 +    public boolean isRecurring() {
 +        return true;
 +    }
 +
 +    @Override
 +    public boolean processAnswers(final long agentId, final long seq, final Answer[] answers) {
 +        for (final Answer answer : answers) {
 +            if ( answer instanceof ClusterVMMetaDataSyncAnswer) {
 +                final ClusterVMMetaDataSyncAnswer cvms = (ClusterVMMetaDataSyncAnswer)answer;
 +                if (!cvms.isExecuted()) {
 +                    syncVMMetaData(cvms.getVMMetaDatum());
 +                    cvms.setExecuted();
 +                }
 +            }
 +        }
 +        return true;
 +    }
 +
 +    @Override
 +    public boolean processTimeout(final long agentId, final long seq) {
 +        return true;
 +    }
 +
 +    @Override
 +    public int getTimeout() {
 +        return -1;
 +    }
 +
 +    @Override
 +    public boolean processCommands(final long agentId, final long seq, final Command[] cmds) {
 +        boolean processed = false;
 +        for (final Command cmd : cmds) {
 +            if (cmd instanceof PingRoutingCommand) {
 +                final PingRoutingCommand ping = (PingRoutingCommand)cmd;
 +                if (ping.getHostVmStateReport() != null) {
 +                    _syncMgr.processHostVmStatePingReport(agentId, ping.getHostVmStateReport());
 +                }
 +
 +                // take the chance to scan VMs that are stuck in transitional states
 +                // and are missing from the report
 +                scanStalledVMInTransitionStateOnUpHost(agentId);
 +                processed = true;
 +            }
 +        }
 +        return processed;
 +    }
 +
 +    @Override
 +    public AgentControlAnswer processControlCommand(final long agentId, final AgentControlCommand cmd) {
 +        return null;
 +    }
 +
 +    @Override
 +    public boolean processDisconnect(final long agentId, final Status state) {
 +        return true;
 +    }
 +
 +    @Override
 +    public void processHostAboutToBeRemoved(long hostId) {
 +    }
 +
 +    @Override
 +    public void processHostRemoved(long hostId, long clusterId) {
 +    }
 +
 +    @Override
 +    public void processHostAdded(long hostId) {
 +    }
 +
 +    @Override
 +    public void processConnect(final Host agent, final StartupCommand cmd, final boolean forRebalance) throws ConnectionException {
 +        if (!(cmd instanceof StartupRoutingCommand)) {
 +            return;
 +        }
 +
 +        if(s_logger.isDebugEnabled()) {
 +            s_logger.debug("Received startup command from hypervisor host. host id: " + agent.getId());
 +        }
 +
 +        _syncMgr.resetHostSyncState(agent.getId());
 +
 +        if (forRebalance) {
 +            s_logger.debug("Not processing listener " + this + " as connect happens on rebalance process");
 +            return;
 +        }
 +        final Long clusterId = agent.getClusterId();
 +        final long agentId = agent.getId();
 +
 +        if (agent.getHypervisorType() == HypervisorType.XenServer) { // only for Xen
 +            // initiate the cron job
 +            final ClusterVMMetaDataSyncCommand syncVMMetaDataCmd = new ClusterVMMetaDataSyncCommand(ClusterVMMetaDataSyncInterval.value(), clusterId);
 +            try {
 +                final long seq_no = _agentMgr.send(agentId, new Commands(syncVMMetaDataCmd), this);
 +                s_logger.debug("Cluster VM metadata sync started with jobid " + seq_no);
 +            } catch (final AgentUnavailableException e) {
 +                s_logger.fatal("The Cluster VM metadata sync process failed for cluster id " + clusterId + " with ", e);
 +            }
 +        }
 +    }
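 +    // Editor's note: on XenServer host connect (outside of a rebalance) a recurring
 +    // ClusterVMMetaDataSyncCommand is kicked off per cluster; its answers come back
 +    // through processAnswers() above and feed syncVMMetaData().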
 +
 +    protected class TransitionTask extends ManagedContextRunnable {
 +        @Override
 +        protected void runInContext() {
 +            final GlobalLock lock = GlobalLock.getInternLock("TransitionChecking");
 +            if (lock == null) {
 +                s_logger.debug("Couldn't get the global lock");
 +                return;
 +            }
 +
 +            if (!lock.lock(30)) {
 +                s_logger.debug("Couldn't lock the db");
 +                return;
 +            }
 +            try {
 +                scanStalledVMInTransitionStateOnDisconnectedHosts();
 +
 +                final List<VMInstanceVO> instances = _vmDao.findVMInTransition(new Date(DateUtil.currentGMTTime().getTime() - AgentManager.Wait.value() * 1000), State.Starting, State.Stopping);
 +                for (final VMInstanceVO instance : instances) {
 +                    final State state = instance.getState();
 +                    if (state == State.Stopping) {
 +                        _haMgr.scheduleStop(instance, instance.getHostId(), WorkType.CheckStop);
 +                    } else if (state == State.Starting) {
 +                        _haMgr.scheduleRestart(instance, true);
 +                    }
 +                }
 +            } catch (final Exception e) {
 +                s_logger.warn("Caught the following exception on transition checking", e);
 +            } finally {
 +                lock.unlock();
 +            }
 +        }
 +    }
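 +    // Editor's note: TransitionTask complements CleanupTask; under the
 +    // "TransitionChecking" global lock it reschedules HA stop/restart actions for VMs
 +    // that have sat in Starting or Stopping longer than the agent wait window.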
 +
 +    @Override
 +    public VMInstanceVO findById(final long vmId) {
 +        return _vmDao.findById(vmId);
 +    }
 +
 +    @Override
 +    public void checkIfCanUpgrade(final VirtualMachine vmInstance, final ServiceOffering newServiceOffering) {
 +        if (newServiceOffering == null) {
 +            throw new InvalidParameterValueException("Invalid parameter, newServiceOffering can't be null");
 +        }
 +
 +        // Check that the VM is stopped / running
 +        if (!(vmInstance.getState().equals(State.Stopped) || vmInstance.getState().equals(State.Running))) {
 +            s_logger.warn("Unable to upgrade virtual machine " + vmInstance.toString() + " in state " + vmInstance.getState());
 +            throw new InvalidParameterValueException("Unable to upgrade virtual machine " + vmInstance.toString() + " " + " in state " + vmInstance.getState() +
 +                    "; make sure the virtual machine is stopped/running");
 +        }
 +
 +        // Check if the service offering being upgraded to is what the VM is already running with
 +        if (!newServiceOffering.isDynamic() && vmInstance.getServiceOfferingId() == newServiceOffering.getId()) {
 +            if (s_logger.isInfoEnabled()) {
 +                s_logger.info("Not upgrading vm " + vmInstance.toString() + " since it already has the requested " + "service offering (" + newServiceOffering.getName() +
 +                        ")");
 +            }
 +
 +            throw new InvalidParameterValueException("Not upgrading vm " + vmInstance.toString() + " since it already " + "has the requested service offering (" +
 +                    newServiceOffering.getName() + ")");
 +        }
 +
 +        final ServiceOfferingVO currentServiceOffering = _offeringDao.findByIdIncludingRemoved(vmInstance.getId(), vmInstance.getServiceOfferingId());
 +
 +        // Check that the service offering being upgraded to has the same Guest IP type as the VM's current service offering
 +        // NOTE: With the new network refactoring in 2.2, we shouldn't need the check for same guest IP type anymore.
 +        /*
 +         * if (!currentServiceOffering.getGuestIpType().equals(newServiceOffering.getGuestIpType())) { String errorMsg =
 +         * "The service offering being upgraded to has a guest IP type: " + newServiceOffering.getGuestIpType(); errorMsg +=
 +         * ". Please select a service offering with the same guest IP type as the VM's current service offering (" +
 +         * currentServiceOffering.getGuestIpType() + ")."; throw new InvalidParameterValueException(errorMsg); }
 +         */
 +
 +        // Check that the service offering being upgraded to has the same storage pool preference as the VM's current service
 +        // offering
 +        if (currentServiceOffering.getUseLocalStorage() != newServiceOffering.getUseLocalStorage()) {
 +            throw new InvalidParameterValueException("Unable to upgrade virtual machine " + vmInstance.toString() +
 +                    ", cannot switch between local storage and shared storage service offerings.  Current offering " + "useLocalStorage=" +
 +                    currentServiceOffering.getUseLocalStorage() + ", target offering useLocalStorage=" + newServiceOffering.getUseLocalStorage());
 +        }
 +
 +        // if vm is a system vm, check if it is a system service offering, if yes return with error as it cannot be used for user vms
 +        if (currentServiceOffering.getSystemUse() != newServiceOffering.getSystemUse()) {
 +            throw new InvalidParameterValueException("isSystem property is different for current service offering and new service offering");
 +        }
 +
 +        // Check that there are enough resources to upgrade the service offering
 +        if (!isVirtualMachineUpgradable(vmInstance, newServiceOffering)) {
 +            throw new InvalidParameterValueException("Unable to upgrade virtual machine, not enough resources available " + "for an offering of " +
 +                    newServiceOffering.getCpu() + " cpu(s) at " + newServiceOffering.getSpeed() + " Mhz, and " + newServiceOffering.getRamSize() + " MB of memory");
 +        }
 +
 +        // Check that the service offering being upgraded to has all the tags of the current service offering.
 +        final List<String> currentTags = StringUtils.csvTagsToList(currentServiceOffering.getTags());
 +        final List<String> newTags = StringUtils.csvTagsToList(newServiceOffering.getTags());
 +        if (!newTags.containsAll(currentTags)) {
 +            throw new InvalidParameterValueException("Unable to upgrade virtual machine; the current service offering " + " should have tags as subset of " +
 +                    "the new service offering tags. Current service offering tags: " + currentTags + "; " + "new service " + "offering tags: " + newTags);
 +        }
 +    }
 +
 +    @Override
 +    public boolean upgradeVmDb(final long vmId, final long serviceOfferingId) {
 +        final VMInstanceVO vmForUpdate = _vmDao.createForUpdate();
 +        vmForUpdate.setServiceOfferingId(serviceOfferingId);
 +        final ServiceOffering newSvcOff = _entityMgr.findById(ServiceOffering.class, serviceOfferingId);
 +        vmForUpdate.setHaEnabled(newSvcOff.getOfferHA());
 +        vmForUpdate.setLimitCpuUse(newSvcOff.getLimitCpuUse());
 +        return _vmDao.update(vmId, vmForUpdate);
 +    }
 +
 +    @Override
 +    public NicProfile addVmToNetwork(final VirtualMachine vm, final Network network, final NicProfile requested)
 +            throws ConcurrentOperationException, ResourceUnavailableException, InsufficientCapacityException {
 +
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if (jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +            final VmWorkJobVO placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                return orchestrateAddVmToNetwork(vm, network, requested);
 +            } finally {
 +                if (placeHolder != null) {
 +                    _workJobDao.expunge(placeHolder.getId());
 +                }
 +            }
 +        } else {
 +            final Outcome<VirtualMachine> outcome = addVmToNetworkThroughJobQueue(vm, network, requested);
 +
 +            try {
 +                outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution exception", e);
 +            }
 +
 +            final Object jobException = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobException != null) {
 +                if (jobException instanceof ResourceUnavailableException) {
 +                    throw (ResourceUnavailableException)jobException;
 +                } else if (jobException instanceof ConcurrentOperationException) {
 +                    throw (ConcurrentOperationException)jobException;
 +                } else if (jobException instanceof InsufficientCapacityException) {
 +                    throw (InsufficientCapacityException)jobException;
 +                } else if (jobException instanceof RuntimeException) {
 +                    throw (RuntimeException)jobException;
 +                } else if (jobException instanceof Throwable) {
 +                    throw new RuntimeException("Unexpected exception", (Throwable)jobException);
 +                } else if (jobException instanceof NicProfile) {
 +                    return (NicProfile)jobException;
 +                }
 +            }
 +
 +            throw new RuntimeException("Unexpected job execution result");
 +        }
 +    }
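 +
 +    // The queued path above relies on a simple contract (described here for clarity, not
 +    // mandated by any single interface): the finished job's marshalled result object is
 +    // either the NicProfile on success or the exception to re-throw; any other value falls
 +    // through to the trailing RuntimeException.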
 +
 +    private NicProfile orchestrateAddVmToNetwork(final VirtualMachine vm, final Network network, final NicProfile requested) throws ConcurrentOperationException, ResourceUnavailableException,
 +    InsufficientCapacityException {
 +        final CallContext cctx = CallContext.current();
 +
 +        s_logger.debug("Adding vm " + vm + " to network " + network + "; requested nic profile " + requested);
 +        final VMInstanceVO vmVO = _vmDao.findById(vm.getId());
 +        final ReservationContext context = new ReservationContextImpl(null, null, cctx.getCallingUser(), cctx.getCallingAccount());
 +
 +        final VirtualMachineProfileImpl vmProfile = new VirtualMachineProfileImpl(vmVO, null, null, null, null);
 +
 +        final DataCenter dc = _entityMgr.findById(DataCenter.class, network.getDataCenterId());
 +        final Host host = _hostDao.findById(vm.getHostId());
 +        final DeployDestination dest = new DeployDestination(dc, null, null, host);
 +
 +        //check vm state
 +        if (vm.getState() == State.Running) {
 +            //1) allocate and prepare nic
 +            final NicProfile nic = _networkMgr.createNicForVm(network, requested, context, vmProfile, true);
 +
 +            //2) Convert vmProfile to vmTO
 +            final HypervisorGuru hvGuru = _hvGuruMgr.getGuru(vmProfile.getVirtualMachine().getHypervisorType());
 +            final VirtualMachineTO vmTO = hvGuru.implement(vmProfile);
 +
 +            //3) Convert nicProfile to NicTO
 +            final NicTO nicTO = toNicTO(nic, vmProfile.getVirtualMachine().getHypervisorType());
 +
 +            //4) plug the nic to the vm
 +            s_logger.debug("Plugging nic for vm " + vm + " in network " + network);
 +
 +            boolean result = false;
 +            try {
 +                result = plugNic(network, nicTO, vmTO, context, dest);
 +                if (result) {
 +                    s_logger.debug("Nic is plugged successfully for vm " + vm + " in network " + network + ". Vm  is a part of network now");
 +                    final long isDefault = nic.isDefaultNic() ? 1 : 0;
 +                    // insert nic's Id into DB as resource_name
 +                    if(VirtualMachine.Type.User.equals(vmVO.getType())) {
 +                        //Log usage event for user Vms only
 +                        UsageEventUtils.publishUsageEvent(EventTypes.EVENT_NETWORK_OFFERING_ASSIGN, vmVO.getAccountId(), vmVO.getDataCenterId(), vmVO.getId(),
 +                                Long.toString(nic.getId()), network.getNetworkOfferingId(), null, isDefault, VirtualMachine.class.getName(), vmVO.getUuid(), vm.isDisplay());
 +                    }
 +                    return nic;
 +                } else {
 +                    s_logger.warn("Failed to plug nic to the vm " + vm + " in network " + network);
 +                    return null;
 +                }
 +            } finally {
 +                if (!result) {
 +                    s_logger.debug("Removing nic " + nic + " from vm " + vmProfile.getVirtualMachine() + " as nic plug failed on the backend");
 +                    _networkMgr.removeNic(vmProfile, _nicsDao.findById(nic.getId()));
 +                }
 +            }
 +        } else if (vm.getState() == State.Stopped) {
 +            //1) allocate nic
 +            return _networkMgr.createNicForVm(network, requested, context, vmProfile, false);
 +        } else {
 +            s_logger.warn("Unable to add vm " + vm + " to network  " + network);
 +            throw new ResourceUnavailableException("Unable to add vm " + vm + " to network, is not in the right state", DataCenter.class, vm.getDataCenterId());
 +        }
 +    }
 +
 +    @Override
 +    public NicTO toNicTO(final NicProfile nic, final HypervisorType hypervisorType) {
 +        final HypervisorGuru hvGuru = _hvGuruMgr.getGuru(hypervisorType);
 +
 +        final NicTO nicTO = hvGuru.toNicTO(nic);
 +        return nicTO;
 +    }
 +
 +    @Override
 +    public boolean removeNicFromVm(final VirtualMachine vm, final Nic nic)
 +            throws ConcurrentOperationException, ResourceUnavailableException {
 +
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if (jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +            final VmWorkJobVO placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                return orchestrateRemoveNicFromVm(vm, nic);
 +            } finally {
 +                if (placeHolder != null) {
 +                    _workJobDao.expunge(placeHolder.getId());
 +                }
 +            }
 +
 +        } else {
 +            final Outcome<VirtualMachine> outcome = removeNicFromVmThroughJobQueue(vm, nic);
 +
 +            try {
 +                outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution excetion", e);
 +            }
 +
 +            final Object jobResult = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobResult != null) {
 +                if (jobResult instanceof ResourceUnavailableException) {
 +                    throw (ResourceUnavailableException)jobResult;
 +                } else if (jobResult instanceof ConcurrentOperationException) {
 +                    throw (ConcurrentOperationException)jobResult;
 +                } else if (jobResult instanceof RuntimeException) {
 +                    throw (RuntimeException)jobResult;
 +                } else if (jobResult instanceof Throwable) {
 +                    throw new RuntimeException("Unexpected exception", (Throwable)jobResult);
 +                } else if (jobResult instanceof Boolean) {
 +                    return (Boolean)jobResult;
 +                }
 +            }
 +
 +            throw new RuntimeException("Job failed with un-handled exception");
 +        }
 +    }
 +
 +    private boolean orchestrateRemoveNicFromVm(final VirtualMachine vm, final Nic nic) throws ConcurrentOperationException, ResourceUnavailableException {
 +        final CallContext cctx = CallContext.current();
 +        final VMInstanceVO vmVO = _vmDao.findById(vm.getId());
 +        final NetworkVO network = _networkDao.findById(nic.getNetworkId());
 +        final ReservationContext context = new ReservationContextImpl(null, null, cctx.getCallingUser(), cctx.getCallingAccount());
 +
 +        final VirtualMachineProfileImpl vmProfile = new VirtualMachineProfileImpl(vmVO, null, null, null, null);
 +
 +        final DataCenter dc = _entityMgr.findById(DataCenter.class, network.getDataCenterId());
 +        final Host host = _hostDao.findById(vm.getHostId());
 +        final DeployDestination dest = new DeployDestination(dc, null, null, host);
 +        final HypervisorGuru hvGuru = _hvGuruMgr.getGuru(vmProfile.getVirtualMachine().getHypervisorType());
 +        final VirtualMachineTO vmTO = hvGuru.implement(vmProfile);
 +
 +        final NicProfile nicProfile =
 +                new NicProfile(nic, network, nic.getBroadcastUri(), nic.getIsolationUri(), _networkModel.getNetworkRate(network.getId(), vm.getId()),
 +                        _networkModel.isSecurityGroupSupportedInNetwork(network), _networkModel.getNetworkTag(vmProfile.getVirtualMachine().getHypervisorType(), network));
 +
 +        //1) Unplug the nic
 +        if (vm.getState() == State.Running) {
 +            final NicTO nicTO = toNicTO(nicProfile, vmProfile.getVirtualMachine().getHypervisorType());
 +            s_logger.debug("Un-plugging nic " + nic + " for vm " + vm + " from network " + network);
 +            final boolean result = unplugNic(network, nicTO, vmTO, context, dest);
 +            if (result) {
 +                s_logger.debug("Nic is unplugged successfully for vm " + vm + " in network " + network);
 +                final long isDefault = nic.isDefaultNic() ? 1 : 0;
 +                UsageEventUtils.publishUsageEvent(EventTypes.EVENT_NETWORK_OFFERING_REMOVE, vm.getAccountId(), vm.getDataCenterId(), vm.getId(),
 +                        Long.toString(nic.getId()), network.getNetworkOfferingId(), null, isDefault, VirtualMachine.class.getName(), vm.getUuid(), vm.isDisplay());
 +            } else {
 +                s_logger.warn("Failed to unplug nic for the vm " + vm + " from network " + network);
 +                return false;
 +            }
 +        } else if (vm.getState() != State.Stopped) {
 +            s_logger.warn("Unable to remove vm " + vm + " from network  " + network);
 +            throw new ResourceUnavailableException("Unable to remove vm " + vm + " from network, is not in the right state", DataCenter.class, vm.getDataCenterId());
 +        }
 +
 +        //2) Release the nic
 +        _networkMgr.releaseNic(vmProfile, nic);
 +        s_logger.debug("Successfully released nic " + nic + "for vm " + vm);
 +
 +        //3) Remove the nic
 +        _networkMgr.removeNic(vmProfile, nic);
 +        _nicsDao.expunge(nic.getId());
 +        return true;
 +    }
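 +
 +    // The three-step ordering above is deliberate: the nic is unplugged on the hypervisor
 +    // first, then released back to the network, and only then removed and expunged from the
 +    // database, so a failed unplug leaves the nic record intact for a retry.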
 +
 +    @Override
 +    @DB
 +    public boolean removeVmFromNetwork(final VirtualMachine vm, final Network network, final URI broadcastUri) throws ConcurrentOperationException, ResourceUnavailableException {
 +        // TODO will serialize on the VM object later to resolve operation conflicts
 +        return orchestrateRemoveVmFromNetwork(vm, network, broadcastUri);
 +    }
 +
 +    @DB
 +    private boolean orchestrateRemoveVmFromNetwork(final VirtualMachine vm, final Network network, final URI broadcastUri) throws ConcurrentOperationException, ResourceUnavailableException {
 +        final CallContext cctx = CallContext.current();
 +        final VMInstanceVO vmVO = _vmDao.findById(vm.getId());
 +        final ReservationContext context = new ReservationContextImpl(null, null, cctx.getCallingUser(), cctx.getCallingAccount());
 +
 +        final VirtualMachineProfileImpl vmProfile = new VirtualMachineProfileImpl(vmVO, null, null, null, null);
 +
 +        final DataCenter dc = _entityMgr.findById(DataCenter.class, network.getDataCenterId());
 +        final Host host = _hostDao.findById(vm.getHostId());
 +        final DeployDestination dest = new DeployDestination(dc, null, null, host);
 +        final HypervisorGuru hvGuru = _hvGuruMgr.getGuru(vmProfile.getVirtualMachine().getHypervisorType());
 +        final VirtualMachineTO vmTO = hvGuru.implement(vmProfile);
 +
 +        Nic nic = null;
 +        if (broadcastUri != null) {
 +            nic = _nicsDao.findByNetworkIdInstanceIdAndBroadcastUri(network.getId(), vm.getId(), broadcastUri.toString());
 +        } else {
 +            nic = _networkModel.getNicInNetwork(vm.getId(), network.getId());
 +        }
 +
 +        if (nic == null) {
 +            s_logger.warn("Could not get a nic with " + network);
 +            return false;
 +        }
 +
 +        // don't delete default NIC on a user VM
 +        if (nic.isDefaultNic() && vm.getType() == VirtualMachine.Type.User) {
 +            s_logger.warn("Failed to remove nic from " + vm + " in " + network + ", nic is default.");
 +            throw new CloudRuntimeException("Failed to remove nic from " + vm + " in " + network + ", nic is default.");
 +        }
 +
 +        // Lock on the nic is needed here; acquireInLockTable returns null when the lock
 +        // cannot be obtained, which is disambiguated below (nic already gone vs. contention)
 +        final Nic lock = _nicsDao.acquireInLockTable(nic.getId());
 +        if (lock == null) {
 +            //check if nic is still there. Return if it was released already
 +            if (_nicsDao.findById(nic.getId()) == null) {
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Not need to remove the vm " + vm + " from network " + network + " as the vm doesn't have nic in this network");
 +                }
 +                return true;
 +            }
 +            throw new ConcurrentOperationException("Unable to lock nic " + nic.getId());
 +        }
 +
 +        if (s_logger.isDebugEnabled()) {
 +            s_logger.debug("Lock is acquired for nic id " + lock.getId() + " as a part of remove vm " + vm + " from network " + network);
 +        }
 +
 +        try {
 +            final NicProfile nicProfile =
 +                    new NicProfile(nic, network, nic.getBroadcastUri(), nic.getIsolationUri(), _networkModel.getNetworkRate(network.getId(), vm.getId()),
 +                            _networkModel.isSecurityGroupSupportedInNetwork(network), _networkModel.getNetworkTag(vmProfile.getVirtualMachine().getHypervisorType(), network));
 +
 +            //1) Unplug the nic
 +            if (vm.getState() == State.Running) {
 +                final NicTO nicTO = toNicTO(nicProfile, vmProfile.getVirtualMachine().getHypervisorType());
 +                s_logger.debug("Un-plugging nic for vm " + vm + " from network " + network);
 +                final boolean result = unplugNic(network, nicTO, vmTO, context, dest);
 +                if (result) {
 +                    s_logger.debug("Nic is unplugged successfully for vm " + vm + " in network " + network);
 +                } else {
 +                    s_logger.warn("Failed to unplug nic for the vm " + vm + " from network " + network);
 +                    return false;
 +                }
 +            } else if (vm.getState() != State.Stopped) {
 +                s_logger.warn("Unable to remove vm " + vm + " from network  " + network);
 +                throw new ResourceUnavailableException("Unable to remove vm " + vm + " from network, is not in the right state", DataCenter.class, vm.getDataCenterId());
 +            }
 +
 +            //2) Release the nic
 +            _networkMgr.releaseNic(vmProfile, nic);
 +            s_logger.debug("Successfully released nic " + nic + "for vm " + vm);
 +
 +            //3) Remove the nic
 +            _networkMgr.removeNic(vmProfile, nic);
 +            return true;
 +        } finally {
 +            if (lock != null) {
 +                _nicsDao.releaseFromLockTable(lock.getId());
 +                if (s_logger.isDebugEnabled()) {
 +                    s_logger.debug("Lock is released for nic id " + lock.getId() + " as a part of remove vm " + vm + " from network " + network);
 +                }
 +            }
 +        }
 +    }
 +
 +    @Override
 +    public void findHostAndMigrate(final String vmUuid, final Long newSvcOfferingId, final ExcludeList excludes) throws InsufficientCapacityException, ConcurrentOperationException,
 +    ResourceUnavailableException {
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +        if (vm == null) {
 +            throw new CloudRuntimeException("Unable to find " + vmUuid);
 +        }
 +
 +        final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
 +
 +        final Long srcHostId = vm.getHostId();
 +        final Long oldSvcOfferingId = vm.getServiceOfferingId();
 +        if (srcHostId == null) {
 +            throw new CloudRuntimeException("Unable to scale the vm because it doesn't have a host id");
 +        }
 +        final Host host = _hostDao.findById(srcHostId);
 +        final DataCenterDeployment plan = new DataCenterDeployment(host.getDataCenterId(), host.getPodId(), host.getClusterId(), null, null, null);
 +        excludes.addHost(vm.getHostId());
 +        vm.setServiceOfferingId(newSvcOfferingId); // Need to find the destination host based on new svc offering
 +
 +        DeployDestination dest = null;
 +
 +        try {
 +            dest = _dpMgr.planDeployment(profile, plan, excludes, null);
 +        } catch (final AffinityConflictException e2) {
 +            s_logger.warn("Unable to create deployment, affinity rules associted to the VM conflict", e2);
 +            throw new CloudRuntimeException("Unable to create deployment, affinity rules associted to the VM conflict");
 +        }
 +
 +        if (dest == null) {
 +            throw new InsufficientServerCapacityException("Unable to find a server to scale the vm to.", host.getClusterId());
 +        }
 +
 +        if (s_logger.isDebugEnabled()) {
 +            s_logger.debug("Found " + dest + " for scaling the vm to.");
 +        }
 +
 +        excludes.addHost(dest.getHost().getId());
 +        try {
 +            migrateForScale(vm.getUuid(), srcHostId, dest, oldSvcOfferingId);
 +        } catch (final ResourceUnavailableException e) {
 +            s_logger.debug("Unable to migrate to unavailable " + dest);
 +            throw e;
 +        } catch (final ConcurrentOperationException e) {
 +            s_logger.debug("Unable to migrate VM due to: " + e.getMessage());
 +            throw e;
 +        }
 +    }
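 +
 +    // Note that the exclude list accumulates both the VM's current host and each chosen
 +    // destination, so repeated invocations widen the search rather than retrying the same
 +    // host after a failed scale-migration.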
 +
 +    @Override
 +    public void migrateForScale(final String vmUuid, final long srcHostId, final DeployDestination dest, final Long oldSvcOfferingId)
 +            throws ResourceUnavailableException, ConcurrentOperationException {
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if (jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +            final VirtualMachine vm = _vmDao.findByUuid(vmUuid);
 +            final VmWorkJobVO placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                orchestrateMigrateForScale(vmUuid, srcHostId, dest, oldSvcOfferingId);
 +            } finally {
 +                if (placeHolder != null) {
 +                    _workJobDao.expunge(placeHolder.getId());
 +                }
 +            }
 +        } else {
 +            final Outcome<VirtualMachine> outcome = migrateVmForScaleThroughJobQueue(vmUuid, srcHostId, dest, oldSvcOfferingId);
 +
 +            try {
 +                outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution excetion", e);
 +            }
 +
 +            final Object jobResult = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobResult != null) {
 +                if (jobResult instanceof ResourceUnavailableException) {
 +                    throw (ResourceUnavailableException)jobResult;
 +                } else if (jobResult instanceof ConcurrentOperationException) {
 +                    throw (ConcurrentOperationException)jobResult;
 +                } else if (jobResult instanceof RuntimeException) {
 +                    throw (RuntimeException)jobResult;
 +                } else if (jobResult instanceof Throwable) {
 +                    throw new RuntimeException("Unexpected exception", (Throwable)jobResult);
 +                }
 +            }
 +        }
 +    }
 +
 +    private void orchestrateMigrateForScale(final String vmUuid, final long srcHostId, final DeployDestination dest, final Long oldSvcOfferingId)
 +            throws ResourceUnavailableException, ConcurrentOperationException {
 +
 +        VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +        s_logger.info("Migrating " + vm + " to " + dest);
 +
 +        final long dstHostId = dest.getHost().getId();
 +        final Host fromHost = _hostDao.findById(srcHostId);
 +        if (fromHost == null) {
 +            s_logger.info("Unable to find the host to migrate from: " + srcHostId);
 +            throw new CloudRuntimeException("Unable to find the host to migrate from: " + srcHostId);
 +        }
 +
 +        if (fromHost.getClusterId().longValue() != dest.getCluster().getId()) {
 +            s_logger.info("Source and destination host are not in same cluster, unable to migrate to host: " + dstHostId);
 +            throw new CloudRuntimeException("Source and destination host are not in same cluster, unable to migrate to host: " + dest.getHost().getId());
 +        }
 +
 +        final VirtualMachineGuru vmGuru = getVmGuru(vm);
 +
 +        final long vmId = vm.getId();
 +        vm = _vmDao.findByUuid(vmUuid);
 +        if (vm == null) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("Unable to find the vm " + vm);
 +            }
 +            throw new CloudRuntimeException("Unable to find a virtual machine with id " + vmId);
 +        }
 +
 +        if (vm.getState() != State.Running) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("VM is not Running, unable to migrate the vm " + vm);
 +            }
 +            throw new CloudRuntimeException("VM is not Running, unable to migrate the vm currently " + vm + " , current state: " + vm.getState().toString());
 +        }
 +
 +        AlertManager.AlertType alertType = AlertManager.AlertType.ALERT_TYPE_USERVM_MIGRATE;
 +        if (VirtualMachine.Type.DomainRouter.equals(vm.getType())) {
 +            alertType = AlertManager.AlertType.ALERT_TYPE_DOMAIN_ROUTER_MIGRATE;
 +        } else if (VirtualMachine.Type.ConsoleProxy.equals(vm.getType())) {
 +            alertType = AlertManager.AlertType.ALERT_TYPE_CONSOLE_PROXY_MIGRATE;
 +        }
 +
 +        final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
 +        _networkMgr.prepareNicForMigration(profile, dest);
 +
 +        volumeMgr.prepareForMigration(profile, dest);
 +
 +        final VirtualMachineTO to = toVmTO(profile);
 +        final PrepareForMigrationCommand pfmc = new PrepareForMigrationCommand(to);
 +
 +        ItWorkVO work = new ItWorkVO(UUID.randomUUID().toString(), _nodeId, State.Migrating, vm.getType(), vm.getId());
 +        work.setStep(Step.Prepare);
 +        work.setResourceType(ItWorkVO.ResourceType.Host);
 +        work.setResourceId(dstHostId);
 +        work = _workDao.persist(work);
 +
 +        Answer pfma = null;
 +        try {
 +            pfma = _agentMgr.send(dstHostId, pfmc);
 +            if (pfma == null || !pfma.getResult()) {
 +                final String details = pfma != null ? pfma.getDetails() : "null answer returned";
 +                final String msg = "Unable to prepare for migration due to " + details;
 +                pfma = null;
 +                throw new AgentUnavailableException(msg, dstHostId);
 +            }
 +        } catch (final OperationTimedoutException e1) {
 +            throw new AgentUnavailableException("Operation timed out", dstHostId);
 +        } finally {
 +            if (pfma == null) {
 +                work.setStep(Step.Done);
 +                _workDao.update(work.getId(), work);
 +            }
 +        }
 +
 +        vm.setLastHostId(srcHostId);
 +        try {
 +            if (vm == null || vm.getHostId() == null || vm.getHostId() != srcHostId || !changeState(vm, Event.MigrationRequested, dstHostId, work, Step.Migrating)) {
 +                s_logger.info("Migration cancelled because state has changed: " + vm);
 +                throw new ConcurrentOperationException("Migration cancelled because state has changed: " + vm);
 +            }
 +        } catch (final NoTransitionException e1) {
 +            s_logger.info("Migration cancelled because " + e1.getMessage());
 +            throw new ConcurrentOperationException("Migration cancelled because " + e1.getMessage());
 +        }
 +
 +        boolean migrated = false;
 +        try {
 +            final boolean isWindows = _guestOsCategoryDao.findById(_guestOsDao.findById(vm.getGuestOSId()).getCategoryId()).getName().equalsIgnoreCase("Windows");
 +            final MigrateCommand mc = new MigrateCommand(vm.getInstanceName(), dest.getHost().getPrivateIpAddress(), isWindows, to, getExecuteInSequence(vm.getHypervisorType()));
 +
 +            String autoConvergence = _configDao.getValue(Config.KvmAutoConvergence.toString());
 +            boolean kvmAutoConvergence = Boolean.parseBoolean(autoConvergence);
 +
 +            mc.setAutoConvergence(kvmAutoConvergence);
 +
 +            mc.setHostGuid(dest.getHost().getGuid());
 +
 +            try {
 +                final Answer ma = _agentMgr.send(vm.getLastHostId(), mc);
 +                if (ma == null || !ma.getResult()) {
 +                    final String details = ma != null ? ma.getDetails() : "null answer returned";
 +                    final String msg = "Unable to migrate due to " + details;
 +                    s_logger.error(msg);
 +                    throw new CloudRuntimeException(msg);
 +                }
 +            } catch (final OperationTimedoutException e) {
 +                if (e.isActive()) {
 +                    s_logger.warn("Active migration command so scheduling a restart for " + vm);
 +                    _haMgr.scheduleRestart(vm, true);
 +                }
 +                throw new AgentUnavailableException("Operation timed out on migrating " + vm, dstHostId);
 +            }
 +
 +            try {
 +                final long newServiceOfferingId = vm.getServiceOfferingId();
 +                vm.setServiceOfferingId(oldSvcOfferingId); // release capacity for the old service offering only
 +                if (!changeState(vm, VirtualMachine.Event.OperationSucceeded, dstHostId, work, Step.Started)) {
 +                    throw new ConcurrentOperationException("Unable to change the state for " + vm);
 +                }
 +                vm.setServiceOfferingId(newServiceOfferingId);
 +            } catch (final NoTransitionException e1) {
 +                throw new ConcurrentOperationException("Unable to change state due to " + e1.getMessage());
 +            }
 +
 +            try {
 +                if (!checkVmOnHost(vm, dstHostId)) {
 +                    s_logger.error("Unable to complete migration for " + vm);
 +                    try {
 +                        _agentMgr.send(srcHostId, new Commands(cleanup(vm.getInstanceName())), null);
 +                    } catch (final AgentUnavailableException e) {
 +                        s_logger.error("AgentUnavailableException while cleanup on source host: " + srcHostId);
 +                    }
 +                    cleanup(vmGuru, new VirtualMachineProfileImpl(vm), work, Event.AgentReportStopped, true);
 +                    throw new CloudRuntimeException("Unable to complete migration for " + vm);
 +                }
 +            } catch (final OperationTimedoutException e) {
 +                s_logger.debug("Error while checking the vm " + vm + " on host " + dstHostId, e);
 +            }
 +
 +            migrated = true;
 +        } finally {
 +            if (!migrated) {
 +                s_logger.info("Migration was unsuccessful.  Cleaning up: " + vm);
 +
 +                _alertMgr.sendAlert(alertType, fromHost.getDataCenterId(), fromHost.getPodId(),
 +                        "Unable to migrate vm " + vm.getInstanceName() + " from host " + fromHost.getName() + " in zone " + dest.getDataCenter().getName() + " and pod " +
 +                                dest.getPod().getName(), "Migrate Command failed.  Please check logs.");
 +                try {
 +                    _agentMgr.send(dstHostId, new Commands(cleanup(vm.getInstanceName())), null);
 +                } catch (final AgentUnavailableException ae) {
 +                    s_logger.info("Looks like the destination Host is unavailable for cleanup");
 +                }
 +
 +                try {
 +                    stateTransitTo(vm, Event.OperationFailed, srcHostId);
 +                } catch (final NoTransitionException e) {
 +                    s_logger.warn(e.getMessage());
 +                }
 +            }
 +
 +            work.setStep(Step.Done);
 +            _workDao.update(work.getId(), work);
 +        }
 +    }
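 +
 +    // A condensed view of the happy path above (summary only): PrepareForMigrationCommand to
 +    // the destination -> state transition to Migrating -> MigrateCommand sent via the source
 +    // (last host) -> state transition to Started, releasing capacity for the old offering ->
 +    // checkVmOnHost against the destination; any failure triggers cleanup commands and an
 +    // operator alert in the finally block.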
 +
 +    @Override
 +    public boolean replugNic(final Network network, final NicTO nic, final VirtualMachineTO vm, final ReservationContext context, final DeployDestination dest) throws ConcurrentOperationException,
 +    ResourceUnavailableException, InsufficientCapacityException {
 +        boolean result = true;
 +
 +        final VMInstanceVO router = _vmDao.findById(vm.getId());
 +        if (router.getState() == State.Running) {
 +            try {
 +                final ReplugNicCommand replugNicCmd = new ReplugNicCommand(nic, vm.getName(), vm.getType(), vm.getDetails());
 +                final Commands cmds = new Commands(Command.OnError.Stop);
 +                cmds.addCommand("replugnic", replugNicCmd);
 +                _agentMgr.send(dest.getHost().getId(), cmds);
 +                final ReplugNicAnswer replugNicAnswer = cmds.getAnswer(ReplugNicAnswer.class);
 +                if (replugNicAnswer == null || !replugNicAnswer.getResult()) {
 +                    s_logger.warn("Unable to replug nic for vm " + vm.getName());
 +                    result = false;
 +                }
 +            } catch (final OperationTimedoutException e) {
 +                throw new AgentUnavailableException("Unable to plug nic for router " + vm.getName() + " in network " + network, dest.getHost().getId(), e);
 +            }
 +        } else {
 +            s_logger.warn("Unable to apply ReplugNic, vm " + router + " is not in the right state " + router.getState());
 +
 +            throw new ResourceUnavailableException("Unable to apply ReplugNic on the backend," + " vm " + vm + " is not in the right state", DataCenter.class,
 +                    router.getDataCenterId());
 +        }
 +
 +        return result;
 +    }
 +
 +    public boolean plugNic(final Network network, final NicTO nic, final VirtualMachineTO vm, final ReservationContext context, final DeployDestination dest) throws ConcurrentOperationException,
 +    ResourceUnavailableException, InsufficientCapacityException {
 +        boolean result = true;
 +
 +        final VMInstanceVO router = _vmDao.findById(vm.getId());
 +        if (router.getState() == State.Running) {
 +            try {
 +                final PlugNicCommand plugNicCmd = new PlugNicCommand(nic, vm.getName(), vm.getType(), vm.getDetails());
 +                final Commands cmds = new Commands(Command.OnError.Stop);
 +                cmds.addCommand("plugnic", plugNicCmd);
 +                _agentMgr.send(dest.getHost().getId(), cmds);
 +                final PlugNicAnswer plugNicAnswer = cmds.getAnswer(PlugNicAnswer.class);
 +                if (plugNicAnswer == null || !plugNicAnswer.getResult()) {
 +                    s_logger.warn("Unable to plug nic for vm " + vm.getName());
 +                    result = false;
 +                }
 +            } catch (final OperationTimedoutException e) {
 +                throw new AgentUnavailableException("Unable to plug nic for router " + vm.getName() + " in network " + network, dest.getHost().getId(), e);
 +            }
 +        } else {
 +            s_logger.warn("Unable to apply PlugNic, vm " + router + " is not in the right state " + router.getState());
 +
 +            throw new ResourceUnavailableException("Unable to apply PlugNic on the backend," + " vm " + vm + " is not in the right state", DataCenter.class,
 +                    router.getDataCenterId());
 +        }
 +
 +        return result;
 +    }
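 +
 +    // A minimal sketch of the Commands/Answer pattern used above (hostId is illustrative;
 +    // the rest matches this file): commands are added under a string key but the answer is
 +    // fetched back by type, and Command.OnError.Stop tells the agent to stop processing the
 +    // remaining commands in the bundle on the first failure.
 +    /*
 +     * final Commands cmds = new Commands(Command.OnError.Stop);
 +     * cmds.addCommand("plugnic", new PlugNicCommand(nic, vm.getName(), vm.getType(), vm.getDetails()));
 +     * _agentMgr.send(hostId, cmds);
 +     * final PlugNicAnswer answer = cmds.getAnswer(PlugNicAnswer.class); // fetched by type, not key
 +     */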
 +
 +    public boolean unplugNic(final Network network, final NicTO nic, final VirtualMachineTO vm, final ReservationContext context, final DeployDestination dest) throws ConcurrentOperationException,
 +    ResourceUnavailableException {
 +
 +        boolean result = true;
 +        final VMInstanceVO router = _vmDao.findById(vm.getId());
 +
 +        if (router.getState() == State.Running) {
 +            // collect vm network statistics before unplugging a nic
 +            UserVmVO userVm = _userVmDao.findById(vm.getId());
 +            if (userVm != null && userVm.getType() == VirtualMachine.Type.User) {
 +                _userVmService.collectVmNetworkStatistics(userVm);
 +            }
 +            try {
 +                final Commands cmds = new Commands(Command.OnError.Stop);
 +                final UnPlugNicCommand unplugNicCmd = new UnPlugNicCommand(nic, vm.getName());
 +                cmds.addCommand("unplugnic", unplugNicCmd);
 +                _agentMgr.send(dest.getHost().getId(), cmds);
 +
 +                final UnPlugNicAnswer unplugNicAnswer = cmds.getAnswer(UnPlugNicAnswer.class);
 +                if (unplugNicAnswer == null || !unplugNicAnswer.getResult()) {
 +                    s_logger.warn("Unable to unplug nic from router " + router);
 +                    result = false;
 +                }
 +            } catch (final OperationTimedoutException e) {
 +                throw new AgentUnavailableException("Unable to unplug nic from rotuer " + router + " from network " + network, dest.getHost().getId(), e);
 +            }
 +        } else if (router.getState() == State.Stopped || router.getState() == State.Stopping) {
 +            s_logger.debug("Vm " + router.getInstanceName() + " is in " + router.getState() + ", so not sending unplug nic command to the backend");
 +        } else {
 +            s_logger.warn("Unable to apply unplug nic, Vm " + router + " is not in the right state " + router.getState());
 +
 +            throw new ResourceUnavailableException("Unable to apply unplug nic on the backend," + " vm " + router + " is not in the right state", DataCenter.class,
 +                    router.getDataCenterId());
 +        }
 +
 +        return result;
 +    }
 +
 +    @Override
 +    public VMInstanceVO reConfigureVm(final String vmUuid, final ServiceOffering oldServiceOffering,
 +            final boolean reconfiguringOnExistingHost)
 +                    throws ResourceUnavailableException, InsufficientServerCapacityException, ConcurrentOperationException {
 +
 +        final AsyncJobExecutionContext jobContext = AsyncJobExecutionContext.getCurrentExecutionContext();
 +        if (jobContext.isJobDispatchedBy(VmWorkConstants.VM_WORK_JOB_DISPATCHER)) {
 +            // avoid re-entrance
 +            final VirtualMachine vm = _vmDao.findByUuid(vmUuid);
 +            final VmWorkJobVO placeHolder = createPlaceHolderWork(vm.getId());
 +            try {
 +                return orchestrateReConfigureVm(vmUuid, oldServiceOffering, reconfiguringOnExistingHost);
 +            } finally {
 +                if (placeHolder != null) {
 +                    _workJobDao.expunge(placeHolder.getId());
 +                }
 +            }
 +        } else {
 +            final Outcome<VirtualMachine> outcome = reconfigureVmThroughJobQueue(vmUuid, oldServiceOffering, reconfiguringOnExistingHost);
 +
 +            VirtualMachine vm = null;
 +            try {
 +                vm = outcome.get();
 +            } catch (final InterruptedException e) {
 +                throw new RuntimeException("Operation is interrupted", e);
 +            } catch (final java.util.concurrent.ExecutionException e) {
 +                throw new RuntimeException("Execution excetion", e);
 +            }
 +
 +            final Object jobResult = _jobMgr.unmarshallResultObject(outcome.getJob());
 +            if (jobResult != null) {
 +                if (jobResult instanceof ResourceUnavailableException) {
 +                    throw (ResourceUnavailableException)jobResult;
 +                } else if (jobResult instanceof ConcurrentOperationException) {
 +                    throw (ConcurrentOperationException)jobResult;
 +                } else if (jobResult instanceof InsufficientServerCapacityException) {
 +                    throw (InsufficientServerCapacityException)jobResult;
 +                } else if (jobResult instanceof Throwable) {
 +                    s_logger.error("Unhandled exception", (Throwable)jobResult);
 +                    throw new RuntimeException("Unhandled exception", (Throwable)jobResult);
 +                }
 +            }
 +
 +            return (VMInstanceVO)vm;
 +        }
 +    }
 +
 +    private VMInstanceVO orchestrateReConfigureVm(final String vmUuid, final ServiceOffering oldServiceOffering, final boolean reconfiguringOnExistingHost) throws ResourceUnavailableException,
 +    ConcurrentOperationException {
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final long newServiceofferingId = vm.getServiceOfferingId();
 +        final ServiceOffering newServiceOffering = _offeringDao.findById(vm.getId(), newServiceofferingId);
 +        final HostVO hostVo = _hostDao.findById(vm.getHostId());
 +
 +        final Float memoryOvercommitRatio = CapacityManager.MemOverprovisioningFactor.valueIn(hostVo.getClusterId());
 +        final Float cpuOvercommitRatio = CapacityManager.CpuOverprovisioningFactor.valueIn(hostVo.getClusterId());
 +        final long minMemory = (long)(newServiceOffering.getRamSize() / memoryOvercommitRatio);
 +        final ScaleVmCommand reconfigureCmd =
 +                new ScaleVmCommand(vm.getInstanceName(), newServiceOffering.getCpu(), (int)(newServiceOffering.getSpeed() / cpuOvercommitRatio),
 +                        newServiceOffering.getSpeed(), minMemory * 1024L * 1024L, newServiceOffering.getRamSize() * 1024L * 1024L, newServiceOffering.getLimitCpuUse());
 +
 +        final Long dstHostId = vm.getHostId();
 +        if(vm.getHypervisorType().equals(HypervisorType.VMware)) {
 +            final HypervisorGuru hvGuru = _hvGuruMgr.getGuru(vm.getHypervisorType());
 +            Map<String, String> details = null;
 +            details = hvGuru.getClusterSettings(vm.getId());
 +            reconfigureCmd.getVirtualMachine().setDetails(details);
 +        }
 +
 +        final ItWorkVO work = new ItWorkVO(UUID.randomUUID().toString(), _nodeId, State.Running, vm.getType(), vm.getId());
 +
 +        work.setStep(Step.Prepare);
 +        work.setResourceType(ItWorkVO.ResourceType.Host);
 +        work.setResourceId(vm.getHostId());
 +        _workDao.persist(work);
 +        boolean success = false;
 +        try {
 +            if (reconfiguringOnExistingHost) {
 +                vm.setServiceOfferingId(oldServiceOffering.getId());
 +                _capacityMgr.releaseVmCapacity(vm, false, false, vm.getHostId()); //release the old capacity
 +                vm.setServiceOfferingId(newServiceofferingId);
 +                _capacityMgr.allocateVmCapacity(vm, false); // lock the new capacity
 +            }
 +
 +            final Answer reconfigureAnswer = _agentMgr.send(vm.getHostId(), reconfigureCmd);
 +            if (reconfigureAnswer == null || !reconfigureAnswer.getResult()) {
 +                s_logger.error("Unable to scale vm due to " + (reconfigureAnswer == null ? "" : reconfigureAnswer.getDetails()));
 +                throw new CloudRuntimeException("Unable to scale vm due to " + (reconfigureAnswer == null ? "" : reconfigureAnswer.getDetails()));
 +            }
 +
 +            success = true;
 +        } catch (final OperationTimedoutException e) {
 +            throw new AgentUnavailableException("Operation timed out on reconfiguring " + vm, dstHostId);
 +        } catch (final AgentUnavailableException e) {
 +            throw e;
 +        } finally {
 +            if (!success) {
 +                _capacityMgr.releaseVmCapacity(vm, false, false, vm.getHostId()); // release the new capacity
 +                vm.setServiceOfferingId(oldServiceOffering.getId());
 +                _capacityMgr.allocateVmCapacity(vm, false); // allocate the old capacity
 +            }
 +        }
 +
 +        return vm;
 +
 +    }
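 +
 +    // Worked example for the overcommit math above (illustrative numbers): a 1024 MB,
 +    // 2000 MHz offering on a cluster with memory and CPU overcommit ratios of 2.0 yields
 +    // minMemory = 1024 / 2.0 = 512 MB (sent as 512 * 1024 * 1024 bytes) and a minimum
 +    // CPU speed of 2000 / 2.0 = 1000 MHz.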
 +
 +    @Override
 +    public String getConfigComponentName() {
 +        return VirtualMachineManager.class.getSimpleName();
 +    }
 +
 +    @Override
 +    public ConfigKey<?>[] getConfigKeys() {
 +        return new ConfigKey<?>[] {ClusterDeltaSyncInterval, StartRetry, VmDestroyForcestop, VmOpCancelInterval, VmOpCleanupInterval, VmOpCleanupWait,
 +                VmOpLockStateRetry,
-                 VmOpWaitInterval, ExecuteInSequence, VmJobCheckInterval, VmJobTimeout, VmJobStateReportInterval, VmConfigDriveLabel, HaVmRestartHostUp};
++                VmOpWaitInterval, ExecuteInSequence, VmJobCheckInterval, VmJobTimeout, VmJobStateReportInterval, VmConfigDriveLabel, VmConfigDriveOnPrimaryPool, HaVmRestartHostUp};
 +    }
 +
 +    public List<StoragePoolAllocator> getStoragePoolAllocators() {
 +        return _storagePoolAllocators;
 +    }
 +
 +    @Inject
 +    public void setStoragePoolAllocators(final List<StoragePoolAllocator> storagePoolAllocators) {
 +        _storagePoolAllocators = storagePoolAllocators;
 +    }
 +
 +    //
 +    // PowerState report handling for out-of-band changes and handling of left-over transitional VM states
 +    //
 +
 +    @MessageHandler(topic = Topics.VM_POWER_STATE)
 +    protected void HandlePowerStateReport(final String subject, final String senderAddress, final Object args) {
 +        assert args != null;
 +        final Long vmId = (Long)args;
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vmId);
 +        if (pendingWorkJobs.isEmpty() && !_haMgr.hasPendingHaWork(vmId)) {
 +            // there is no pending operation job
 +            final VMInstanceVO vm = _vmDao.findById(vmId);
 +            if (vm != null) {
 +                switch (vm.getPowerState()) {
 +                case PowerOn:
 +                    handlePowerOnReportWithNoPendingJobsOnVM(vm);
 +                    break;
 +
 +                case PowerOff:
 +                case PowerReportMissing:
 +                    handlePowerOffReportWithNoPendingJobsOnVM(vm);
 +                    break;
 +
 +                    // PowerUnknown shouldn't be reported, it is a derived
 +                    // VM power state from host state (host un-reachable)
 +                case PowerUnknown:
 +                default:
 +                    assert false;
 +                    break;
 +                }
 +            } else {
 +                s_logger.warn("VM " + vmId + " no longer exists when processing VM state report");
 +            }
 +        } else {
 +            s_logger.info("There is pending job or HA tasks working on the VM. vm id: " + vmId + ", postpone power-change report by resetting power-change counters");
 +
 +            // reset VM power state tracking so that we won't lose the signal when the VM
 +            // has transitioned to a new power state
 +            _vmDao.resetVmPowerStateTracking(vmId);
 +        }
 +    }
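 +
 +    // In short: with no pending work jobs or HA tasks, a PowerOn report syncs the VM towards
 +    // Running and a PowerOff/PowerReportMissing report syncs it towards Stopped (or schedules
 +    // an HA restart); otherwise power-state tracking is reset so the report is effectively
 +    // re-delivered once the pending work drains.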
 +
 +    private void handlePowerOnReportWithNoPendingJobsOnVM(final VMInstanceVO vm) {
 +        //
 +        //    1) handle left-over transitional VM states
 +        //    2) handle out of band VM live migration
 +        //    3) handle out of sync stationary states, marking VM from Stopped to Running with
 +        //       alert messages
 +        //
 +        switch (vm.getState()) {
 +        case Starting:
 +            s_logger.info("VM " + vm.getInstanceName() + " is at " + vm.getState() + " and we received a power-on report while there is no pending jobs on it");
 +
 +            try {
 +                stateTransitTo(vm, VirtualMachine.Event.FollowAgentPowerOnReport, vm.getPowerHostId());
 +            } catch (final NoTransitionException e) {
 +                s_logger.warn("Unexpected VM state transition exception, race-condition?", e);
 +            }
 +
 +            s_logger.info("VM " + vm.getInstanceName() + " is sync-ed to at Running state according to power-on report from hypervisor");
 +
 +            // we need to alert admin or user about this risky state transition
 +            _alertMgr.sendAlert(AlertManager.AlertType.ALERT_TYPE_SYNC, vm.getDataCenterId(), vm.getPodIdToDeployIn(),
 +                    VM_SYNC_ALERT_SUBJECT, "VM " + vm.getHostName() + "(" + vm.getInstanceName()
 +                    + ") state is sync-ed (Starting -> Running) from out-of-context transition. VM network environment may need to be reset");
 +            break;
 +
 +        case Running:
 +            try {
 +                if (vm.getHostId() != null && vm.getHostId().longValue() != vm.getPowerHostId().longValue()) {
 +                    s_logger.info("Detected out of band VM migration from host " + vm.getHostId() + " to host " + vm.getPowerHostId());
 +                }
 +                stateTransitTo(vm, VirtualMachine.Event.FollowAgentPowerOnReport, vm.getPowerHostId());
 +            } catch (final NoTransitionException e) {
 +                s_logger.warn("Unexpected VM state transition exception, race-condition?", e);
 +            }
 +
 +            break;
 +
 +        case Stopping:
 +        case Stopped:
 +            s_logger.info("VM " + vm.getInstanceName() + " is at " + vm.getState() + " and we received a power-on report while there is no pending jobs on it");
 +
 +            try {
 +                stateTransitTo(vm, VirtualMachine.Event.FollowAgentPowerOnReport, vm.getPowerHostId());
 +            } catch (final NoTransitionException e) {
 +                s_logger.warn("Unexpected VM state transition exception, race-condition?", e);
 +            }
 +            _alertMgr.sendAlert(AlertManager.AlertType.ALERT_TYPE_SYNC, vm.getDataCenterId(), vm.getPodIdToDeployIn(),
 +                    VM_SYNC_ALERT_SUBJECT, "VM " + vm.getHostName() + "(" + vm.getInstanceName() + ") state is sync-ed (" + vm.getState()
 +                    + " -> Running) from out-of-context transition. VM network environment may need to be reset");
 +
 +            s_logger.info("VM " + vm.getInstanceName() + " is sync-ed to at Running state according to power-on report from hypervisor");
 +            break;
 +
 +        case Destroyed:
 +        case Expunging:
 +            s_logger.info("Receive power on report when VM is in destroyed or expunging state. vm: "
 +                    + vm.getId() + ", state: " + vm.getState());
 +            break;
 +
 +        case Migrating:
 +            s_logger.info("VM " + vm.getInstanceName() + " is at " + vm.getState() + " and we received a power-on report while there is no pending jobs on it");
 +            try {
 +                stateTransitTo(vm, VirtualMachine.Event.FollowAgentPowerOnReport, vm.getPowerHostId());
 +            } catch (final NoTransitionException e) {
 +                s_logger.warn("Unexpected VM state transition exception, race-condition?", e);
 +            }
 +            s_logger.info("VM " + vm.getInstanceName() + " is sync-ed to at Running state according to power-on report from hypervisor");
 +            break;
 +
 +        case Error:
 +        default:
 +            s_logger.info("Receive power on report when VM is in error or unexpected state. vm: "
 +                    + vm.getId() + ", state: " + vm.getState());
 +            break;
 +        }
 +    }
 +
 +    private void handlePowerOffReportWithNoPendingJobsOnVM(final VMInstanceVO vm) {
 +
 +        //    1) handle left-over transitional VM states
 +        //    2) handle out of sync stationary states, schedule force-stop to release resources
 +        //
 +        switch (vm.getState()) {
 +        case Starting:
 +        case Stopping:
 +        case Running:
 +        case Stopped:
 +        case Migrating:
 +            s_logger.info("VM " + vm.getInstanceName() + " is at " + vm.getState() + " and we received a power-off report while there is no pending jobs on it");
 +            if(vm.isHaEnabled() && vm.getState() == State.Running && HaVmRestartHostUp.value() && vm.getHypervisorType() != HypervisorType.VMware && vm.getHypervisorType() != HypervisorType.Hyperv) {
 +                s_logger.info("Detected out-of-band stop of a HA enabled VM " + vm.getInstanceName() + ", will schedule restart");
 +                if(!_haMgr.hasPendingHaWork(vm.getId())) {
 +                    _haMgr.scheduleRestart(vm, true);
 +                } else {
 +                    s_logger.info("VM " + vm.getInstanceName() + " already has an pending HA task working on it");
 +                }
 +                return;
 +            }
 +
 +            final VirtualMachineGuru vmGuru = getVmGuru(vm);
 +            final VirtualMachineProfile profile = new VirtualMachineProfileImpl(vm);
 +            if (!sendStop(vmGuru, profile, true, true)) {
 +                // In case StopCommand fails, don't proceed further
 +                return;
 +            }
 +
 +            try {
 +                stateTransitTo(vm, VirtualMachine.Event.FollowAgentPowerOffReport, null);
 +            } catch (final NoTransitionException e) {
 +                s_logger.warn("Unexpected VM state transition exception, race-condition?", e);
 +            }
 +
 +            _alertMgr.sendAlert(AlertManager.AlertType.ALERT_TYPE_SYNC, vm.getDataCenterId(), vm.getPodIdToDeployIn(),
 +                    VM_SYNC_ALERT_SUBJECT, "VM " + vm.getHostName() + "(" + vm.getInstanceName() + ") state is sync-ed (" + vm.getState()
 +                    + " -> Stopped) from out-of-context transition.");
 +
 +            s_logger.info("VM " + vm.getInstanceName() + " is sync-ed to at Stopped state according to power-off report from hypervisor");
 +
 +            break;
 +
 +        case Destroyed:
 +        case Expunging:
 +            break;
 +
 +        case Error:
 +        default:
 +            break;
 +        }
 +    }
 +
 +    private void scanStalledVMInTransitionStateOnUpHost(final long hostId) {
 +        //
 +        // Check VM that is stuck in Starting, Stopping, Migrating states, we won't check
 +        // VMs in expunging state (this need to be handled specially)
 +        //
 +        // checking condition
 +        //    1) no pending VmWork job
 +        //    2) on hostId host and host is UP
 +        //
 +        // When the host is UP, sooner or later we will get a report from it about the VM;
 +        // however, if the VM is missing from the host report (which may happen with out-of-band changes
 +        // or by design on XS/KVM), the VM may not get a chance to run the state-sync logic
 +        //
 +        // Therefore, we scan those VMs on UP hosts based on the last update timestamp; if the host is UP
 +        // and a VM has stalled on its status update, we consider it to be powered off
 +        // (which is relatively safe to do)
 +
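 +        // The stall threshold is 1.5x the report interval (x + x/2, computed via the shift);
 +        // e.g. a 60-second report interval yields a 90-second threshold.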
 +        final long stallThresholdInMs = VmJobStateReportInterval.value() + (VmJobStateReportInterval.value() >> 1);
 +        final Date cutTime = new Date(DateUtil.currentGMTTime().getTime() - stallThresholdInMs);
 +        final List<Long> mostlikelyStoppedVMs = listStalledVMInTransitionStateOnUpHost(hostId, cutTime);
 +        for (final Long vmId : mostlikelyStoppedVMs) {
 +            final VMInstanceVO vm = _vmDao.findById(vmId);
 +            assert vm != null;
 +            handlePowerOffReportWithNoPendingJobsOnVM(vm);
 +        }
 +
 +        final List<Long> vmsWithRecentReport = listVMInTransitionStateWithRecentReportOnUpHost(hostId, cutTime);
 +        for (final Long vmId : vmsWithRecentReport) {
 +            final VMInstanceVO vm = _vmDao.findById(vmId);
 +            assert vm != null;
 +            if (vm.getPowerState() == PowerState.PowerOn) {
 +                handlePowerOnReportWithNoPendingJobsOnVM(vm);
 +            } else {
 +                handlePowerOffReportWithNoPendingJobsOnVM(vm);
 +            }
 +        }
 +    }
 +
 +    private void scanStalledVMInTransitionStateOnDisconnectedHosts() {
 +        final Date cutTime = new Date(DateUtil.currentGMTTime().getTime() - VmOpWaitInterval.value() * 1000);
 +        final List<Long> stuckAndUncontrollableVMs = listStalledVMInTransitionStateOnDisconnectedHosts(cutTime);
 +        for (final Long vmId : stuckAndUncontrollableVMs) {
 +            final VMInstanceVO vm = _vmDao.findById(vmId);
 +
 +            // We now only alert administrator about this situation
 +            _alertMgr.sendAlert(AlertManager.AlertType.ALERT_TYPE_SYNC, vm.getDataCenterId(), vm.getPodIdToDeployIn(),
 +                    VM_SYNC_ALERT_SUBJECT, "VM " + vm.getHostName() + "(" + vm.getInstanceName() + ") is stuck in " + vm.getState()
 +                    + " state and its host is unreachable for too long");
 +        }
 +    }
 +
 +    // VMs that in transitional state without recent power state report
 +    private List<Long> listStalledVMInTransitionStateOnUpHost(final long hostId, final Date cutTime) {
 +        final String sql = "SELECT i.* FROM vm_instance as i, host as h WHERE h.status = 'UP' " +
 +                "AND h.id = ? AND i.power_state_update_time < ? AND i.host_id = h.id " +
 +                "AND (i.state ='Starting' OR i.state='Stopping' OR i.state='Migrating') " +
 +                "AND i.id NOT IN (SELECT w.vm_instance_id FROM vm_work_job AS w JOIN async_job AS j ON w.id = j.id WHERE j.job_status = ?)" +
 +                "AND i.removed IS NULL";
 +
 +        final List<Long> l = new ArrayList<Long>();
 +        TransactionLegacy txn = null;
 +        try {
 +            txn = TransactionLegacy.open(TransactionLegacy.CLOUD_DB);
 +
 +            PreparedStatement pstmt = null;
 +            try {
 +                pstmt = txn.prepareAutoCloseStatement(sql);
 +
 +                pstmt.setLong(1, hostId);
 +                pstmt.setString(2, DateUtil.getDateDisplayString(TimeZone.getTimeZone("GMT"), cutTime));
 +                pstmt.setInt(3, JobInfo.Status.IN_PROGRESS.ordinal());
 +                final ResultSet rs = pstmt.executeQuery();
 +                while (rs.next()) {
 +                    l.add(rs.getLong(1));
 +                }
 +            } catch (final SQLException e) {
 +                s_logger.warn("Error while listing stalled VMs in transition state on up host " + hostId, e);
 +            } catch (final Throwable e) {
 +                s_logger.warn("Unexpected error while listing stalled VMs in transition state on up host " + hostId, e);
 +            }
 +
 +        } finally {
 +            if (txn != null) {
 +                txn.close();
 +            }
 +        }
 +        return l;
 +    }
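 +
 +    // Only the first column (the vm id) of this SELECT i.* result is consumed by the loop
 +    // above; the same holds for the two similar queries below.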
 +
 +    // VMs that in transitional state and recently have power state update
 +    private List<Long> listVMInTransitionStateWithRecentReportOnUpHost(final long hostId, final Date cutTime) {
 +        final String sql = "SELECT i.* FROM vm_instance as i, host as h WHERE h.status = 'UP' " +
 +                "AND h.id = ? AND i.power_state_update_time > ? AND i.host_id = h.id " +
 +                "AND (i.state ='Starting' OR i.state='Stopping' OR i.state='Migrating') " +
 +                "AND i.id NOT IN (SELECT w.vm_instance_id FROM vm_work_job AS w JOIN async_job AS j ON w.id = j.id WHERE j.job_status = ?)" +
 +                "AND i.removed IS NULL";
 +
 +        final List<Long> l = new ArrayList<Long>();
 +        TransactionLegacy txn = null;
 +        try {
 +            txn = TransactionLegacy.open(TransactionLegacy.CLOUD_DB);
 +            PreparedStatement pstmt = null;
 +            try {
 +                pstmt = txn.prepareAutoCloseStatement(sql);
 +
 +                pstmt.setLong(1, hostId);
 +                pstmt.setString(2, DateUtil.getDateDisplayString(TimeZone.getTimeZone("GMT"), cutTime));
 +                pstmt.setInt(3, JobInfo.Status.IN_PROGRESS.ordinal());
 +                final ResultSet rs = pstmt.executeQuery();
 +                while (rs.next()) {
 +                    l.add(rs.getLong(1));
 +                }
 +            } catch (final SQLException e) {
 +                s_logger.warn("Unable to list VMs in transition state with a recent report on up host " + hostId, e);
 +            } catch (final Throwable e) {
 +                s_logger.warn("Unexpected exception while listing VMs in transition state on up host " + hostId, e);
 +            }
 +            return l;
 +        } finally {
 +            if (txn != null) {
 +                txn.close();
 +            }
 +        }
 +    }
 +
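 +    // VMs in a transitional state on hosts that are not up, with no recent power state report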
 +    private List<Long> listStalledVMInTransitionStateOnDisconnectedHosts(final Date cutTime) {
 +        final String sql = "SELECT i.* FROM vm_instance as i, host as h WHERE h.status != 'UP' " +
 +                "AND i.power_state_update_time < ? AND i.host_id = h.id " +
 +                "AND (i.state ='Starting' OR i.state='Stopping' OR i.state='Migrating') " +
 +                "AND i.id NOT IN (SELECT w.vm_instance_id FROM vm_work_job AS w JOIN async_job AS j ON w.id = j.id WHERE j.job_status = ?) " +
 +                "AND i.removed IS NULL";
 +
 +        final List<Long> l = new ArrayList<Long>();
 +        TransactionLegacy txn = null;
 +        try {
 +            txn = TransactionLegacy.open(TransactionLegacy.CLOUD_DB);
 +            PreparedStatement pstmt = null;
 +            try {
 +                pstmt = txn.prepareAutoCloseStatement(sql);
 +
 +                pstmt.setString(1, DateUtil.getDateDisplayString(TimeZone.getTimeZone("GMT"), cutTime));
 +                pstmt.setInt(2, JobInfo.Status.IN_PROGRESS.ordinal());
 +                final ResultSet rs = pstmt.executeQuery();
 +                while (rs.next()) {
 +                    l.add(rs.getLong(1));
 +                }
 +            } catch (final SQLException e) {
 +                s_logger.warn("Unable to list stalled VMs in transition state on disconnected hosts", e);
 +            } catch (final Throwable e) {
 +                s_logger.warn("Unexpected exception while listing stalled VMs on disconnected hosts", e);
 +            }
 +            return l;
 +        } finally {
 +            if (txn != null) {
 +                txn.close();
 +            }
 +        }
 +    }
 +
 +    //
 +    // VM operations based on the new sync model
 +    //
 +
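 +    // Outcome that completes once the backing work job is gone or no longer in progress; it subscribes to
 +    // both the VM power state and job state topics so the condition is re-checked on either kind of event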
 +    public class VmStateSyncOutcome extends OutcomeImpl<VirtualMachine> {
 +        private long _vmId;
 +
 +        public VmStateSyncOutcome(final AsyncJob job, final PowerState desiredPowerState, final long vmId, final Long srcHostIdForMigration) {
 +            super(VirtualMachine.class, job, VmJobCheckInterval.value(), new Predicate() {
 +                @Override
 +                public boolean checkCondition() {
 +                    final AsyncJobVO jobVo = _entityMgr.findById(AsyncJobVO.class, job.getId());
 +                    // Done when the job record is gone or no longer in progress
 +                    return jobVo == null || jobVo.getStatus() != JobInfo.Status.IN_PROGRESS;
 +                }
 +            }, Topics.VM_POWER_STATE, AsyncJob.Topics.JOB_STATE);
 +            _vmId = vmId;
 +        }
 +
 +        @Override
 +        protected VirtualMachine retrieve() {
 +            return _vmDao.findById(_vmId);
 +        }
 +    }
 +
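 +    // Outcome that completes on work job completion only; it subscribes to the job state topic alone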
 +    public class VmJobVirtualMachineOutcome extends OutcomeImpl<VirtualMachine> {
 +        private long _vmId;
 +
 +        public VmJobVirtualMachineOutcome(final AsyncJob job, final long vmId) {
 +            super(VirtualMachine.class, job, VmJobCheckInterval.value(), new Predicate() {
 +                @Override
 +                public boolean checkCondition() {
 +                    final AsyncJobVO jobVo = _entityMgr.findById(AsyncJobVO.class, job.getId());
 +                    // Done when the job record is gone or no longer in progress
 +                    return jobVo == null || jobVo.getStatus() != JobInfo.Status.IN_PROGRESS;
 +                }
 +            }, AsyncJob.Topics.JOB_STATE);
 +            _vmId = vmId;
 +        }
 +
 +        @Override
 +        protected VirtualMachine retrieve() {
 +            return _vmDao.findById(_vmId);
 +        }
 +    }
 +
 +    //
 +    // TODO: build a common pattern to reduce code duplication in the following methods;
 +    // there was no time for this in the current iteration
 +    //
 +    public Outcome<VirtualMachine> startVmThroughJobQueue(final String vmUuid,
 +            final Map<VirtualMachineProfile.Param, Object> params,
 +            final DeploymentPlan planToDeploy, final DeploymentPlanner planner) {
 +
 +        final CallContext context = CallContext.current();
 +        final User callingUser = context.getCallingUser();
 +        final Account callingAccount = context.getCallingAccount();
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        VmWorkJobVO workJob = null;
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(VirtualMachine.Type.Instance,
 +                vm.getId(), VmWorkStart.class.getName());
 +
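 +        // Reuse the single pending start work job for this VM if one exists; otherwise create and queue a new one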
 +        if (pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkStart.class.getName());
 +
 +            workJob.setAccountId(callingAccount.getId());
 +            workJob.setUserId(callingUser.getId());
 +            workJob.setStep(VmWorkJobVO.Step.Starting);
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkStart workInfo = new VmWorkStart(callingUser.getId(), callingAccount.getId(), vm.getId(), VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER);
 +            workInfo.setPlan(planToDeploy);
 +            workInfo.setParams(params);
 +            if (planner != null) {
 +                workInfo.setDeploymentPlanner(planner.getName());
 +            }
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +
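 +        // Join the caller's execution context to the work job so its completion can be awaited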
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmStateSyncOutcome(workJob,
 +                VirtualMachine.PowerState.PowerOn, vm.getId(), null);
 +    }
 +
 +    public Outcome<VirtualMachine> stopVmThroughJobQueue(final String vmUuid, final boolean cleanup) {
 +        final CallContext context = CallContext.current();
 +        final Account account = context.getCallingAccount();
 +        final User user = context.getCallingUser();
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                vm.getType(), vm.getId(),
 +                VmWorkStop.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkStop.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setStep(VmWorkJobVO.Step.Prepare);
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkStop workInfo = new VmWorkStop(user.getId(), account.getId(), vm.getId(), VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, cleanup);
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmStateSyncOutcome(workJob,
 +                VirtualMachine.PowerState.PowerOff, vm.getId(), null);
 +    }
 +
 +    public Outcome<VirtualMachine> rebootVmThroughJobQueue(final String vmUuid,
 +            final Map<VirtualMachineProfile.Param, Object> params) {
 +
 +        final CallContext context = CallContext.current();
 +        final Account account = context.getCallingAccount();
 +        final User user = context.getCallingUser();
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vm.getId(),
 +                VmWorkReboot.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkReboot.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setStep(VmWorkJobVO.Step.Prepare);
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkReboot workInfo = new VmWorkReboot(user.getId(), account.getId(), vm.getId(), VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, params);
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
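 +        // A reboot does not change the target power state, so only work job completion is awaited here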
 +        return new VmJobVirtualMachineOutcome(workJob,
 +                vm.getId());
 +    }
 +
 +    public Outcome<VirtualMachine> migrateVmThroughJobQueue(final String vmUuid, final long srcHostId, final DeployDestination dest) {
 +        final CallContext context = CallContext.current();
 +        final User user = context.getCallingUser();
 +        final Account account = context.getCallingAccount();
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vm.getId(),
 +                VmWorkMigrate.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkMigrate.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkMigrate workInfo = new VmWorkMigrate(user.getId(), account.getId(), vm.getId(), VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, srcHostId, dest);
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmStateSyncOutcome(workJob,
 +                VirtualMachine.PowerState.PowerOn, vm.getId(), vm.getPowerHostId());
 +    }
 +
 +    public Outcome<VirtualMachine> migrateVmAwayThroughJobQueue(final String vmUuid, final long srcHostId) {
 +        final CallContext context = CallContext.current();
 +        final User user = context.getCallingUser();
 +        final Account account = context.getCallingAccount();
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vm.getId(),
 +                VmWorkMigrateAway.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkMigrateAway.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkMigrateAway workInfo = new VmWorkMigrateAway(user.getId(), account.getId(), vm.getId(), VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, srcHostId);
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmStateSyncOutcome(workJob, VirtualMachine.PowerState.PowerOn, vm.getId(), vm.getPowerHostId());
 +    }
 +
 +    public Outcome<VirtualMachine> migrateVmWithStorageThroughJobQueue(
 +            final String vmUuid, final long srcHostId, final long destHostId,
 +            final Map<Long, Long> volumeToPool) {
 +
 +        final CallContext context = CallContext.current();
 +        final User user = context.getCallingUser();
 +        final Account account = context.getCallingAccount();
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vm.getId(),
 +                VmWorkMigrateWithStorage.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkMigrateWithStorage.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkMigrateWithStorage workInfo = new VmWorkMigrateWithStorage(user.getId(), account.getId(), vm.getId(),
 +                    VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, srcHostId, destHostId, volumeToPool);
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmStateSyncOutcome(workJob,
 +                VirtualMachine.PowerState.PowerOn, vm.getId(), destHostId);
 +    }
 +
 +    public Outcome<VirtualMachine> migrateVmForScaleThroughJobQueue(
 +            final String vmUuid, final long srcHostId, final DeployDestination dest, final Long newSvcOfferingId) {
 +
 +        final CallContext context = CallContext.current();
 +        final User user = context.getCallingUser();
 +        final Account account = context.getCallingAccount();
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vm.getId(),
 +                VmWorkMigrateForScale.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkMigrateForScale.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkMigrateForScale workInfo = new VmWorkMigrateForScale(user.getId(), account.getId(), vm.getId(),
 +                    VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, srcHostId, dest, newSvcOfferingId);
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmJobVirtualMachineOutcome(workJob, vm.getId());
 +    }
 +
 +    public Outcome<VirtualMachine> migrateVmStorageThroughJobQueue(
 +            final String vmUuid, final StoragePool destPool) {
 +
 +        final CallContext context = CallContext.current();
 +        final User user = context.getCallingUser();
 +        final Account account = context.getCallingAccount();
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vm.getId(),
 +                VmWorkStorageMigration.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkStorageMigration.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkStorageMigration workInfo = new VmWorkStorageMigration(user.getId(), account.getId(), vm.getId(),
 +                    VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, destPool.getId());
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmJobVirtualMachineOutcome(workJob, vm.getId());
 +    }
 +
 +    public Outcome<VirtualMachine> addVmToNetworkThroughJobQueue(
 +            final VirtualMachine vm, final Network network, final NicProfile requested) {
 +
 +        final CallContext context = CallContext.current();
 +        final User user = context.getCallingUser();
 +        final Account account = context.getCallingAccount();
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vm.getId(),
 +                VmWorkAddVmToNetwork.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkAddVmToNetwork.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkAddVmToNetwork workInfo = new VmWorkAddVmToNetwork(user.getId(), account.getId(), vm.getId(),
 +                    VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, network.getId(), requested);
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmJobVirtualMachineOutcome(workJob, vm.getId());
 +    }
 +
 +    public Outcome<VirtualMachine> removeNicFromVmThroughJobQueue(
 +            final VirtualMachine vm, final Nic nic) {
 +
 +        final CallContext context = CallContext.current();
 +        final User user = context.getCallingUser();
 +        final Account account = context.getCallingAccount();
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vm.getId(),
 +                VmWorkRemoveNicFromVm.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkRemoveNicFromVm.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkRemoveNicFromVm workInfo = new VmWorkRemoveNicFromVm(user.getId(), account.getId(), vm.getId(),
 +                    VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, nic.getId());
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmJobVirtualMachineOutcome(workJob, vm.getId());
 +    }
 +
 +    public Outcome<VirtualMachine> removeVmFromNetworkThroughJobQueue(
 +            final VirtualMachine vm, final Network network, final URI broadcastUri) {
 +
 +        final CallContext context = CallContext.current();
 +        final User user = context.getCallingUser();
 +        final Account account = context.getCallingAccount();
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vm.getId(),
 +                VmWorkRemoveVmFromNetwork.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkRemoveVmFromNetwork.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkRemoveVmFromNetwork workInfo = new VmWorkRemoveVmFromNetwork(user.getId(), account.getId(), vm.getId(),
 +                    VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, network, broadcastUri);
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmJobVirtualMachineOutcome(workJob, vm.getId());
 +    }
 +
 +    public Outcome<VirtualMachine> reconfigureVmThroughJobQueue(
 +            final String vmUuid, final ServiceOffering newServiceOffering, final boolean reconfiguringOnExistingHost) {
 +
 +        final CallContext context = CallContext.current();
 +        final User user = context.getCallingUser();
 +        final Account account = context.getCallingAccount();
 +
 +        final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 +
 +        final List<VmWorkJobVO> pendingWorkJobs = _workJobDao.listPendingWorkJobs(
 +                VirtualMachine.Type.Instance, vm.getId(),
 +                VmWorkReconfigure.class.getName());
 +
 +        VmWorkJobVO workJob = null;
 +        if (pendingWorkJobs != null && pendingWorkJobs.size() > 0) {
 +            assert pendingWorkJobs.size() == 1;
 +            workJob = pendingWorkJobs.get(0);
 +        } else {
 +
 +            workJob = new VmWorkJobVO(context.getContextId());
 +
 +            workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
 +            workJob.setCmd(VmWorkReconfigure.class.getName());
 +
 +            workJob.setAccountId(account.getId());
 +            workJob.setUserId(user.getId());
 +            workJob.setVmType(VirtualMachine.Type.Instance);
 +            workJob.setVmInstanceId(vm.getId());
 +            workJob.setRelated(AsyncJobExecutionContext.getOriginJobId());
 +
 +            // save work context info (there are some duplications)
 +            final VmWorkReconfigure workInfo = new VmWorkReconfigure(user.getId(), account.getId(), vm.getId(),
 +                    VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER, newServiceOffering.getId(), reconfiguringOnExistingHost);
 +            workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 +
 +            _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
 +        }
 +        AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(workJob.getId());
 +
 +        return new VmJobVirtualMachineOutcome(workJob, vm.getId());
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateStart(final VmWorkStart work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +
 +        try {
 +            orchestrateStart(vm.getUuid(), work.getParams(), work.getPlan(), _dpMgr.getDeploymentPlannerByName(work.getDeploymentPlanner()));
 +        } catch (final CloudRuntimeException e) {
 +            s_logger.info("Caught CloudRuntimeException, returning job failed", e);
 +            final CloudRuntimeException ex = new CloudRuntimeException("Unable to start VM instance");
 +            return new Pair<JobInfo.Status, String>(JobInfo.Status.FAILED, JobSerializerHelper.toObjectSerializedString(ex));
 +        }
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED, null);
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateStop(final VmWorkStop work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +
 +        orchestrateStop(vm.getUuid(), work.isCleanup());
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED, null);
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateMigrate(final VmWorkMigrate work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +
 +        orchestrateMigrate(vm.getUuid(), work.getSrcHostId(), work.getDeployDestination());
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED, null);
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateMigrateAway(final VmWorkMigrateAway work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +
 +        try {
 +            orchestrateMigrateAway(vm.getUuid(), work.getSrcHostId(), null);
 +        } catch (final InsufficientServerCapacityException e) {
 +            s_logger.warn("Failed to deploy vm " + vm.getId() + " with the original planner, retrying with the HA planner", e);
 +            orchestrateMigrateAway(vm.getUuid(), work.getSrcHostId(), _haMgr.getHAPlanner());
 +        }
 +
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED, null);
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateMigrateWithStorage(final VmWorkMigrateWithStorage work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +        orchestrateMigrateWithStorage(vm.getUuid(),
 +                work.getSrcHostId(),
 +                work.getDestHostId(),
 +                work.getVolumeToPool());
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED, null);
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateMigrateForScale(final VmWorkMigrateForScale work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +        orchestrateMigrateForScale(vm.getUuid(),
 +                work.getSrcHostId(),
 +                work.getDeployDestination(),
 +                work.getNewServiceOfferringId());
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED, null);
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateReboot(final VmWorkReboot work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +        orchestrateReboot(vm.getUuid(), work.getParams());
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED, null);
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateAddVmToNetwork(final VmWorkAddVmToNetwork work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +
 +        final Network network = _networkDao.findById(work.getNetworkId());
 +        final NicProfile nic = orchestrateAddVmToNetwork(vm, network,
 +                work.getRequestedNicProfile());
 +
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED, _jobMgr.marshallResultObject(nic));
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateRemoveNicFromVm(final VmWorkRemoveNicFromVm work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +        final NicVO nic = _entityMgr.findById(NicVO.class, work.getNicId());
 +        final boolean result = orchestrateRemoveNicFromVm(vm, nic);
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED,
 +                _jobMgr.marshallResultObject(result));
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateRemoveVmFromNetwork(final VmWorkRemoveVmFromNetwork work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +        final boolean result = orchestrateRemoveVmFromNetwork(vm,
 +                work.getNetwork(), work.getBroadcastUri());
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED,
 +                _jobMgr.marshallResultObject(result));
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateReconfigure(final VmWorkReconfigure work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +
 +        final ServiceOffering newServiceOffering = _offeringDao.findById(vm.getId(), work.getNewServiceOfferingId());
 +
 +        reConfigureVm(vm.getUuid(), newServiceOffering,
 +                work.isSameHost());
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED, null);
 +    }
 +
 +    @ReflectionUse
 +    private Pair<JobInfo.Status, String> orchestrateStorageMigration(final VmWorkStorageMigration work) throws Exception {
 +        final VMInstanceVO vm = _entityMgr.findById(VMInstanceVO.class, work.getVmId());
 +        if (vm == null) {
 +            s_logger.info("Unable to find vm " + work.getVmId());
 +            throw new CloudRuntimeException("Unable to find VM id=" + work.getVmId());
 +        }
 +        final StoragePool pool = (PrimaryDataStoreInfo)dataStoreMgr.getPrimaryDataStore(work.getDestStoragePoolId());
 +        orchestrateStorageMigration(vm.getUuid(), pool);
 +
 +        return new Pair<JobInfo.Status, String>(JobInfo.Status.SUCCEEDED, null);
 +    }
 +
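 +    // VmWork jobs are handed to the handler proxy, which dispatches to the matching @ReflectionUse orchestrate* method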
 +    @Override
 +    public Pair<JobInfo.Status, String> handleVmWorkJob(final VmWork work) throws Exception {
 +        return _jobHandlerProxy.handleVmWorkJob(work);
 +    }
 +
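 +    // Creates and persists a placeholder work job (no dispatcher payload) tied to this management server and VM instance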
 +    private VmWorkJobVO createPlaceHolderWork(final long instanceId) {
 +        final VmWorkJobVO workJob = new VmWorkJobVO("");
 +
 +        workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_PLACEHOLDER);
 +        workJob.setCmd("");
 +        workJob.setCmdInfo("");
 +
 +        workJob.setAccountId(0);
 +        workJob.setUserId(0);
 +        workJob.setStep(VmWorkJobVO.Step.Starting);
 +        workJob.setVmType(VirtualMachine.Type.Instance);
 +        workJob.setVmInstanceId(instanceId);
 +        workJob.setInitMsid(ManagementServerNode.getManagementServerId());
 +
 +        _workJobDao.persist(workJob);
 +
 +        return workJob;
 +    }
 +}
diff --cc engine/storage/configdrive/pom.xml
index 0000000,dc3d118..83c882b
mode 000000,100644..100644
--- a/engine/storage/configdrive/pom.xml
+++ b/engine/storage/configdrive/pom.xml
@@@ -1,0 -1,43 +1,43 @@@
+ <!--
+   Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements. See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership. The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License. You may obtain a copy of the License at
+ 
+   http://www.apache.org/licenses/LICENSE-2.0
+ 
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied. See the License for the
+   specific language governing permissions and limitations
+   under the License.
+ -->
+ <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+   <modelVersion>4.0.0</modelVersion>
+   <artifactId>cloud-engine-storage-configdrive</artifactId>
+   <name>Apache CloudStack Framework - Storage Config Drive Component</name>
+   <parent>
+     <groupId>org.apache.cloudstack</groupId>
+     <artifactId>cloud-engine</artifactId>
 -    <version>4.11.1.0-SNAPSHOT</version>
++    <version>4.12.0.0-SNAPSHOT</version>
+     <relativePath>../../pom.xml</relativePath>
+   </parent>
+ 
+   <dependencies>
+     <dependency>
+       <groupId>org.apache.cloudstack</groupId>
+       <artifactId>cloud-api</artifactId>
+       <version>${project.version}</version>
+     </dependency>
+     <dependency>
+       <groupId>org.apache.cloudstack</groupId>
+       <artifactId>cloud-core</artifactId>
+       <version>${project.version}</version>
+     </dependency>
+   </dependencies>
+ 
+ </project>
diff --cc engine/storage/configdrive/src/main/java/org/apache/cloudstack/storage/configdrive/ConfigDrive.java
index 0000000,ec46199..ec46199
mode 000000,100644..100644
--- a/engine/storage/configdrive/src/main/java/org/apache/cloudstack/storage/configdrive/ConfigDrive.java
+++ b/engine/storage/configdrive/src/main/java/org/apache/cloudstack/storage/configdrive/ConfigDrive.java
diff --cc engine/storage/configdrive/src/main/java/org/apache/cloudstack/storage/configdrive/ConfigDriveBuilder.java
index 0000000,d847aa1..d847aa1
mode 000000,100644..100644
--- a/engine/storage/configdrive/src/main/java/org/apache/cloudstack/storage/configdrive/ConfigDriveBuilder.java
+++ b/engine/storage/configdrive/src/main/java/org/apache/cloudstack/storage/configdrive/ConfigDriveBuilder.java
diff --cc engine/storage/configdrive/src/test/java/org/apache/cloudstack/storage/configdrive/ConfigDriveBuilderTest.java
index 0000000,50a4384..50a4384
mode 000000,100644..100644
--- a/engine/storage/configdrive/src/test/java/org/apache/cloudstack/storage/configdrive/ConfigDriveBuilderTest.java
+++ b/engine/storage/configdrive/src/test/java/org/apache/cloudstack/storage/configdrive/ConfigDriveBuilderTest.java
diff --cc plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
index f26d8de,0000000..303f748
mode 100644,000000..100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
@@@ -1,3813 -1,0 +1,3812 @@@
 +// Licensed to the Apache Software Foundation (ASF) under one
 +// or more contributor license agreements.  See the NOTICE file
 +// distributed with this work for additional information
 +// regarding copyright ownership.  The ASF licenses this file
 +// to you under the Apache License, Version 2.0 (the
 +// "License"); you may not use this file except in compliance
 +// with the License.  You may obtain a copy of the License at
 +//
 +//   http://www.apache.org/licenses/LICENSE-2.0
 +//
 +// Unless required by applicable law or agreed to in writing,
 +// software distributed under the License is distributed on an
 +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 +// KIND, either express or implied.  See the License for the
 +// specific language governing permissions and limitations
 +// under the License.
 +package com.cloud.hypervisor.kvm.resource;
 +
 +import java.io.BufferedReader;
 +import java.io.File;
 +import java.io.FileNotFoundException;
 +import java.io.FileReader;
 +import java.io.IOException;
 +import java.io.Reader;
 +import java.io.StringReader;
 +import java.net.InetAddress;
 +import java.net.URI;
 +import java.net.URISyntaxException;
 +import java.util.ArrayList;
 +import java.util.Arrays;
 +import java.util.Calendar;
 +import java.util.Collections;
 +import java.util.Comparator;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Properties;
 +import java.util.Set;
 +import java.util.UUID;
 +import java.util.concurrent.ConcurrentHashMap;
 +import java.util.regex.Matcher;
 +import java.util.regex.Pattern;
 +
 +import javax.naming.ConfigurationException;
 +import javax.xml.parsers.DocumentBuilder;
 +import javax.xml.parsers.DocumentBuilderFactory;
 +import javax.xml.parsers.ParserConfigurationException;
 +
 +import org.apache.cloudstack.storage.to.PrimaryDataStoreTO;
 +import org.apache.cloudstack.storage.to.VolumeObjectTO;
 +import org.apache.cloudstack.utils.hypervisor.HypervisorUtils;
 +import org.apache.cloudstack.utils.linux.CPUStat;
 +import org.apache.cloudstack.utils.linux.MemStat;
 +import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
 +import org.apache.cloudstack.utils.security.KeyStoreUtils;
 +import org.apache.commons.io.FileUtils;
 +import org.apache.commons.io.IOUtils;
 +import org.apache.commons.lang.ArrayUtils;
 +import org.apache.commons.lang.math.NumberUtils;
 +import org.apache.log4j.Logger;
 +import org.joda.time.Duration;
 +import org.libvirt.Connect;
 +import org.libvirt.Domain;
 +import org.libvirt.DomainBlockStats;
 +import org.libvirt.DomainInfo;
 +import org.libvirt.DomainInfo.DomainState;
 +import org.libvirt.DomainInterfaceStats;
 +import org.libvirt.DomainSnapshot;
 +import org.libvirt.LibvirtException;
 +import org.libvirt.MemoryStatistic;
 +import org.libvirt.NodeInfo;
 +import org.w3c.dom.Document;
 +import org.w3c.dom.Element;
 +import org.w3c.dom.Node;
 +import org.w3c.dom.NodeList;
 +import org.xml.sax.InputSource;
 +import org.xml.sax.SAXException;
 +
 +import com.cloud.agent.api.Answer;
 +import com.cloud.agent.api.Command;
 +import com.cloud.agent.api.HostVmStateReportEntry;
 +import com.cloud.agent.api.PingCommand;
 +import com.cloud.agent.api.PingRoutingCommand;
 +import com.cloud.agent.api.PingRoutingWithNwGroupsCommand;
 +import com.cloud.agent.api.SetupGuestNetworkCommand;
 +import com.cloud.agent.api.StartupCommand;
 +import com.cloud.agent.api.StartupRoutingCommand;
 +import com.cloud.agent.api.StartupStorageCommand;
 +import com.cloud.agent.api.VmDiskStatsEntry;
 +import com.cloud.agent.api.VmNetworkStatsEntry;
 +import com.cloud.agent.api.VmStatsEntry;
 +import com.cloud.agent.api.routing.IpAssocCommand;
 +import com.cloud.agent.api.routing.IpAssocVpcCommand;
 +import com.cloud.agent.api.routing.NetworkElementCommand;
 +import com.cloud.agent.api.routing.SetSourceNatCommand;
 +import com.cloud.agent.api.to.DataStoreTO;
 +import com.cloud.agent.api.to.DataTO;
 +import com.cloud.agent.api.to.DiskTO;
 +import com.cloud.agent.api.to.IpAddressTO;
 +import com.cloud.agent.api.to.NfsTO;
 +import com.cloud.agent.api.to.NicTO;
 +import com.cloud.agent.api.to.VirtualMachineTO;
 +import com.cloud.agent.resource.virtualnetwork.VRScripts;
 +import com.cloud.agent.resource.virtualnetwork.VirtualRouterDeployer;
 +import com.cloud.agent.resource.virtualnetwork.VirtualRoutingResource;
 +import com.cloud.dc.Vlan;
 +import com.cloud.exception.InternalErrorException;
 +import com.cloud.host.Host.Type;
 +import com.cloud.hypervisor.Hypervisor.HypervisorType;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.ChannelDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.ClockDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.ConsoleDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.CpuModeDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.CpuTuneDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.DevicesDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.DiskDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.DiskDef.DeviceType;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.DiskDef.DiscardType;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.DiskDef.DiskProtocol;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.FeaturesDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.FilesystemDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.GraphicDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.GuestDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.GuestResourceDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.InputDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.InterfaceDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.InterfaceDef.GuestNetType;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.RngDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.RngDef.RngBackendModel;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.SCSIDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.SerialDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.TermPolicy;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.VideoDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef.WatchDogAction;
 +import com.cloud.hypervisor.kvm.resource.LibvirtVMDef.WatchDogDef.WatchDogModel;
 +import com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper;
 +import com.cloud.hypervisor.kvm.resource.wrapper.LibvirtUtilitiesHelper;
 +import com.cloud.hypervisor.kvm.storage.KVMPhysicalDisk;
 +import com.cloud.hypervisor.kvm.storage.KVMStoragePool;
 +import com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager;
 +import com.cloud.hypervisor.kvm.storage.KVMStorageProcessor;
 +import com.cloud.network.Networks.BroadcastDomainType;
 +import com.cloud.network.Networks.RouterPrivateIpStrategy;
 +import com.cloud.network.Networks.TrafficType;
 +import com.cloud.resource.ServerResource;
 +import com.cloud.resource.ServerResourceBase;
 +import com.cloud.storage.JavaStorageLayer;
 +import com.cloud.storage.Storage;
 +import com.cloud.storage.Storage.StoragePoolType;
 +import com.cloud.storage.StorageLayer;
 +import com.cloud.storage.Volume;
 +import com.cloud.storage.resource.StorageSubsystemCommandHandler;
 +import com.cloud.storage.resource.StorageSubsystemCommandHandlerBase;
 +import com.cloud.utils.ExecutionResult;
 +import com.cloud.utils.NumbersUtil;
 +import com.cloud.utils.Pair;
 +import com.cloud.utils.PropertiesUtil;
 +import com.cloud.utils.StringUtils;
 +import com.cloud.utils.Ternary;
 +import com.cloud.utils.exception.CloudRuntimeException;
 +import com.cloud.utils.net.NetUtils;
 +import com.cloud.utils.script.OutputInterpreter;
 +import com.cloud.utils.script.OutputInterpreter.AllLinesParser;
 +import com.cloud.utils.script.Script;
 +import com.cloud.utils.ssh.SshHelper;
 +import com.cloud.vm.VirtualMachine;
 +import com.cloud.vm.VirtualMachine.PowerState;
 +import com.cloud.vm.VmDetailConstants;
 +import com.google.common.base.Strings;
 +
 +/**
 + * LibvirtComputingResource executes requests on the computing/routing host using
 + * the libvirt API.
 + *
 + * @config
 + * {@table
 + * || Param Name            | Description                                                       | Values      | Default ||
 + * || hypervisor.type       | type of local hypervisor                                          | string      | kvm ||
 + * || hypervisor.uri        | local hypervisor to connect to                                    | URI         | qemu:///system ||
 + * || domr.arch             | instruction set for domr template                                 | string      | i686 ||
 + * || private.bridge.name   | private bridge where the domrs have their private interface       | string      | vmops0 ||
 + * || public.bridge.name    | public bridge where the domrs have their public interface         | string      | br0 ||
 + * || private.network.name  | name of the network where the domrs have their private interface | string      | vmops-private ||
 + * || private.ipaddr.start  | start of the range of private ip addresses for domrs              | ip address  | 192.168.166.128 ||
 + * || private.ipaddr.end    | end of the range of private ip addresses for domrs                | ip address  | start + 126 ||
 + * || private.macaddr.start | start of the range of private mac addresses for domrs             | mac address | 00:16:3e:77:e2:a0 ||
 + * || private.macaddr.end   | end of the range of private mac addresses for domrs               | mac address | start + 126 ||
 + * || pool                  | the parent of the storage pool hierarchy                          |             | ||
 + * }
 + **/
 +public class LibvirtComputingResource extends ServerResourceBase implements ServerResource, VirtualRouterDeployer {
 +    private static final Logger s_logger = Logger.getLogger(LibvirtComputingResource.class);
 +
 +    private String _modifyVlanPath;
 +    private String _versionstringpath;
 +    private String _patchViaSocketPath;
 +    private String _createvmPath;
 +    private String _manageSnapshotPath;
 +    private String _resizeVolumePath;
 +    private String _createTmplPath;
 +    private String _heartBeatPath;
 +    private String _vmActivityCheckPath;
 +    private String _securityGroupPath;
 +    private String _ovsPvlanDhcpHostPath;
 +    private String _ovsPvlanVmPath;
 +    private String _routerProxyPath;
 +    private String _ovsTunnelPath;
 +    private String _host;
 +    private String _dcId;
 +    private String _pod;
 +    private String _clusterId;
 +
 +    private long _hvVersion;
 +    private Duration _timeout;
 +    private static final int NUMMEMSTATS = 2;
 +
 +    private KVMHAMonitor _monitor;
 +    public static final String SSHKEYSPATH = "/root/.ssh";
 +    public static final String SSHPRVKEYPATH = SSHKEYSPATH + File.separator + "id_rsa.cloud";
 +    public static final String SSHPUBKEYPATH = SSHKEYSPATH + File.separator + "id_rsa.pub.cloud";
 +
 +    public static final String BASH_SCRIPT_PATH = "/bin/bash";
 +
 +    private String _mountPoint = "/mnt";
 +    private StorageLayer _storage;
 +    private KVMStoragePoolManager _storagePoolMgr;
 +
 +    private VifDriver _defaultVifDriver;
 +    private Map<TrafficType, VifDriver> _trafficTypeVifDrivers;
 +
 +    protected static final String DEFAULT_OVS_VIF_DRIVER_CLASS_NAME = "com.cloud.hypervisor.kvm.resource.OvsVifDriver";
 +    protected static final String DEFAULT_BRIDGE_VIF_DRIVER_CLASS_NAME = "com.cloud.hypervisor.kvm.resource.BridgeVifDriver";
 +
 +    protected HypervisorType _hypervisorType;
 +    protected String _hypervisorURI;
 +    protected long _hypervisorLibvirtVersion;
 +    protected long _hypervisorQemuVersion;
 +    protected String _hypervisorPath;
 +    protected String _hostDistro;
 +    protected String _networkDirectSourceMode;
 +    protected String _networkDirectDevice;
 +    protected String _sysvmISOPath;
 +    protected String _privNwName;
 +    protected String _privBridgeName;
 +    protected String _linkLocalBridgeName;
 +    protected String _publicBridgeName;
 +    protected String _guestBridgeName;
 +    protected String _privateIp;
 +    protected String _pool;
 +    protected String _localGateway;
 +    private boolean _canBridgeFirewall;
 +    protected String _localStoragePath;
 +    protected String _localStorageUUID;
 +    protected boolean _noMemBalloon = false;
 +    protected String _guestCpuMode;
 +    protected String _guestCpuModel;
 +    protected boolean _noKvmClock;
 +    protected String _videoHw;
 +    protected int _videoRam;
 +    protected Pair<Integer,Integer> hostOsVersion;
 +    protected int _migrateSpeed;
 +    protected int _migrateDowntime;
 +    protected int _migratePauseAfter;
 +    protected boolean _diskActivityCheckEnabled;
 +    protected long _diskActivityCheckFileSizeMin = 10485760; // 10MB
 +    protected int _diskActivityCheckTimeoutSeconds = 120; // 120s
 +    protected long _diskActivityInactiveThresholdMilliseconds = 30000; // 30s
 +    protected boolean _rngEnable = false;
 +    protected RngBackendModel _rngBackendModel = RngBackendModel.RANDOM;
 +    protected String _rngPath = "/dev/random";
 +    protected int _rngRatePeriod = 1000;
 +    protected int _rngRateBytes = 2048;
 +    private File _qemuSocketsPath;
 +    private final String _qemuGuestAgentSocketName = "org.qemu.guest_agent.0";
 +    private long _totalMemory;
 +    protected WatchDogAction _watchDogAction = WatchDogAction.NONE;
 +    protected WatchDogModel _watchDogModel = WatchDogModel.I6300ESB;
 +
 +    private final Map <String, String> _pifs = new HashMap<String, String>();
 +    private final Map<String, VmStats> _vmStats = new ConcurrentHashMap<String, VmStats>();
 +
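 +    // Mapping of libvirt domain states to CloudStack power states; PAUSED and BLOCKED domains are reported as PowerOn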
 +    protected static final HashMap<DomainState, PowerState> s_powerStatesTable;
 +    static {
 +        s_powerStatesTable = new HashMap<DomainState, PowerState>();
 +        s_powerStatesTable.put(DomainState.VIR_DOMAIN_SHUTOFF, PowerState.PowerOff);
 +        s_powerStatesTable.put(DomainState.VIR_DOMAIN_PAUSED, PowerState.PowerOn);
 +        s_powerStatesTable.put(DomainState.VIR_DOMAIN_RUNNING, PowerState.PowerOn);
 +        s_powerStatesTable.put(DomainState.VIR_DOMAIN_BLOCKED, PowerState.PowerOn);
 +        s_powerStatesTable.put(DomainState.VIR_DOMAIN_NOSTATE, PowerState.PowerUnknown);
 +        s_powerStatesTable.put(DomainState.VIR_DOMAIN_SHUTDOWN, PowerState.PowerOff);
 +    }
 +
 +    protected List<String> _vmsKilled = new ArrayList<String>();
 +
 +    private VirtualRoutingResource _virtRouterResource;
 +
 +    private String _pingTestPath;
 +
 +    private String _updateHostPasswdPath;
 +
 +    private long _dom0MinMem;
 +
 +    private long _dom0OvercommitMem;
 +
 +    protected boolean _disconnected = true;
 +    protected int _cmdsTimeout;
 +    protected int _stopTimeout;
 +    protected CPUStat _cpuStat = new CPUStat();
 +    protected MemStat _memStat = new MemStat();
 +
 +    private final LibvirtUtilitiesHelper libvirtUtilitiesHelper = new LibvirtUtilitiesHelper();
 +
 +    @Override
 +    public ExecutionResult executeInVR(final String routerIp, final String script, final String args) {
 +        return executeInVR(routerIp, script, args, _timeout);
 +    }
 +
 +    @Override
 +    public ExecutionResult executeInVR(final String routerIp, final String script, final String args, final Duration timeout) {
 +        final Script command = new Script(_routerProxyPath, timeout, s_logger);
 +        final AllLinesParser parser = new AllLinesParser();
 +        command.add(script);
 +        command.add(routerIp);
 +        if (args != null) {
 +            command.add(args);
 +        }
 +        String details = command.execute(parser);
 +        if (details == null) {
 +            details = parser.getLines();
 +        }
 +
 +        s_logger.debug("Executing script in VR: " + script);
 +
 +        return new ExecutionResult(command.getExitValue() == 0, details);
 +    }
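+
+    // Hedged usage sketch: the router access IP, script name, and argument
+    // below are illustrative values only.
+    //   final ExecutionResult r = executeInVR("169.254.3.2", "update_config.py", "vm_dhcp_entry.json");
+    //   if (!r.isSuccess()) {
+    //       s_logger.warn("VR script failed: " + r.getDetails());
+    //   }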
 +
 +    @Override
 +    public ExecutionResult createFileInVR(final String routerIp, final String path, final String filename, final String content) {
 +        final File permKey = new File("/root/.ssh/id_rsa.cloud");
 +        boolean success = true;
 +        String details = "Creating file in VR, with ip: " + routerIp + ", file: " + filename;
 +        s_logger.debug(details);
 +
 +        try {
 +            SshHelper.scpTo(routerIp, 3922, "root", permKey, null, path, content.getBytes(), filename, null);
 +        } catch (final Exception e) {
 +            s_logger.warn("Fail to create file " + path + filename + " in VR " + routerIp, e);
 +            details = e.getMessage();
 +            success = false;
 +        }
 +        return new ExecutionResult(success, details);
 +    }
 +
 +    @Override
 +    public ExecutionResult prepareCommand(final NetworkElementCommand cmd) {
 +        //Update IP used to access router
 +        cmd.setRouterAccessIp(cmd.getAccessDetail(NetworkElementCommand.ROUTER_IP));
 +        assert cmd.getRouterAccessIp() != null;
 +
 +        if (cmd instanceof IpAssocVpcCommand) {
 +            return prepareNetworkElementCommand((IpAssocVpcCommand)cmd);
 +        } else if (cmd instanceof IpAssocCommand) {
 +            return prepareNetworkElementCommand((IpAssocCommand)cmd);
 +        } else if (cmd instanceof SetupGuestNetworkCommand) {
 +            return prepareNetworkElementCommand((SetupGuestNetworkCommand)cmd);
 +        } else if (cmd instanceof SetSourceNatCommand) {
 +            return prepareNetworkElementCommand((SetSourceNatCommand)cmd);
 +        }
 +        return new ExecutionResult(true, null);
 +    }
 +
 +    @Override
 +    public ExecutionResult cleanupCommand(final NetworkElementCommand cmd) {
 +        if (cmd instanceof IpAssocCommand && !(cmd instanceof IpAssocVpcCommand)) {
 +            return cleanupNetworkElementCommand((IpAssocCommand)cmd);
 +        }
 +        return new ExecutionResult(true, null);
 +    }
 +
 +    public LibvirtUtilitiesHelper getLibvirtUtilitiesHelper() {
 +        return libvirtUtilitiesHelper;
 +    }
 +
 +    public CPUStat getCPUStat() {
 +        return _cpuStat;
 +    }
 +
 +    public MemStat getMemStat() {
 +        return _memStat;
 +    }
 +
 +    public VirtualRoutingResource getVirtRouterResource() {
 +        return _virtRouterResource;
 +    }
 +
 +    public String getPublicBridgeName() {
 +        return _publicBridgeName;
 +    }
 +
 +    public KVMStoragePoolManager getStoragePoolMgr() {
 +        return _storagePoolMgr;
 +    }
 +
 +    public String getPrivateIp() {
 +        return _privateIp;
 +    }
 +
 +    public int getMigrateDowntime() {
 +        return _migrateDowntime;
 +    }
 +
 +    public int getMigratePauseAfter() {
 +        return _migratePauseAfter;
 +    }
 +
 +    public int getMigrateSpeed() {
 +        return _migrateSpeed;
 +    }
 +
 +    public String getPingTestPath() {
 +        return _pingTestPath;
 +    }
 +
 +    public String getUpdateHostPasswdPath() {
 +        return _updateHostPasswdPath;
 +    }
 +
 +    public Duration getTimeout() {
 +        return _timeout;
 +    }
 +
 +    public String getOvsTunnelPath() {
 +        return _ovsTunnelPath;
 +    }
 +
 +    public KVMHAMonitor getMonitor() {
 +        return _monitor;
 +    }
 +
 +    public StorageLayer getStorage() {
 +        return _storage;
 +    }
 +
 +    public String createTmplPath() {
 +        return _createTmplPath;
 +    }
 +
 +    public int getCmdsTimeout() {
 +        return _cmdsTimeout;
 +    }
 +
 +    public String manageSnapshotPath() {
 +        return _manageSnapshotPath;
 +    }
 +
 +    public String getGuestBridgeName() {
 +        return _guestBridgeName;
 +    }
 +
 +    public String getVmActivityCheckPath() {
 +        return _vmActivityCheckPath;
 +    }
 +
 +    public String getOvsPvlanDhcpHostPath() {
 +        return _ovsPvlanDhcpHostPath;
 +    }
 +
 +    public String getOvsPvlanVmPath() {
 +        return _ovsPvlanVmPath;
 +    }
 +
 +    public String getResizeVolumePath() {
 +        return _resizeVolumePath;
 +    }
 +
 +    public StorageSubsystemCommandHandler getStorageHandler() {
 +        return storageHandler;
 +    }
 +
 +    private static final class KeyValueInterpreter extends OutputInterpreter {
 +        private final Map<String, String> map = new HashMap<String, String>();
 +
 +        @Override
 +        public String interpret(final BufferedReader reader) throws IOException {
 +            String line = null;
 +            int numLines = 0;
 +            while ((line = reader.readLine()) != null) {
 +                final String[] toks = line.trim().split("=");
 +                if (toks.length < 2) {
 +                    s_logger.warn("Failed to parse Script output: " + line);
 +                } else {
 +                    map.put(toks[0].trim(), toks[1].trim());
 +                }
 +                numLines++;
 +            }
 +            if (numLines == 0) {
 +                s_logger.warn("KeyValueInterpreter: no output lines?");
 +            }
 +            return null;
 +        }
 +
 +        public Map<String, String> getKeyValues() {
 +            return map;
 +        }
 +    }
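+
+    // For example, a script printing "version=1.2.3" on stdout yields
+    // getKeyValues() == {"version": "1.2.3"}; lines without '=' are skipped
+    // with a warning.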
 +
 +    @Override
 +    protected String getDefaultScriptsDir() {
 +        return null;
 +    }
 +
 +    protected List<String> _cpuFeatures;
 +
 +    protected enum BridgeType {
 +        NATIVE, OPENVSWITCH
 +    }
 +
 +    protected BridgeType _bridgeType;
 +
 +    protected StorageSubsystemCommandHandler storageHandler;
 +
 +    private String getEndIpFromStartIp(final String startIp, final int numIps) {
 +        final String[] tokens = startIp.split("[.]");
 +        assert tokens.length == 4;
 +        int lastbyte = Integer.parseInt(tokens[3]);
 +        lastbyte = lastbyte + numIps;
 +        tokens[3] = Integer.toString(lastbyte);
 +        final StringBuilder end = new StringBuilder(15);
 +        end.append(tokens[0]).append(".").append(tokens[1]).append(".").append(tokens[2]).append(".").append(tokens[3]);
 +        return end.toString();
 +    }
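+
+    // Worked example: getEndIpFromStartIp("192.168.166.128", 16) returns
+    // "192.168.166.144". Note the last octet is not range-checked, so callers
+    // are assumed to pass start addresses that leave room below .255.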
 +
 +    private Map<String, Object> getDeveloperProperties() throws ConfigurationException {
 +
 +        final File file = PropertiesUtil.findConfigFile("developer.properties");
 +        if (file == null) {
 +            throw new ConfigurationException("Unable to find developer.properties.");
 +        }
 +
 +        s_logger.info("developer.properties found at " + file.getAbsolutePath());
 +        try {
 +            final Properties properties = PropertiesUtil.loadFromFile(file);
 +
 +            final String startMac = (String)properties.get("private.macaddr.start");
 +            if (startMac == null) {
 +                throw new ConfigurationException("Developers must specify start mac for private ip range");
 +            }
 +
 +            final String startIp = (String)properties.get("private.ipaddr.start");
 +            if (startIp == null) {
 +                throw new ConfigurationException("Developers must specify start ip for private ip range");
 +            }
 +            final Map<String, Object> params = PropertiesUtil.toMap(properties);
 +
 +            String endIp = (String)properties.get("private.ipaddr.end");
 +            if (endIp == null) {
 +                endIp = getEndIpFromStartIp(startIp, 16);
 +                params.put("private.ipaddr.end", endIp);
 +            }
 +            return params;
 +        } catch (final FileNotFoundException ex) {
 +            throw new CloudRuntimeException("Cannot find the file: " + file.getAbsolutePath(), ex);
 +        } catch (final IOException ex) {
 +            throw new CloudRuntimeException("IOException in reading " + file.getAbsolutePath(), ex);
 +        }
 +    }
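+
+    // A minimal developer.properties sketch satisfying the checks above
+    // (addresses are illustrative, mirroring the fallbacks used later in
+    // configure()):
+    //   private.macaddr.start=00:16:3e:77:e2:a0
+    //   private.ipaddr.start=192.168.166.128
+    //   private.ipaddr.end=192.168.166.144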
 +
 +    protected String getDefaultNetworkScriptsDir() {
 +        return "scripts/vm/network/vnet";
 +    }
 +
 +    protected String getDefaultStorageScriptsDir() {
 +        return "scripts/storage/qcow2";
 +    }
 +
 +    protected String getDefaultHypervisorScriptsDir() {
 +        return "scripts/vm/hypervisor";
 +    }
 +
 +    protected String getDefaultKvmScriptsDir() {
 +        return "scripts/vm/hypervisor/kvm";
 +    }
 +
 +    protected String getDefaultDomrScriptsDir() {
 +        return "scripts/network/domr";
 +    }
 +
 +    protected String getNetworkDirectSourceMode() {
 +        return _networkDirectSourceMode;
 +    }
 +
 +    protected String getNetworkDirectDevice() {
 +        return _networkDirectDevice;
 +    }
 +
 +    @Override
 +    public boolean configure(final String name, final Map<String, Object> params) throws ConfigurationException {
 +        boolean success = super.configure(name, params);
 +        if (!success) {
 +            return false;
 +        }
 +
 +        _storage = new JavaStorageLayer();
 +        _storage.configure("StorageLayer", params);
 +
 +        String domrScriptsDir = (String)params.get("domr.scripts.dir");
 +        if (domrScriptsDir == null) {
 +            domrScriptsDir = getDefaultDomrScriptsDir();
 +        }
 +
 +        String hypervisorScriptsDir = (String)params.get("hypervisor.scripts.dir");
 +        if (hypervisorScriptsDir == null) {
 +            hypervisorScriptsDir = getDefaultHypervisorScriptsDir();
 +        }
 +
 +        String kvmScriptsDir = (String)params.get("kvm.scripts.dir");
 +        if (kvmScriptsDir == null) {
 +            kvmScriptsDir = getDefaultKvmScriptsDir();
 +        }
 +
 +        String networkScriptsDir = (String)params.get("network.scripts.dir");
 +        if (networkScriptsDir == null) {
 +            networkScriptsDir = getDefaultNetworkScriptsDir();
 +        }
 +
 +        String storageScriptsDir = (String)params.get("storage.scripts.dir");
 +        if (storageScriptsDir == null) {
 +            storageScriptsDir = getDefaultStorageScriptsDir();
 +        }
 +
 +        final String bridgeType = (String)params.get("network.bridge.type");
 +        if (bridgeType == null) {
 +            _bridgeType = BridgeType.NATIVE;
 +        } else {
 +            _bridgeType = BridgeType.valueOf(bridgeType.toUpperCase());
 +        }
 +
 +        params.put("domr.scripts.dir", domrScriptsDir);
 +
 +        _virtRouterResource = new VirtualRoutingResource(this);
 +        success = _virtRouterResource.configure(name, params);
 +
 +        if (!success) {
 +            return false;
 +        }
 +
 +        _host = (String)params.get("host");
 +        if (_host == null) {
 +            _host = "localhost";
 +        }
 +
 +        _dcId = (String)params.get("zone");
 +        if (_dcId == null) {
 +            _dcId = "default";
 +        }
 +
 +        _pod = (String)params.get("pod");
 +        if (_pod == null) {
 +            _pod = "default";
 +        }
 +
 +        _clusterId = (String)params.get("cluster");
 +
 +        _updateHostPasswdPath = Script.findScript(hypervisorScriptsDir, VRScripts.UPDATE_HOST_PASSWD);
 +        if (_updateHostPasswdPath == null) {
 +            throw new ConfigurationException("Unable to find update_host_passwd.sh");
 +        }
 +
 +        _modifyVlanPath = Script.findScript(networkScriptsDir, "modifyvlan.sh");
 +        if (_modifyVlanPath == null) {
 +            throw new ConfigurationException("Unable to find modifyvlan.sh");
 +        }
 +
 +        _versionstringpath = Script.findScript(kvmScriptsDir, "versions.sh");
 +        if (_versionstringpath == null) {
 +            throw new ConfigurationException("Unable to find versions.sh");
 +        }
 +
 +        _patchViaSocketPath = Script.findScript(kvmScriptsDir + "/patch/", "patchviasocket.py");
 +        if (_patchViaSocketPath == null) {
 +            throw new ConfigurationException("Unable to find patchviasocket.py");
 +        }
 +
 +        _heartBeatPath = Script.findScript(kvmScriptsDir, "kvmheartbeat.sh");
 +        if (_heartBeatPath == null) {
 +            throw new ConfigurationException("Unable to find kvmheartbeat.sh");
 +        }
 +
 +        _createvmPath = Script.findScript(storageScriptsDir, "createvm.sh");
 +        if (_createvmPath == null) {
 +            throw new ConfigurationException("Unable to find the createvm.sh");
 +        }
 +
 +        _manageSnapshotPath = Script.findScript(storageScriptsDir, "managesnapshot.sh");
 +        if (_manageSnapshotPath == null) {
 +            throw new ConfigurationException("Unable to find the managesnapshot.sh");
 +        }
 +
 +        _resizeVolumePath = Script.findScript(storageScriptsDir, "resizevolume.sh");
 +        if (_resizeVolumePath == null) {
 +            throw new ConfigurationException("Unable to find the resizevolume.sh");
 +        }
 +
 +        _vmActivityCheckPath = Script.findScript(kvmScriptsDir, "kvmvmactivity.sh");
 +        if (_vmActivityCheckPath == null) {
 +            throw new ConfigurationException("Unable to find kvmvmactivity.sh");
 +        }
 +
 +        _createTmplPath = Script.findScript(storageScriptsDir, "createtmplt.sh");
 +        if (_createTmplPath == null) {
 +            throw new ConfigurationException("Unable to find the createtmplt.sh");
 +        }
 +
 +        _securityGroupPath = Script.findScript(networkScriptsDir, "security_group.py");
 +        if (_securityGroupPath == null) {
 +            throw new ConfigurationException("Unable to find the security_group.py");
 +        }
 +
 +        _ovsTunnelPath = Script.findScript(networkScriptsDir, "ovstunnel.py");
 +        if (_ovsTunnelPath == null) {
 +            throw new ConfigurationException("Unable to find the ovstunnel.py");
 +        }
 +
 +        _routerProxyPath = Script.findScript("scripts/network/domr/", "router_proxy.sh");
 +        if (_routerProxyPath == null) {
 +            throw new ConfigurationException("Unable to find the router_proxy.sh");
 +        }
 +
 +        _ovsPvlanDhcpHostPath = Script.findScript(networkScriptsDir, "ovs-pvlan-dhcp-host.sh");
 +        if (_ovsPvlanDhcpHostPath == null) {
 +            throw new ConfigurationException("Unable to find the ovs-pvlan-dhcp-host.sh");
 +        }
 +
 +        _ovsPvlanVmPath = Script.findScript(networkScriptsDir, "ovs-pvlan-vm.sh");
 +        if (_ovsPvlanVmPath == null) {
 +            throw new ConfigurationException("Unable to find the ovs-pvlan-vm.sh");
 +        }
 +
 +        String value = (String)params.get("developer");
 +        final boolean isDeveloper = Boolean.parseBoolean(value);
 +
 +        if (isDeveloper) {
 +            params.putAll(getDeveloperProperties());
 +        }
 +
 +        _pool = (String)params.get("pool");
 +        if (_pool == null) {
 +            _pool = "/root";
 +        }
 +
 +        final String instance = (String)params.get("instance");
 +
 +        _hypervisorType = HypervisorType.getType((String)params.get("hypervisor.type"));
 +        if (_hypervisorType == HypervisorType.None) {
 +            _hypervisorType = HypervisorType.KVM;
 +        }
 +
 +        _hypervisorURI = (String)params.get("hypervisor.uri");
 +        if (_hypervisorURI == null) {
 +            _hypervisorURI = LibvirtConnection.getHypervisorURI(_hypervisorType.toString());
 +        }
 +
 +        _networkDirectSourceMode = (String)params.get("network.direct.source.mode");
 +        _networkDirectDevice = (String)params.get("network.direct.device");
 +
 +        String startMac = (String)params.get("private.macaddr.start");
 +        if (startMac == null) {
 +            startMac = "00:16:3e:77:e2:a0";
 +        }
 +
 +        String startIp = (String)params.get("private.ipaddr.start");
 +        if (startIp == null) {
 +            startIp = "192.168.166.128";
 +        }
 +
 +        _pingTestPath = Script.findScript(kvmScriptsDir, "pingtest.sh");
 +        if (_pingTestPath == null) {
 +            throw new ConfigurationException("Unable to find the pingtest.sh");
 +        }
 +
 +        _linkLocalBridgeName = (String)params.get("private.bridge.name");
 +        if (_linkLocalBridgeName == null) {
 +            if (isDeveloper) {
 +                _linkLocalBridgeName = "cloud-" + instance + "-0";
 +            } else {
 +                _linkLocalBridgeName = "cloud0";
 +            }
 +        }
 +
 +        _publicBridgeName = (String)params.get("public.network.device");
 +        if (_publicBridgeName == null) {
 +            _publicBridgeName = "cloudbr0";
 +        }
 +
 +        _privBridgeName = (String)params.get("private.network.device");
 +        if (_privBridgeName == null) {
 +            _privBridgeName = "cloudbr1";
 +        }
 +
 +        _guestBridgeName = (String)params.get("guest.network.device");
 +        if (_guestBridgeName == null) {
 +            _guestBridgeName = _privBridgeName;
 +        }
 +
 +        _privNwName = (String)params.get("private.network.name");
 +        if (_privNwName == null) {
 +            if (isDeveloper) {
 +                _privNwName = "cloud-" + instance + "-private";
 +            } else {
 +                _privNwName = "cloud-private";
 +            }
 +        }
 +
 +        _localStoragePath = (String)params.get("local.storage.path");
 +        if (_localStoragePath == null) {
 +            _localStoragePath = "/var/lib/libvirt/images/";
 +        }
 +
 +        /* Directory to use for Qemu sockets like for the Qemu Guest Agent */
 +        _qemuSocketsPath = new File("/var/lib/libvirt/qemu");
+        final String qemuSocketsPathVar = (String)params.get("qemu.sockets.path");
+        if (StringUtils.isNotBlank(qemuSocketsPathVar)) {
+            _qemuSocketsPath = new File(qemuSocketsPathVar);
+        }
 +
 +        final File storagePath = new File(_localStoragePath);
 +        _localStoragePath = storagePath.getAbsolutePath();
 +
 +        _localStorageUUID = (String)params.get("local.storage.uuid");
 +        if (_localStorageUUID == null) {
 +            _localStorageUUID = UUID.randomUUID().toString();
 +        }
 +
 +        value = (String)params.get("scripts.timeout");
 +        _timeout = Duration.standardSeconds(NumbersUtil.parseInt(value, 30 * 60));
 +
 +        value = (String)params.get("stop.script.timeout");
 +        _stopTimeout = NumbersUtil.parseInt(value, 120) * 1000;
 +
 +        value = (String)params.get("cmds.timeout");
 +        _cmdsTimeout = NumbersUtil.parseInt(value, 7200) * 1000;
 +
 +        value = (String) params.get("vm.memballoon.disable");
 +        if (Boolean.parseBoolean(value)) {
 +            _noMemBalloon = true;
 +        }
 +
 +        _videoHw = (String) params.get("vm.video.hardware");
 +        value = (String) params.get("vm.video.ram");
 +        _videoRam = NumbersUtil.parseInt(value, 0);
 +
 +        value = (String)params.get("host.reserved.mem.mb");
 +        // Reserve 1GB unless admin overrides
 +        _dom0MinMem = NumbersUtil.parseInt(value, 1024) * 1024 * 1024L;
 +
 +        value = (String)params.get("host.overcommit.mem.mb");
 +        // Support overcommit memory for host if host uses ZSWAP, KSM and other memory
 +        // compressing technologies
 +        _dom0OvercommitMem = NumbersUtil.parseInt(value, 0) * 1024 * 1024L;
 +
 +        value = (String) params.get("kvmclock.disable");
 +        if (Boolean.parseBoolean(value)) {
 +            _noKvmClock = true;
 +        }
 +
 +        value = (String) params.get("vm.rng.enable");
 +        if (Boolean.parseBoolean(value)) {
 +            _rngEnable = true;
 +
 +            value = (String) params.get("vm.rng.model");
 +            if (!Strings.isNullOrEmpty(value)) {
 +                _rngBackendModel = RngBackendModel.valueOf(value.toUpperCase());
 +            }
 +
 +            value = (String) params.get("vm.rng.path");
 +            if (!Strings.isNullOrEmpty(value)) {
 +                _rngPath = value;
 +            }
 +
 +            value = (String) params.get("vm.rng.rate.bytes");
 +            _rngRateBytes = NumbersUtil.parseInt(value, new Integer(_rngRateBytes));
 +
 +            value = (String) params.get("vm.rng.rate.period");
 +            _rngRatePeriod = NumbersUtil.parseInt(value, new Integer(_rngRatePeriod));
 +        }
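+
+        // Hedged agent.properties sketch for the virtio-rng keys parsed above
+        // (values illustrative):
+        //   vm.rng.enable=true
+        //   vm.rng.model=random
+        //   vm.rng.path=/dev/urandom
+        //   vm.rng.rate.bytes=2048
+        //   vm.rng.rate.period=1000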
 +
 +        value = (String) params.get("vm.watchdog.model");
 +        if (!Strings.isNullOrEmpty(value)) {
 +            _watchDogModel = WatchDogModel.valueOf(value.toUpperCase());
 +        }
 +
 +        value = (String) params.get("vm.watchdog.action");
 +        if (!Strings.isNullOrEmpty(value)) {
 +            _watchDogAction = WatchDogAction.valueOf(value.toUpperCase());
 +        }
 +
 +        LibvirtConnection.initialize(_hypervisorURI);
 +        Connect conn = null;
 +        try {
 +            conn = LibvirtConnection.getConnection();
 +
 +            if (_bridgeType == BridgeType.OPENVSWITCH) {
+                if (conn.getLibVirVersion() < 10 * 1000) { // 0.10.0 encodes as 10 * 1000
 +                    throw new ConfigurationException("Libvirt version 0.10.0 required for openvswitch support, but version " + conn.getLibVirVersion() + " detected");
 +                }
 +            }
 +        } catch (final LibvirtException e) {
 +            throw new CloudRuntimeException(e.getMessage());
 +        }
 +
 +        if (HypervisorType.KVM == _hypervisorType) {
 +            /* Does node support HVM guest? If not, exit */
 +            if (!IsHVMEnabled(conn)) {
 +                throw new ConfigurationException("NO HVM support on this machine, please make sure: " + "1. VT/SVM is supported by your CPU, or is enabled in BIOS. "
 +                        + "2. kvm modules are loaded (kvm, kvm_amd|kvm_intel)");
 +            }
 +        }
 +
 +        _hypervisorPath = getHypervisorPath(conn);
 +        try {
 +            _hvVersion = conn.getVersion();
 +            _hvVersion = _hvVersion % 1000000 / 1000;
 +            _hypervisorLibvirtVersion = conn.getLibVirVersion();
 +            _hypervisorQemuVersion = conn.getVersion();
 +        } catch (final LibvirtException e) {
 +            s_logger.trace("Ignoring libvirt error.", e);
 +        }
 +
 +        _guestCpuMode = (String)params.get("guest.cpu.mode");
 +        if (_guestCpuMode != null) {
 +            _guestCpuModel = (String)params.get("guest.cpu.model");
 +
 +            if (_hypervisorLibvirtVersion < 9 * 1000 + 10) {
 +                s_logger.warn("Libvirt version 0.9.10 required for guest cpu mode, but version " + prettyVersion(_hypervisorLibvirtVersion) +
 +                        " detected, so it will be disabled");
 +                _guestCpuMode = "";
 +                _guestCpuModel = "";
 +            }
 +            params.put("guest.cpu.mode", _guestCpuMode);
 +            params.put("guest.cpu.model", _guestCpuModel);
 +        }
 +
 +        final String cpuFeatures = (String)params.get("guest.cpu.features");
 +        if (cpuFeatures != null) {
 +            _cpuFeatures = new ArrayList<String>();
 +            for (final String feature: cpuFeatures.split(" ")) {
 +                if (!feature.isEmpty()) {
 +                    _cpuFeatures.add(feature);
 +                }
 +            }
 +        }
 +
 +        final String[] info = NetUtils.getNetworkParams(_privateNic);
 +
 +        _monitor = new KVMHAMonitor(null, info[0], _heartBeatPath);
 +        final Thread ha = new Thread(_monitor);
 +        ha.start();
 +
 +        _storagePoolMgr = new KVMStoragePoolManager(_storage, _monitor);
 +
 +        _sysvmISOPath = (String)params.get("systemvm.iso.path");
 +        if (_sysvmISOPath == null) {
 +            final String[] isoPaths = {"/usr/share/cloudstack-common/vms/systemvm.iso"};
 +            for (final String isoPath : isoPaths) {
 +                if (_storage.exists(isoPath)) {
 +                    _sysvmISOPath = isoPath;
 +                    break;
 +                }
 +            }
 +            if (_sysvmISOPath == null) {
 +                s_logger.debug("Can't find system vm ISO");
 +            }
 +        }
 +
 +        final Map<String, String> bridges = new HashMap<String, String>();
 +
 +        params.put("libvirt.host.bridges", bridges);
 +        params.put("libvirt.host.pifs", _pifs);
 +
 +        params.put("libvirt.computing.resource", this);
 +        params.put("libvirtVersion", _hypervisorLibvirtVersion);
+
 +        configureVifDrivers(params);
 +
 +        /*
 +        switch (_bridgeType) {
 +        case OPENVSWITCH:
 +            getOvsPifs();
 +            break;
 +        case NATIVE:
 +        default:
 +            getPifs();
 +            break;
 +        }
 +        */
 +
 +        if (_pifs.get("private") == null) {
 +            s_logger.debug("Failed to get private nic name");
 +            throw new ConfigurationException("Failed to get private nic name");
 +        }
 +
 +        if (_pifs.get("public") == null) {
 +            s_logger.debug("Failed to get public nic name");
 +            throw new ConfigurationException("Failed to get public nic name");
 +        }
 +        s_logger.debug("Found pif: " + _pifs.get("private") + " on " + _privBridgeName + ", pif: " + _pifs.get("public") + " on " + _publicBridgeName);
 +
 +        _canBridgeFirewall = canBridgeFirewall(_pifs.get("public"));
 +
 +        _localGateway = Script.runSimpleBashScript("ip route |grep default|awk '{print $3}'");
 +        if (_localGateway == null) {
 +            s_logger.debug("Failed to found the local gateway");
 +        }
 +
 +        _mountPoint = (String)params.get("mount.path");
 +        if (_mountPoint == null) {
 +            _mountPoint = "/mnt";
 +        }
 +
 +        value = (String) params.get("vm.migrate.downtime");
 +        _migrateDowntime = NumbersUtil.parseInt(value, -1);
 +
 +        value = (String) params.get("vm.migrate.pauseafter");
 +        _migratePauseAfter = NumbersUtil.parseInt(value, -1);
 +
 +        value = (String)params.get("vm.migrate.speed");
 +        _migrateSpeed = NumbersUtil.parseInt(value, -1);
 +        if (_migrateSpeed == -1) {
+            // get public network device speed
 +            _migrateSpeed = 0;
 +            final String speed = Script.runSimpleBashScript("ethtool " + _pifs.get("public") + " |grep Speed | cut -d \\  -f 2");
 +            if (speed != null) {
 +                final String[] tokens = speed.split("M");
 +                if (tokens.length == 2) {
 +                    try {
 +                        _migrateSpeed = Integer.parseInt(tokens[0]);
 +                    } catch (final NumberFormatException e) {
 +                        s_logger.trace("Ignoring migrateSpeed extraction error.", e);
 +                    }
 +                    s_logger.debug("device " + _pifs.get("public") + " has speed: " + String.valueOf(_migrateSpeed));
 +                }
 +            }
 +            params.put("vm.migrate.speed", String.valueOf(_migrateSpeed));
 +        }
 +
 +        bridges.put("linklocal", _linkLocalBridgeName);
 +        bridges.put("public", _publicBridgeName);
 +        bridges.put("private", _privBridgeName);
 +        bridges.put("guest", _guestBridgeName);
 +
 +        getVifDriver(TrafficType.Control).createControlNetwork(_linkLocalBridgeName);
 +
 +        configureDiskActivityChecks(params);
 +
 +        final KVMStorageProcessor storageProcessor = new KVMStorageProcessor(_storagePoolMgr, this);
 +        storageProcessor.configure(name, params);
 +        storageHandler = new StorageSubsystemCommandHandlerBase(storageProcessor);
 +
 +        return true;
 +    }
 +
 +    protected void configureDiskActivityChecks(final Map<String, Object> params) {
 +        _diskActivityCheckEnabled = Boolean.parseBoolean((String)params.get("vm.diskactivity.checkenabled"));
 +        if (_diskActivityCheckEnabled) {
 +            final int timeout = NumbersUtil.parseInt((String)params.get("vm.diskactivity.checktimeout_s"), 0);
 +            if (timeout > 0) {
 +                _diskActivityCheckTimeoutSeconds = timeout;
 +            }
 +            final long inactiveTime = NumbersUtil.parseLong((String)params.get("vm.diskactivity.inactivetime_ms"), 0L);
 +            if (inactiveTime > 0) {
 +                _diskActivityInactiveThresholdMilliseconds = inactiveTime;
 +            }
 +        }
 +    }
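+
+    // Hedged agent.properties sketch for the disk-activity keys parsed above
+    // (the timeout and inactive-time values match the field defaults;
+    // checkenabled defaults to false):
+    //   vm.diskactivity.checkenabled=true
+    //   vm.diskactivity.checktimeout_s=120
+    //   vm.diskactivity.inactivetime_ms=30000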
 +
 +    protected void configureVifDrivers(final Map<String, Object> params) throws ConfigurationException {
 +        final String LIBVIRT_VIF_DRIVER = "libvirt.vif.driver";
 +
 +        _trafficTypeVifDrivers = new HashMap<TrafficType, VifDriver>();
 +
 +        // Load the default vif driver
 +        String defaultVifDriverName = (String)params.get(LIBVIRT_VIF_DRIVER);
 +        if (defaultVifDriverName == null) {
 +            if (_bridgeType == BridgeType.OPENVSWITCH) {
 +                s_logger.info("No libvirt.vif.driver specified. Defaults to OvsVifDriver.");
 +                defaultVifDriverName = DEFAULT_OVS_VIF_DRIVER_CLASS_NAME;
 +            } else {
 +                s_logger.info("No libvirt.vif.driver specified. Defaults to BridgeVifDriver.");
 +                defaultVifDriverName = DEFAULT_BRIDGE_VIF_DRIVER_CLASS_NAME;
 +            }
 +        }
 +        _defaultVifDriver = getVifDriverClass(defaultVifDriverName, params);
 +
 +        // Load any per-traffic-type vif drivers
 +        for (final Map.Entry<String, Object> entry : params.entrySet()) {
 +            final String k = entry.getKey();
 +            final String vifDriverPrefix = LIBVIRT_VIF_DRIVER + ".";
 +
 +            if (k.startsWith(vifDriverPrefix)) {
 +                // Get trafficType
 +                final String trafficTypeSuffix = k.substring(vifDriverPrefix.length());
 +
 +                // Does this suffix match a real traffic type?
 +                final TrafficType trafficType = TrafficType.getTrafficType(trafficTypeSuffix);
 +                if (!trafficType.equals(TrafficType.None)) {
 +                    // Get vif driver class name
 +                    final String vifDriverClassName = (String)entry.getValue();
 +                    // if value is null, ignore
 +                    if (vifDriverClassName != null) {
 +                        // add traffic type to vif driver mapping to Map
 +                        _trafficTypeVifDrivers.put(trafficType, getVifDriverClass(vifDriverClassName, params));
 +                    }
 +                }
 +            }
 +        }
 +    }
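+
+    // Example (assumed) agent.properties entries this method recognizes; the
+    // fully qualified class names are illustrative:
+    //   libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.BridgeVifDriver
+    //   libvirt.vif.driver.Public=com.cloud.hypervisor.kvm.resource.OvsVifDriver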
 +
 +    protected VifDriver getVifDriverClass(final String vifDriverClassName, final Map<String, Object> params) throws ConfigurationException {
 +        VifDriver vifDriver;
 +
 +        try {
 +            final Class<?> clazz = Class.forName(vifDriverClassName);
 +            vifDriver = (VifDriver)clazz.newInstance();
 +            vifDriver.configure(params);
 +        } catch (final ClassNotFoundException e) {
 +            throw new ConfigurationException("Unable to find class for libvirt.vif.driver " + e);
 +        } catch (final InstantiationException e) {
 +            throw new ConfigurationException("Unable to instantiate class for libvirt.vif.driver " + e);
 +        } catch (final IllegalAccessException e) {
 +            throw new ConfigurationException("Unable to instantiate class for libvirt.vif.driver " + e);
 +        }
 +        return vifDriver;
 +    }
 +
 +    public VifDriver getVifDriver(final TrafficType trafficType) {
 +        VifDriver vifDriver = _trafficTypeVifDrivers.get(trafficType);
 +
 +        if (vifDriver == null) {
 +            vifDriver = _defaultVifDriver;
 +        }
 +
 +        return vifDriver;
 +    }
 +
 +    public VifDriver getVifDriver(final TrafficType trafficType, final String bridgeName) {
 +        VifDriver vifDriver = null;
 +
 +        for (VifDriver driver : getAllVifDrivers()) {
 +            if (driver.isExistingBridge(bridgeName)) {
 +                vifDriver = driver;
 +                break;
 +            }
 +        }
 +
 +        if (vifDriver == null) {
 +            vifDriver = getVifDriver(trafficType);
 +        }
 +
 +        return vifDriver;
 +    }
 +
 +    public List<VifDriver> getAllVifDrivers() {
 +        final Set<VifDriver> vifDrivers = new HashSet<VifDriver>();
 +
 +        vifDrivers.add(_defaultVifDriver);
 +        vifDrivers.addAll(_trafficTypeVifDrivers.values());
 +
 +        final ArrayList<VifDriver> vifDriverList = new ArrayList<VifDriver>(vifDrivers);
 +
 +        return vifDriverList;
 +    }
 +
 +    private void getPifs() {
 +        final File dir = new File("/sys/devices/virtual/net");
+        final File[] netdevs = dir.listFiles();
+        if (netdevs == null) {
+            s_logger.debug("failed to list files in " + dir.getAbsolutePath());
+            return;
+        }
+        final List<String> bridges = new ArrayList<String>();
+        for (int i = 0; i < netdevs.length; i++) {
 +            final File isbridge = new File(netdevs[i].getAbsolutePath() + "/bridge");
 +            final String netdevName = netdevs[i].getName();
 +            s_logger.debug("looking in file " + netdevs[i].getAbsolutePath() + "/bridge");
 +            if (isbridge.exists()) {
 +                s_logger.debug("Found bridge " + netdevName);
 +                bridges.add(netdevName);
 +            }
 +        }
 +
 +        for (final String bridge : bridges) {
 +            s_logger.debug("looking for pif for bridge " + bridge);
 +            final String pif = getPif(bridge);
 +            if (isPublicBridge(bridge)) {
 +                _pifs.put("public", pif);
 +            }
 +            if (isGuestBridge(bridge)) {
 +                _pifs.put("private", pif);
 +            }
 +            _pifs.put(bridge, pif);
 +        }
 +
+        // guest(private) traffic is normally bridged on a pif; if no private bridge was found, try the pif directly.
+        // This avoids requiring an unused bridge to be created just for the traffic label.
 +        if (_pifs.get("private") == null) {
 +            s_logger.debug("guest(private) traffic label '" + _guestBridgeName + "' not found as bridge, looking for physical interface");
 +            final File dev = new File("/sys/class/net/" + _guestBridgeName);
 +            if (dev.exists()) {
 +                s_logger.debug("guest(private) traffic label '" + _guestBridgeName + "' found as a physical device");
 +                _pifs.put("private", _guestBridgeName);
 +            }
 +        }
 +
+        // public traffic is normally bridged on a pif; if no public bridge was found, try the pif directly.
+        // This avoids requiring an unused bridge to be created just for the traffic label.
 +        if (_pifs.get("public") == null) {
 +            s_logger.debug("public traffic label '" + _publicBridgeName+ "' not found as bridge, looking for physical interface");
 +            final File dev = new File("/sys/class/net/" + _publicBridgeName);
 +            if (dev.exists()) {
 +                s_logger.debug("public traffic label '" + _publicBridgeName + "' found as a physical device");
 +                _pifs.put("public", _publicBridgeName);
 +            }
 +        }
 +
 +        s_logger.debug("done looking for pifs, no more bridges");
 +    }
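+
+    // Discovery above relies on the sysfs layout, e.g. (paths illustrative):
+    //   /sys/devices/virtual/net/cloudbr0/bridge     -> cloudbr0 is a bridge
+    //   /sys/devices/virtual/net/cloudbr0/brif/eth0  -> eth0 is its pif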
 +
 +    boolean isGuestBridge(String bridge) {
 +        return _guestBridgeName != null && bridge.equals(_guestBridgeName);
 +    }
 +
 +    private void getOvsPifs() {
+        final String cmdout = Script.runSimpleBashScript("ovs-vsctl list-br | sed '{:q;N;s/\\n/%/g;t q}'");
+        s_logger.debug("cmdout was " + cmdout);
+        if (cmdout == null) {
+            s_logger.debug("no bridges found by ovs-vsctl");
+            return;
+        }
+        final List<String> bridges = Arrays.asList(cmdout.split("%"));
 +        for (final String bridge : bridges) {
 +            s_logger.debug("looking for pif for bridge " + bridge);
 +            // String pif = getOvsPif(bridge);
 +            // Not really interested in the pif name at this point for ovs
 +            // bridges
 +            final String pif = bridge;
 +            if (isPublicBridge(bridge)) {
 +                _pifs.put("public", pif);
 +            }
 +            if (isGuestBridge(bridge)) {
 +                _pifs.put("private", pif);
 +            }
 +            _pifs.put(bridge, pif);
 +        }
 +        s_logger.debug("done looking for pifs, no more bridges");
 +    }
 +
 +    public boolean isPublicBridge(String bridge) {
 +        return _publicBridgeName != null && bridge.equals(_publicBridgeName);
 +    }
 +
 +    private String getPif(final String bridge) {
 +        String pif = matchPifFileInDirectory(bridge);
 +        final File vlanfile = new File("/proc/net/vlan/" + pif);
 +
 +        if (vlanfile.isFile()) {
 +            pif = Script.runSimpleBashScript("grep ^Device\\: /proc/net/vlan/" + pif + " | awk {'print $2'}");
 +        }
 +
 +        return pif;
 +    }
 +
 +    private String matchPifFileInDirectory(final String bridgeName) {
 +        final File brif = new File("/sys/devices/virtual/net/" + bridgeName + "/brif");
 +
 +        if (!brif.isDirectory()) {
 +            final File pif = new File("/sys/class/net/" + bridgeName);
 +            if (pif.isDirectory()) {
 +                // if bridgeName already refers to a pif, return it as-is
 +                return bridgeName;
 +            }
 +            s_logger.debug("failing to get physical interface from bridge " + bridgeName + ", does " + brif.getAbsolutePath() + "exist?");
 +            return "";
 +        }
 +
 +        final File[] interfaces = brif.listFiles();
 +
 +        for (int i = 0; i < interfaces.length; i++) {
 +            final String fname = interfaces[i].getName();
 +            s_logger.debug("matchPifFileInDirectory: file name '" + fname + "'");
 +            if (isInterface(fname)) {
 +                return fname;
 +            }
 +        }
 +
 +        s_logger.debug("failing to get physical interface from bridge " + bridgeName + ", did not find an eth*, bond*, team*, vlan*, em*, p*p*, ens*, eno*, enp*, or enx* in " + brif.getAbsolutePath());
 +        return "";
 +    }
 +
+    String[] _ifNamePatterns = {
 +            "^eth",
 +            "^bond",
 +            "^vlan",
 +            "^vx",
 +            "^em",
 +            "^ens",
 +            "^eno",
 +            "^enp",
 +            "^team",
 +            "^enx",
 +            "^p\\d+p\\d+"
 +    };
+    /**
+     * @param fname network device name to test
+     * @return true if the name matches one of the physical/bond/vlan interface patterns above
+     */
+    boolean isInterface(final String fname) {
+        final StringBuilder commonPattern = new StringBuilder();
+        for (final String ifNamePattern : _ifNamePatterns) {
+            if (commonPattern.length() > 0) {
+                commonPattern.append("|");
+            }
+            commonPattern.append("(").append(ifNamePattern).append(".*)");
+        }
+        return fname.matches(commonPattern.toString());
+    }
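+
+    // With the patterns above the assembled regex is
+    // "(^eth.*)|(^bond.*)|(^vlan.*)|...", so e.g. isInterface("eth0") and
+    // isInterface("p2p1") are true while isInterface("virbr0") is false.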
 +
 +    public boolean checkNetwork(final TrafficType trafficType, final String networkName) {
 +        if (networkName == null) {
 +            return true;
 +        }
 +
 +        if (getVifDriver(trafficType, networkName) instanceof OvsVifDriver) {
 +            return checkOvsNetwork(networkName);
 +        } else {
 +            return checkBridgeNetwork(networkName);
 +        }
 +    }
 +
+    private boolean checkBridgeNetwork(final String networkName) {
+        if (networkName == null) {
+            return true;
+        }
+
+        final String name = matchPifFileInDirectory(networkName);
+        return name != null && !name.isEmpty();
+    }
 +
 +    private boolean checkOvsNetwork(final String networkName) {
 +        s_logger.debug("Checking if network " + networkName + " exists as openvswitch bridge");
 +        if (networkName == null) {
 +            return true;
 +        }
 +
 +        final Script command = new Script("/bin/sh", _timeout);
 +        command.add("-c");
 +        command.add("ovs-vsctl br-exists " + networkName);
 +        return "0".equals(command.execute(null));
 +    }
 +
 +    public boolean passCmdLine(final String vmName, final String cmdLine) throws InternalErrorException {
 +        final Script command = new Script(_patchViaSocketPath, 5 * 1000, s_logger);
 +        String result;
 +        command.add("-n", vmName);
 +        command.add("-p", cmdLine.replaceAll(" ", "%"));
 +        result = command.execute();
 +        if (result != null) {
 +            s_logger.error("passcmd failed:" + result);
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    boolean isDirectAttachedNetwork(final String type) {
 +        if ("untagged".equalsIgnoreCase(type)) {
 +            return true;
 +        } else {
 +            try {
 +                Long.valueOf(type);
 +            } catch (final NumberFormatException e) {
 +                return true;
 +            }
 +            return false;
 +        }
 +    }
 +
+    public String startVM(final Connect conn, final String vmName, final String domainXML) throws LibvirtException, InternalErrorException {
+        /*
+            We create a transient domain here. When this method gets
+            called we receive a full XML specification of the guest,
+            so there is no need to define it as persistent.
+
+            This also makes sure we never have any old "garbage" defined
+            in libvirt which might haunt us.
+         */
+
+        // check for an existing inactive vm definition and remove it;
+        // this can sometimes happen during crashes, etc.
+        Domain dm = null;
+        try {
+            dm = conn.domainLookupByName(vmName);
+            if (dm != null && dm.isPersistent() == 1) {
+                // this is safe because it doesn't stop running VMs
+                dm.undefine();
+            }
+        } catch (final LibvirtException e) {
+            // this is what we want: no existing domain found
+        } finally {
+            if (dm != null) {
+                dm.free();
+            }
+        }
+
+        conn.domainCreateXML(domainXML, 0);
+        return null;
+    }
 +
 +    @Override
 +    public boolean stop() {
 +        try {
 +            final Connect conn = LibvirtConnection.getConnection();
 +            conn.close();
 +        } catch (final LibvirtException e) {
 +            s_logger.trace("Ignoring libvirt error.", e);
 +        }
 +
 +        return true;
 +    }
 +
 +    @Override
 +    public Answer executeRequest(final Command cmd) {
 +
 +        final LibvirtRequestWrapper wrapper = LibvirtRequestWrapper.getInstance();
 +        try {
 +            return wrapper.execute(cmd, this);
 +        } catch (final Exception e) {
 +            return Answer.createUnsupportedCommandAnswer(cmd);
 +        }
 +    }
 +
 +    public synchronized boolean destroyTunnelNetwork(final String bridge) {
 +        findOrCreateTunnelNetwork(bridge);
 +
 +        final Script cmd = new Script(_ovsTunnelPath, _timeout, s_logger);
 +        cmd.add("destroy_ovs_bridge");
 +        cmd.add("--bridge", bridge);
 +
 +        final String result = cmd.execute();
 +
 +        if (result != null) {
 +            s_logger.debug("OVS Bridge could not be destroyed due to error ==> " + result);
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    public synchronized boolean findOrCreateTunnelNetwork(final String nwName) {
 +        try {
 +            if (checkNetwork(TrafficType.Guest, nwName)) {
 +                return true;
 +            }
 +            // if not found, create a new one
 +            final Map<String, String> otherConfig = new HashMap<String, String>();
 +            otherConfig.put("ovs-host-setup", "");
 +            Script.runSimpleBashScript("ovs-vsctl -- --may-exist add-br "
 +                    + nwName + " -- set bridge " + nwName
 +                    + " other_config:ovs-host-setup='-1'");
 +            s_logger.debug("### KVM network for tunnels created:" + nwName);
 +        } catch (final Exception e) {
 +            s_logger.warn("createTunnelNetwork failed", e);
 +            return false;
 +        }
 +        return true;
 +    }
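+
+    // Note: "--may-exist" makes the add-br above idempotent, so concurrent or
+    // repeated calls for the same bridge are safe no-ops at the OVS level.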
 +
 +    public synchronized boolean configureTunnelNetwork(final long networkId,
 +            final long hostId, final String nwName) {
 +        try {
 +            final boolean findResult = findOrCreateTunnelNetwork(nwName);
 +            if (!findResult) {
 +                s_logger.warn("LibvirtComputingResource.findOrCreateTunnelNetwork() failed! Cannot proceed creating the tunnel.");
 +                return false;
 +            }
 +            final String configuredHosts = Script
 +                    .runSimpleBashScript("ovs-vsctl get bridge " + nwName
 +                            + " other_config:ovs-host-setup");
 +            boolean configured = false;
 +            if (configuredHosts != null) {
 +                final String hostIdsStr[] = configuredHosts.split(",");
 +                for (final String hostIdStr : hostIdsStr) {
 +                    if (hostIdStr.equals(((Long)hostId).toString())) {
 +                        configured = true;
 +                        break;
 +                    }
 +                }
 +            }
 +            if (!configured) {
 +                final Script cmd = new Script(_ovsTunnelPath, _timeout, s_logger);
 +                cmd.add("setup_ovs_bridge");
 +                cmd.add("--key", nwName);
 +                cmd.add("--cs_host_id", ((Long)hostId).toString());
 +                cmd.add("--bridge", nwName);
 +                final String result = cmd.execute();
 +                if (result != null) {
 +                    throw new CloudRuntimeException(
 +                            "Unable to pre-configure OVS bridge " + nwName
 +                            + " for network ID:" + networkId);
 +                }
 +            }
 +        } catch (final Exception e) {
 +            s_logger.warn("createandConfigureTunnelNetwork failed", e);
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    protected Storage.StorageResourceType getStorageResourceType() {
 +        return Storage.StorageResourceType.STORAGE_POOL;
 +    }
 +
 +    // this is much like PrimaryStorageDownloadCommand, but keeping it separate
 +    public KVMPhysicalDisk templateToPrimaryDownload(final String templateUrl, final KVMStoragePool primaryPool, final String volUuid) {
 +        final int index = templateUrl.lastIndexOf("/");
 +        final String mountpoint = templateUrl.substring(0, index);
 +        String templateName = null;
 +        if (index < templateUrl.length() - 1) {
 +            templateName = templateUrl.substring(index + 1);
 +        }
 +
 +        KVMPhysicalDisk templateVol = null;
 +        KVMStoragePool secondaryPool = null;
 +        try {
 +            secondaryPool = _storagePoolMgr.getStoragePoolByURI(mountpoint);
 +            /* Get template vol */
 +            if (templateName == null) {
 +                secondaryPool.refresh();
 +                final List<KVMPhysicalDisk> disks = secondaryPool.listPhysicalDisks();
 +                if (disks == null || disks.isEmpty()) {
 +                    s_logger.error("Failed to get volumes from pool: " + secondaryPool.getUuid());
 +                    return null;
 +                }
 +                for (final KVMPhysicalDisk disk : disks) {
 +                    if (disk.getName().endsWith("qcow2")) {
 +                        templateVol = disk;
 +                        break;
 +                    }
 +                }
 +                if (templateVol == null) {
 +                    s_logger.error("Failed to get template from pool: " + secondaryPool.getUuid());
 +                    return null;
 +                }
 +            } else {
 +                templateVol = secondaryPool.getPhysicalDisk(templateName);
 +            }
 +
 +            /* Copy volume to primary storage */
 +
 +            final KVMPhysicalDisk primaryVol = _storagePoolMgr.copyPhysicalDisk(templateVol, volUuid, primaryPool, 0);
 +            return primaryVol;
 +        } catch (final CloudRuntimeException e) {
 +            s_logger.error("Failed to download template to primary storage", e);
 +            return null;
 +        } finally {
 +            if (secondaryPool != null) {
 +                _storagePoolMgr.deleteStoragePool(secondaryPool.getType(), secondaryPool.getUuid());
 +            }
 +        }
 +    }
 +
 +    public String getResizeScriptType(final KVMStoragePool pool, final KVMPhysicalDisk vol) {
 +        final StoragePoolType poolType = pool.getType();
 +        final PhysicalDiskFormat volFormat = vol.getFormat();
 +
+        if (poolType == StoragePoolType.CLVM && volFormat == PhysicalDiskFormat.RAW) {
 +            return "CLVM";
 +        } else if ((poolType == StoragePoolType.NetworkFilesystem
 +                || poolType == StoragePoolType.SharedMountPoint
 +                || poolType == StoragePoolType.Filesystem
 +                || poolType == StoragePoolType.Gluster)
 +                && volFormat == PhysicalDiskFormat.QCOW2 ) {
 +            return "QCOW2";
 +        }
 +        throw new CloudRuntimeException("Cannot determine resize type from pool type " + pool.getType());
 +    }
 +
 +    private String getBroadcastUriFromBridge(final String brName) {
 +        final String pif = matchPifFileInDirectory(brName);
 +        final Pattern pattern = Pattern.compile("(\\D+)(\\d+)(\\D*)(\\d*)(\\D*)(\\d*)");
 +        final Matcher matcher = pattern.matcher(pif);
 +        s_logger.debug("getting broadcast uri for pif " + pif + " and bridge " + brName);
 +        if(matcher.find()) {
 +            if (brName.startsWith("brvx")){
 +                return BroadcastDomainType.Vxlan.toUri(matcher.group(2)).toString();
 +            }
 +            else{
 +                if (!matcher.group(6).isEmpty()) {
 +                    return BroadcastDomainType.Vlan.toUri(matcher.group(6)).toString();
 +                } else if (!matcher.group(4).isEmpty()) {
 +                    return BroadcastDomainType.Vlan.toUri(matcher.group(4)).toString();
 +                } else {
+                    // untagged, or not matching (eth|bond|team)#.#
+                    s_logger.debug("failed to get vNet id from bridge " + brName
+                            + " attached to physical interface " + pif + ", perhaps an untagged interface");
 +                    return "";
 +                }
 +            }
 +        } else {
 +            s_logger.debug("failed to get vNet id from bridge " + brName + "attached to physical interface" + pif);
 +            return "";
 +        }
 +    }
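+
+    // Worked examples for the pattern above (device names illustrative):
+    //   pif "eth0.200"  -> group(4) = "200"  -> vlan://200
+    //   pif "vxlan1234" on a "brvx..." bridge -> group(2) = "1234" -> vxlan://1234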
 +
 +    private void VifHotPlug(final Connect conn, final String vmName, final String broadcastUri, final String macAddr) throws InternalErrorException, LibvirtException {
 +        final NicTO nicTO = new NicTO();
 +        nicTO.setMac(macAddr);
 +        nicTO.setType(TrafficType.Public);
 +        if (broadcastUri == null) {
 +            nicTO.setBroadcastType(BroadcastDomainType.Native);
 +        } else {
 +            final URI uri = BroadcastDomainType.fromString(broadcastUri);
 +            nicTO.setBroadcastType(BroadcastDomainType.getSchemeValue(uri));
 +            nicTO.setBroadcastUri(uri);
 +        }
 +
 +        final Domain vm = getDomain(conn, vmName);
 +        vm.attachDevice(getVifDriver(nicTO.getType()).plug(nicTO, "Other PV", "").toString());
 +    }
+
+    private void vifHotUnPlug(final Connect conn, final String vmName, final String macAddr) throws InternalErrorException, LibvirtException {
+        final Domain vm = getDomain(conn, vmName);
 +        final List<InterfaceDef> pluggedNics = getInterfaces(conn, vmName);
 +        for (final InterfaceDef pluggedNic : pluggedNics) {
 +            if (pluggedNic.getMacAddress().equalsIgnoreCase(macAddr)) {
 +                vm.detachDevice(pluggedNic.toString());
 +                // We don't know which "traffic type" is associated with
 +                // each interface at this point, so inform all vif drivers
 +                for (final VifDriver vifDriver : getAllVifDrivers()) {
 +                    vifDriver.unplug(pluggedNic);
 +                }
 +            }
 +        }
 +    }
 +
 +    private ExecutionResult prepareNetworkElementCommand(final SetupGuestNetworkCommand cmd) {
 +        Connect conn;
 +        final NicTO nic = cmd.getNic();
 +        final String routerName = cmd.getAccessDetail(NetworkElementCommand.ROUTER_NAME);
 +
 +        try {
 +            conn = LibvirtConnection.getConnectionByVmName(routerName);
 +            final List<InterfaceDef> pluggedNics = getInterfaces(conn, routerName);
 +            InterfaceDef routerNic = null;
 +
 +            for (final InterfaceDef pluggedNic : pluggedNics) {
 +                if (pluggedNic.getMacAddress().equalsIgnoreCase(nic.getMac())) {
 +                    routerNic = pluggedNic;
 +                    break;
 +                }
 +            }
 +
 +            if (routerNic == null) {
 +                return new ExecutionResult(false, "Can not find nic with mac " + nic.getMac() + " for VM " + routerName);
 +            }
 +
 +            return new ExecutionResult(true, null);
 +        } catch (final LibvirtException e) {
 +            final String msg = "Creating guest network failed due to " + e.toString();
 +            s_logger.warn(msg, e);
 +            return new ExecutionResult(false, msg);
 +        }
 +    }
 +
 +    protected ExecutionResult prepareNetworkElementCommand(final SetSourceNatCommand cmd) {
 +        Connect conn;
+        final String routerName = cmd.getAccessDetail(NetworkElementCommand.ROUTER_NAME);
+        final IpAddressTO pubIP = cmd.getIpAddress();
 +
 +        try {
 +            conn = LibvirtConnection.getConnectionByVmName(routerName);
 +            Integer devNum = 0;
 +            final String pubVlan = pubIP.getBroadcastUri();
 +            final List<InterfaceDef> pluggedNics = getInterfaces(conn, routerName);
 +
 +            for (final InterfaceDef pluggedNic : pluggedNics) {
 +                final String pluggedVlanBr = pluggedNic.getBrName();
 +                final String pluggedVlanId = getBroadcastUriFromBridge(pluggedVlanBr);
 +                if (pubVlan.equalsIgnoreCase(Vlan.UNTAGGED) && pluggedVlanBr.equalsIgnoreCase(_publicBridgeName)) {
 +                    break;
 +                } else if (pluggedVlanBr.equalsIgnoreCase(_linkLocalBridgeName)) {
 +                    /*skip over, no physical bridge device exists*/
 +                } else if (pluggedVlanId == null) {
 +                    /*this should only be true in the case of link local bridge*/
 +                    return new ExecutionResult(false, "unable to find the vlan id for bridge " + pluggedVlanBr + " when attempting to set up" + pubVlan +
 +                            " on router " + routerName);
 +                } else if (pluggedVlanId.equals(pubVlan)) {
 +                    break;
 +                }
 +                devNum++;
 +            }
 +
 +            pubIP.setNicDevId(devNum);
 +
 +            return new ExecutionResult(true, "success");
 +        } catch (final LibvirtException e) {
 +            final String msg = "Ip SNAT failure due to " + e.toString();
 +            s_logger.error(msg, e);
 +            return new ExecutionResult(false, msg);
 +        }
 +    }
 +
 +    protected ExecutionResult prepareNetworkElementCommand(final IpAssocVpcCommand cmd) {
 +        Connect conn;
 +        final String routerName = cmd.getAccessDetail(NetworkElementCommand.ROUTER_NAME);
 +
 +        try {
 +            conn = getLibvirtUtilitiesHelper().getConnectionByVmName(routerName);
 +            final IpAddressTO[] ips = cmd.getIpAddresses();
 +            Integer devNum = 0;
 +            final List<InterfaceDef> pluggedNics = getInterfaces(conn, routerName);
 +            final Map<String, Integer> macAddressToNicNum = new HashMap<>(pluggedNics.size());
 +
+            for (final InterfaceDef pluggedNic : pluggedNics) {
+                macAddressToNicNum.put(pluggedNic.getMacAddress(), devNum);
+                devNum++;
+            }
 +
 +            for (final IpAddressTO ip : ips) {
 +                ip.setNicDevId(macAddressToNicNum.get(ip.getVifMacAddress()));
 +            }
 +
 +            return new ExecutionResult(true, null);
 +        } catch (final LibvirtException e) {
 +            s_logger.error("Ip Assoc failure on applying one ip due to exception: ", e);
 +            return new ExecutionResult(false, e.getMessage());
 +        }
 +    }
 +
 +    public ExecutionResult prepareNetworkElementCommand(final IpAssocCommand cmd) {
 +        final String routerName = cmd.getAccessDetail(NetworkElementCommand.ROUTER_NAME);
 +        final String routerIp = cmd.getAccessDetail(NetworkElementCommand.ROUTER_IP);
 +        Connect conn;
 +        try {
 +            conn = LibvirtConnection.getConnectionByVmName(routerName);
 +            final List<InterfaceDef> nics = getInterfaces(conn, routerName);
 +            final Map<String, Integer> broadcastUriAllocatedToVM = new HashMap<String, Integer>();
 +            Integer nicPos = 0;
 +            for (final InterfaceDef nic : nics) {
 +                if (nic.getBrName().equalsIgnoreCase(_linkLocalBridgeName)) {
 +                    broadcastUriAllocatedToVM.put("LinkLocal", nicPos);
 +                } else {
 +                    if (nic.getBrName().equalsIgnoreCase(_publicBridgeName) || nic.getBrName().equalsIgnoreCase(_privBridgeName) ||
 +                            nic.getBrName().equalsIgnoreCase(_guestBridgeName)) {
 +                        broadcastUriAllocatedToVM.put(BroadcastDomainType.Vlan.toUri(Vlan.UNTAGGED).toString(), nicPos);
 +                    } else {
 +                        final String broadcastUri = getBroadcastUriFromBridge(nic.getBrName());
 +                        broadcastUriAllocatedToVM.put(broadcastUri, nicPos);
 +                    }
 +                }
 +                nicPos++;
 +            }
 +            final IpAddressTO[] ips = cmd.getIpAddresses();
 +            int nicNum = 0;
 +            for (final IpAddressTO ip : ips) {
 +                boolean newNic = false;
 +                if (!broadcastUriAllocatedToVM.containsKey(ip.getBroadcastUri())) {
 +                    /* plug a vif into router */
 +                    VifHotPlug(conn, routerName, ip.getBroadcastUri(), ip.getVifMacAddress());
 +                    broadcastUriAllocatedToVM.put(ip.getBroadcastUri(), nicPos++);
 +                    newNic = true;
 +                }
 +                nicNum = broadcastUriAllocatedToVM.get(ip.getBroadcastUri());
 +                networkUsage(routerIp, "addVif", "eth" + nicNum);
 +
 +                ip.setNicDevId(nicNum);
 +                ip.setNewNic(newNic);
 +            }
 +            return new ExecutionResult(true, null);
 +        } catch (final LibvirtException e) {
 +            s_logger.error("ipassoccmd failed", e);
 +            return new ExecutionResult(false, e.getMessage());
 +        } catch (final InternalErrorException e) {
 +            s_logger.error("ipassoccmd failed", e);
 +            return new ExecutionResult(false, e.getMessage());
 +        }
 +    }
 +
 +    protected ExecutionResult cleanupNetworkElementCommand(final IpAssocCommand cmd) {
 +
 +        final String routerName = cmd.getAccessDetail(NetworkElementCommand.ROUTER_NAME);
 +        final String routerIp = cmd.getAccessDetail(NetworkElementCommand.ROUTER_IP);
 +        final String lastIp = cmd.getAccessDetail(NetworkElementCommand.NETWORK_PUB_LAST_IP);
 +        Connect conn;
 +
 +        try {
 +            conn = LibvirtConnection.getConnectionByVmName(routerName);
 +            final List<InterfaceDef> nics = getInterfaces(conn, routerName);
 +            final Map<String, Integer> broadcastUriAllocatedToVM = new HashMap<String, Integer>();
 +
 +            Integer nicPos = 0;
 +            for (final InterfaceDef nic : nics) {
 +                if (nic.getBrName().equalsIgnoreCase(_linkLocalBridgeName)) {
 +                    broadcastUriAllocatedToVM.put("LinkLocal", nicPos);
 +                } else {
 +                    if (nic.getBrName().equalsIgnoreCase(_publicBridgeName) || nic.getBrName().equalsIgnoreCase(_privBridgeName) ||
 +                            nic.getBrName().equalsIgnoreCase(_guestBridgeName)) {
 +                        broadcastUriAllocatedToVM.put(BroadcastDomainType.Vlan.toUri(Vlan.UNTAGGED).toString(), nicPos);
 +                    } else {
 +                        final String broadcastUri = getBroadcastUriFromBridge(nic.getBrName());
 +                        broadcastUriAllocatedToVM.put(broadcastUri, nicPos);
 +                    }
 +                }
 +                nicPos++;
 +            }
 +
 +            final IpAddressTO[] ips = cmd.getIpAddresses();
 +            int nicNum = 0;
 +            for (final IpAddressTO ip : ips) {
 +
 +                if (!broadcastUriAllocatedToVM.containsKey(ip.getBroadcastUri())) {
 +                    /* plug a vif into router */
 +                    VifHotPlug(conn, routerName, ip.getBroadcastUri(), ip.getVifMacAddress());
 +                    broadcastUriAllocatedToVM.put(ip.getBroadcastUri(), nicPos++);
 +                }
 +                nicNum = broadcastUriAllocatedToVM.get(ip.getBroadcastUri());
 +
 +                if (org.apache.commons.lang.StringUtils.equalsIgnoreCase(lastIp, "true") && !ip.isAdd()) {
 +                    // in an isolated network, eth2 is the default public interface; we don't want to delete it
 +                    if (nicNum != 2) {
 +                        vifHotUnPlug(conn, routerName, ip.getVifMacAddress());
 +                        networkUsage(routerIp, "deleteVif", "eth" + nicNum);
 +                    }
 +                }
 +            }
 +
 +        } catch (final LibvirtException e) {
 +            s_logger.error("ipassoccmd failed", e);
 +            return new ExecutionResult(false, e.getMessage());
 +        } catch (final InternalErrorException e) {
 +            s_logger.error("ipassoccmd failed", e);
 +            return new ExecutionResult(false, e.getMessage());
 +        }
 +
 +        return new ExecutionResult(true, null);
 +    }
 +
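 +    // Map a libvirt DomainState onto the CloudStack PowerState; states without
 +    // an explicit mapping are reported as PowerUnknown.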
 +    protected PowerState convertToPowerState(final DomainState ps) {
 +        final PowerState state = s_powerStatesTable.get(ps);
 +        return state == null ? PowerState.PowerUnknown : state;
 +    }
 +
 +    public PowerState getVmState(final Connect conn, final String vmName) {
 +        int retry = 3;
 +        Domain vms = null;
 +        while (retry-- > 0) {
 +            try {
 +                vms = conn.domainLookupByName(vmName);
 +                final PowerState s = convertToPowerState(vms.getInfo().state);
 +                return s;
 +            } catch (final LibvirtException e) {
 +                s_logger.warn("Can't get vm state for " + vmName + ": " + e.getMessage() + ", retry: " + retry);
 +            } finally {
 +                try {
 +                    if (vms != null) {
 +                        vms.free();
 +                    }
 +                } catch (final LibvirtException l) {
 +                    s_logger.trace("Ignoring libvirt error.", l);
 +                }
 +            }
 +        }
 +        return PowerState.PowerOff;
 +    }
 +
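 +    // Invoke netusage.sh on the virtual router (via the router proxy script) to
 +    // create, read, reset, or update per-VIF traffic accounting.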
 +    public String networkUsage(final String privateIpAddress, final String option, final String vif) {
 +        final Script getUsage = new Script(_routerProxyPath, s_logger);
 +        getUsage.add("netusage.sh");
 +        getUsage.add(privateIpAddress);
 +        if (option.equals("get")) {
 +            getUsage.add("-g");
 +        } else if (option.equals("create")) {
 +            getUsage.add("-c");
 +        } else if (option.equals("reset")) {
 +            getUsage.add("-r");
 +        } else if (option.equals("addVif")) {
 +            getUsage.add("-a", vif);
 +        } else if (option.equals("deleteVif")) {
 +            getUsage.add("-d", vif);
 +        }
 +
 +        final OutputInterpreter.OneLineParser usageParser = new OutputInterpreter.OneLineParser();
 +        final String result = getUsage.execute(usageParser);
 +        if (result != null) {
 +            s_logger.debug("Failed to execute networkUsage: " + result);
 +            return null;
 +        }
 +        return usageParser.getLine();
 +    }
 +
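 +    // Parse the colon-separated counters returned by the usage script, summing
 +    // alternating values into two totals (received and sent bytes, assuming the
 +    // script's rx:tx output format).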
 +    public long[] getNetworkStats(final String privateIP) {
 +        final String result = networkUsage(privateIP, "get", null);
 +        final long[] stats = new long[2];
 +        if (result != null) {
 +            final String[] splitResult = result.split(":");
 +            int i = 0;
 +            while (i < splitResult.length - 1) {
 +                stats[0] += Long.parseLong(splitResult[i++]);
 +                stats[1] += Long.parseLong(splitResult[i++]);
 +            }
 +        }
 +        return stats;
 +    }
 +
 +    public String configureVPCNetworkUsage(final String privateIpAddress, final String publicIp, final String option, final String vpcCIDR) {
 +        final Script getUsage = new Script(_routerProxyPath, s_logger);
 +        getUsage.add("vpc_netusage.sh");
 +        getUsage.add(privateIpAddress);
 +        getUsage.add("-l", publicIp);
 +
 +        if (option.equals("get")) {
 +            getUsage.add("-g");
 +        } else if (option.equals("create")) {
 +            getUsage.add("-c");
 +            getUsage.add("-v", vpcCIDR);
 +        } else if (option.equals("reset")) {
 +            getUsage.add("-r");
 +        } else if (option.equals("vpn")) {
 +            getUsage.add("-n");
 +        } else if (option.equals("remove")) {
 +            getUsage.add("-d");
 +        }
 +
 +        final OutputInterpreter.OneLineParser usageParser = new OutputInterpreter.OneLineParser();
 +        final String result = getUsage.execute(usageParser);
 +        if (result != null) {
 +            s_logger.debug("Failed to execute VPCNetworkUsage: " + result);
 +            return null;
 +        }
 +        return usageParser.getLine();
 +    }
 +
 +    public long[] getVPCNetworkStats(final String privateIP, final String publicIp, final String option) {
 +        final String result = configureVPCNetworkUsage(privateIP, publicIp, option, null);
 +        final long[] stats = new long[2];
 +        if (result != null) {
 +            final String[] splitResult = result.split(":");
 +            int i = 0;
 +            while (i < splitResult.length - 1) {
 +                stats[0] += Long.parseLong(splitResult[i++]);
 +                stats[1] += Long.parseLong(splitResult[i++]);
 +            }
 +        }
 +        return stats;
 +    }
 +
 +    public void handleVmStartFailure(final Connect conn, final String vmName, final LibvirtVMDef vm) {
 +        if (vm != null && vm.getDevices() != null) {
 +            cleanupVMNetworks(conn, vm.getDevices().getInterfaces());
 +        }
 +    }
 +
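 +    // Return the given UUID if it is well-formed and canonical; otherwise
 +    // generate a fresh random UUID for the domain definition.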
 +    protected String getUuid(String uuid) {
 +        if (uuid == null) {
 +            uuid = UUID.randomUUID().toString();
 +        } else {
 +            try {
 +                final UUID uuid2 = UUID.fromString(uuid);
 +                final String uuid3 = uuid2.toString();
 +                if (!uuid3.equals(uuid)) {
 +                    uuid = UUID.randomUUID().toString();
 +                }
 +            } catch (final IllegalArgumentException e) {
 +                uuid = UUID.randomUUID().toString();
 +            }
 +        }
 +        return uuid;
 +    }
 +
 +    /**
 +     * Set quota and period tags on 'ctd' when CPU limit use is set
 +     */
 +    protected void setQuotaAndPeriod(VirtualMachineTO vmTO, CpuTuneDef ctd) {
 +        if (vmTO.getLimitCpuUse() && vmTO.getCpuQuotaPercentage() != null) {
 +            Double cpuQuotaPercentage = vmTO.getCpuQuotaPercentage();
 +            int period = CpuTuneDef.DEFAULT_PERIOD;
 +            int quota = (int) (period * cpuQuotaPercentage);
 +            if (quota < CpuTuneDef.MIN_QUOTA) {
 +                s_logger.info("Calculated quota (" + quota + ") below the minimum (" + CpuTuneDef.MIN_QUOTA + ") for VM domain " + vmTO.getUuid() + ", setting it to minimum " +
 +                        "and calculating period instead of using the default");
 +                quota = CpuTuneDef.MIN_QUOTA;
 +                period = (int) ((double) quota / cpuQuotaPercentage);
 +                if (period > CpuTuneDef.MAX_PERIOD) {
 +                    s_logger.info("Calculated period (" + period + ") exceeds the maximum (" + CpuTuneDef.MAX_PERIOD +
 +                            "), setting it to the maximum");
 +                    period = CpuTuneDef.MAX_PERIOD;
 +                }
 +            }
 +            ctd.setQuota(quota);
 +            ctd.setPeriod(period);
 +            s_logger.info("Setting quota=" + quota + ", period=" + period + " to VM domain " + vmTO.getUuid());
 +        }
 +    }
 +
 +    public LibvirtVMDef createVMFromSpec(final VirtualMachineTO vmTO) {
 +        final LibvirtVMDef vm = new LibvirtVMDef();
 +        vm.setDomainName(vmTO.getName());
 +        String uuid = vmTO.getUuid();
 +        uuid = getUuid(uuid);
 +        vm.setDomUUID(uuid);
 +        vm.setDomDescription(vmTO.getOs());
 +        vm.setPlatformEmulator(vmTO.getPlatformEmulator());
 +
 +        final GuestDef guest = new GuestDef();
 +
 +        if (HypervisorType.LXC == _hypervisorType && VirtualMachine.Type.User == vmTO.getType()) {
 +            // LXC domain is only valid for user VMs. Use KVM for system VMs.
 +            guest.setGuestType(GuestDef.GuestType.LXC);
 +            vm.setHvsType(HypervisorType.LXC.toString().toLowerCase());
 +        } else {
 +            guest.setGuestType(GuestDef.GuestType.KVM);
 +            vm.setHvsType(HypervisorType.KVM.toString().toLowerCase());
 +            vm.setLibvirtVersion(_hypervisorLibvirtVersion);
 +            vm.setQemuVersion(_hypervisorQemuVersion);
 +        }
 +        guest.setGuestArch(vmTO.getArch());
 +        guest.setMachineType("pc");
 +        guest.setUuid(uuid);
 +        guest.setBootOrder(GuestDef.BootOrder.CDROM);
 +        guest.setBootOrder(GuestDef.BootOrder.HARDISK);
 +
 +        vm.addComp(guest);
 +
 +        final GuestResourceDef grd = new GuestResourceDef();
 +
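 +        // Enable the memory balloon only when min and max RAM differ (dynamic
 +        // memory scaling) and ballooning has not been disabled in the agent
 +        // configuration.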
 +        if (vmTO.getMinRam() != vmTO.getMaxRam() && !_noMemBalloon) {
 +            grd.setMemBalloning(true);
 +            grd.setCurrentMem(vmTO.getMinRam() / 1024);
 +            grd.setMemorySize(vmTO.getMaxRam() / 1024);
 +        } else {
 +            grd.setMemorySize(vmTO.getMaxRam() / 1024);
 +        }
 +        final int vcpus = vmTO.getCpus();
 +        grd.setVcpuNum(vcpus);
 +        vm.addComp(grd);
 +
 +        final CpuModeDef cmd = new CpuModeDef();
 +        cmd.setMode(_guestCpuMode);
 +        cmd.setModel(_guestCpuModel);
 +        if (vmTO.getType() == VirtualMachine.Type.User) {
 +            cmd.setFeatures(_cpuFeatures);
 +        }
 +        // multiple cores per socket, for larger core configurations
 +        if (vcpus % 6 == 0) {
 +            final int sockets = vcpus / 6;
 +            cmd.setTopology(6, sockets);
 +        } else if (vcpus % 4 == 0) {
 +            final int sockets = vcpus / 4;
 +            cmd.setTopology(4, sockets);
 +        }
 +        vm.addComp(cmd);
 +
 +        if (_hypervisorLibvirtVersion >= 9000) {
 +            final CpuTuneDef ctd = new CpuTuneDef();
 +            /**
 +             A 4.0.X/4.1.X management server doesn't send the correct JSON
 +             command for getMinSpeed; it only sends a 'speed' field.
 +
 +             So if getMinSpeed() returns null we fall back to getSpeed().
 +
 +             This way a >4.1 agent can still communicate with a <=4.1 management server.
 +
 +             This change is due to the overcommit feature in 4.2
 +             */
 +            if (vmTO.getMinSpeed() != null) {
 +                ctd.setShares(vmTO.getCpus() * vmTO.getMinSpeed());
 +            } else {
 +                ctd.setShares(vmTO.getCpus() * vmTO.getSpeed());
 +            }
 +
 +            setQuotaAndPeriod(vmTO, ctd);
 +
 +            vm.addComp(ctd);
 +        }
 +
 +        final FeaturesDef features = new FeaturesDef();
 +        features.addFeatures("pae");
 +        features.addFeatures("apic");
 +        features.addFeatures("acpi");
 +        // for RHEL 6.5 and above, the hyperv enlightenment feature is added
 +        /*
 +         * if (vmTO.getOs().contains("Windows Server 2008") && hostOsVersion != null && ((hostOsVersion.first() == 6 && hostOsVersion.second() >= 5) || (hostOsVersion.first() >= 7))) {
 +         *    LibvirtVMDef.HyperVEnlightenmentFeatureDef hyv = new LibvirtVMDef.HyperVEnlightenmentFeatureDef();
 +         *    hyv.setRelaxed(true);
 +         *    features.addHyperVFeature(hyv);
 +         * }
 +         */
 +        vm.addComp(features);
 +
 +        final TermPolicy term = new TermPolicy();
 +        term.setCrashPolicy("destroy");
 +        term.setPowerOffPolicy("destroy");
 +        term.setRebootPolicy("restart");
 +        vm.addComp(term);
 +
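 +        // Windows guests expect the RTC in localtime with catchup; system VMs and
 +        // PV-enabled guests get the kvmclock timer on new enough libvirt.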
 +        final ClockDef clock = new ClockDef();
 +        if (vmTO.getOs().startsWith("Windows")) {
 +            clock.setClockOffset(ClockDef.ClockOffset.LOCALTIME);
 +            clock.setTimer("rtc", "catchup", null);
 +        } else if (vmTO.getType() != VirtualMachine.Type.User || isGuestPVEnabled(vmTO.getOs())) {
 +            if (_hypervisorLibvirtVersion >= 9 * 1000 + 10) {
 +                clock.setTimer("kvmclock", null, null, _noKvmClock);
 +            }
 +        }
 +
 +        vm.addComp(clock);
 +
 +        final DevicesDef devices = new DevicesDef();
 +        devices.setEmulatorPath(_hypervisorPath);
 +        devices.setGuestType(guest.getGuestType());
 +
 +        final SerialDef serial = new SerialDef("pty", null, (short)0);
 +        devices.addDevice(serial);
 +
 +        /* Add a VirtIO channel for SystemVMs for communication and provisioning */
 +        if (vmTO.getType() != VirtualMachine.Type.User) {
 +            devices.addDevice(new ChannelDef(vmTO.getName() + ".vport", ChannelDef.ChannelType.UNIX,
 +                              new File(_qemuSocketsPath + "/" + vmTO.getName() + ".agent")));
 +        }
 +
 +        if (_rngEnable) {
 +            final RngDef rngDevice = new RngDef(_rngPath, _rngBackendModel, _rngRateBytes, _rngRatePeriod);
 +            devices.addDevice(rngDevice);
 +        }
 +
 +        /* Add a VirtIO channel for the Qemu Guest Agent tools */
 +        devices.addDevice(new ChannelDef(_qemuGuestAgentSocketName, ChannelDef.ChannelType.UNIX,
 +                          new File(_qemuSocketsPath + "/" + vmTO.getName() + "." + _qemuGuestAgentSocketName)));
 +
 +        devices.addDevice(new WatchDogDef(_watchDogAction, _watchDogModel));
 +
 +        final VideoDef videoCard = new VideoDef(_videoHw, _videoRam);
 +        devices.addDevice(videoCard);
 +
 +        final ConsoleDef console = new ConsoleDef("pty", null, null, (short)0);
 +        devices.addDevice(console);
 +
 +        // add the VNC port and password here; the password comes from the vmInstance
 +        final String passwd = vmTO.getVncPassword();
 +        final GraphicDef grap = new GraphicDef("vnc", (short)0, true, vmTO.getVncAddr(), passwd, null);
 +        devices.addDevice(grap);
 +
 +        final InputDef input = new InputDef("tablet", "usb");
 +        devices.addDevice(input);
 +
 +
 +        DiskDef.DiskBus busT = getDiskModelFromVMDetail(vmTO);
 +
 +        if (busT == null) {
 +            busT = getGuestDiskModel(vmTO.getPlatformEmulator());
 +        }
 +
 +        // If we're using virtio scsi, then we need to add a virtual scsi controller
 +        if (busT == DiskDef.DiskBus.SCSI) {
 +            final SCSIDef sd = new SCSIDef((short)0, 0, 0, 9, 0);
 +            devices.addDevice(sd);
 +        }
 +
 +        vm.addComp(devices);
 +
 +        return vm;
 +    }
 +
 +    public void createVifs(final VirtualMachineTO vmSpec, final LibvirtVMDef vm) throws InternalErrorException, LibvirtException {
 +        final NicTO[] nics = vmSpec.getNics();
 +        final Map <String, String> params = vmSpec.getDetails();
 +        String nicAdapter = "";
 +        if (params != null && params.get("nicAdapter") != null && !params.get("nicAdapter").isEmpty()) {
 +            nicAdapter = params.get("nicAdapter");
 +        }
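 +        // Define VIFs in deviceId order so the interfaces appear in the guest in
 +        // the order the management server assigned them.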
 +        for (int i = 0; i < nics.length; i++) {
 +            for (final NicTO nic : nics) {
 +                if (nic.getDeviceId() == i) {
 +                    createVif(vm, nic, nicAdapter);
 +                }
 +            }
 +        }
 +    }
 +
 +    public String getVolumePath(final Connect conn, final DiskTO volume) throws LibvirtException, URISyntaxException {
 +        final DataTO data = volume.getData();
 +        final DataStoreTO store = data.getDataStore();
 +
-         if (volume.getType() == Volume.Type.ISO && data.getPath() != null) {
-             final NfsTO nfsStore = (NfsTO)store;
-             final String isoPath = nfsStore.getUrl() + File.separator + data.getPath();
++        if (volume.getType() == Volume.Type.ISO && data.getPath() != null && (store instanceof NfsTO || store instanceof PrimaryDataStoreTO)) {
++            final String isoPath = store.getUrl().split("\\?")[0] + File.separator + data.getPath();
 +            final int index = isoPath.lastIndexOf("/");
 +            final String path = isoPath.substring(0, index);
 +            final String name = isoPath.substring(index + 1);
 +            final KVMStoragePool secondaryPool = _storagePoolMgr.getStoragePoolByURI(path);
 +            final KVMPhysicalDisk isoVol = secondaryPool.getPhysicalDisk(name);
 +            return isoVol.getPath();
 +        } else {
 +            return data.getPath();
 +        }
 +    }
 +
 +    public void createVbd(final Connect conn, final VirtualMachineTO vmSpec, final String vmName, final LibvirtVMDef vm) throws InternalErrorException, LibvirtException, URISyntaxException {
 +        final List<DiskTO> disks = Arrays.asList(vmSpec.getDisks());
 +        Collections.sort(disks, new Comparator<DiskTO>() {
 +            @Override
 +            public int compare(final DiskTO arg0, final DiskTO arg1) {
 +                return arg0.getDiskSeq() > arg1.getDiskSeq() ? 1 : -1;
 +            }
 +        });
 +
 +        for (final DiskTO volume : disks) {
 +            KVMPhysicalDisk physicalDisk = null;
 +            KVMStoragePool pool = null;
 +            final DataTO data = volume.getData();
 +            if (volume.getType() == Volume.Type.ISO && data.getPath() != null) {
 +                DataStoreTO dataStore = data.getDataStore();
 +                String dataStoreUrl = null;
 +                if (dataStore instanceof NfsTO) {
 +                    NfsTO nfsStore = (NfsTO)data.getDataStore();
 +                    dataStoreUrl = nfsStore.getUrl();
 +                } else if (dataStore instanceof PrimaryDataStoreTO && ((PrimaryDataStoreTO) dataStore).getPoolType().equals(StoragePoolType.NetworkFilesystem)) {
 +                    //In order to support directly downloaded ISOs
 +                    String psHost = ((PrimaryDataStoreTO) dataStore).getHost();
 +                    String psPath = ((PrimaryDataStoreTO) dataStore).getPath();
 +                    dataStoreUrl = "nfs://" + psHost + File.separator + psPath;
 +                }
 +                final String volPath = dataStoreUrl + File.separator + data.getPath();
 +                final int index = volPath.lastIndexOf("/");
 +                final String volDir = volPath.substring(0, index);
 +                final String volName = volPath.substring(index + 1);
 +                final KVMStoragePool secondaryStorage = _storagePoolMgr.getStoragePoolByURI(volDir);
 +                physicalDisk = secondaryStorage.getPhysicalDisk(volName);
 +            } else if (volume.getType() != Volume.Type.ISO) {
 +                final PrimaryDataStoreTO store = (PrimaryDataStoreTO)data.getDataStore();
 +                physicalDisk = _storagePoolMgr.getPhysicalDisk(store.getPoolType(), store.getUuid(), data.getPath());
 +                pool = physicalDisk.getPool();
 +            }
 +
 +            String volPath = null;
 +            if (physicalDisk != null) {
 +                volPath = physicalDisk.getPath();
 +            }
 +
 +            // check for disk activity, if detected we should exit because vm is running elsewhere
 +            if (_diskActivityCheckEnabled && physicalDisk != null && physicalDisk.getFormat() == PhysicalDiskFormat.QCOW2) {
 +                s_logger.debug("Checking physical disk file at path " + volPath + " for disk activity to ensure vm is not running elsewhere");
 +                try {
 +                    HypervisorUtils.checkVolumeFileForActivity(volPath, _diskActivityCheckTimeoutSeconds, _diskActivityInactiveThresholdMilliseconds, _diskActivityCheckFileSizeMin);
 +                } catch (final IOException ex) {
 +                    throw new CloudRuntimeException("Unable to check physical disk file for activity", ex);
 +                }
 +                s_logger.debug("Disk activity check cleared");
 +            }
 +
 +            // if params contains a rootDiskController key, use its value (this is what other HVs are doing)
 +            DiskDef.DiskBus diskBusType = getDiskModelFromVMDetail(vmSpec);
 +
 +            if (diskBusType == null) {
 +                diskBusType = getGuestDiskModel(vmSpec.getPlatformEmulator());
 +            }
 +
 +            // I'm not sure why certain DATADISKs were previously hard-coded to VIRTIO and others not;
 +            // this maintains the existing behavior, except that SCSI will now override VIRTIO.
 +            DiskDef.DiskBus diskBusTypeData = (diskBusType == DiskDef.DiskBus.SCSI) ? diskBusType : DiskDef.DiskBus.VIRTIO;
 +
 +            final DiskDef disk = new DiskDef();
 +            int devId = volume.getDiskSeq().intValue();
 +            if (volume.getType() == Volume.Type.ISO) {
 +                if (volPath == null) {
 +                    /* Add iso as placeholder */
 +                    disk.defISODisk(null, devId);
 +                } else {
 +                    disk.defISODisk(volPath, devId);
 +                }
 +            } else {
 +                if (diskBusType == DiskDef.DiskBus.SCSI ) {
 +                    disk.setQemuDriver(true);
 +                    disk.setDiscard(DiscardType.UNMAP);
 +                }
 +
 +                if (pool.getType() == StoragePoolType.RBD) {
 +                    /*
 +                     * For RBD pools we use the secret mechanism in libvirt.
 +                     * We store the secret under the UUID of the pool, which is why
 +                     * we pass the pool's UUID as the authSecret.
 +                     */
 +                    disk.defNetworkBasedDisk(physicalDisk.getPath().replace("rbd:", ""), pool.getSourceHost(), pool.getSourcePort(), pool.getAuthUserName(),
 +                            pool.getUuid(), devId, diskBusType, DiskProtocol.RBD, DiskDef.DiskFmtType.RAW);
 +                } else if (pool.getType() == StoragePoolType.Gluster) {
 +                    final String mountpoint = pool.getLocalPath();
 +                    final String path = physicalDisk.getPath();
 +                    final String glusterVolume = pool.getSourceDir().replace("/", "");
 +                    disk.defNetworkBasedDisk(glusterVolume + path.replace(mountpoint, ""), pool.getSourceHost(), pool.getSourcePort(), null,
 +                            null, devId, diskBusType, DiskProtocol.GLUSTER, DiskDef.DiskFmtType.QCOW2);
 +                } else if (pool.getType() == StoragePoolType.CLVM || physicalDisk.getFormat() == PhysicalDiskFormat.RAW) {
 +                    disk.defBlockBasedDisk(physicalDisk.getPath(), devId, diskBusType);
 +                } else {
 +                    if (volume.getType() == Volume.Type.DATADISK) {
 +                        disk.defFileBasedDisk(physicalDisk.getPath(), devId, diskBusTypeData, DiskDef.DiskFmtType.QCOW2);
 +                    } else {
 +                        disk.defFileBasedDisk(physicalDisk.getPath(), devId, diskBusType, DiskDef.DiskFmtType.QCOW2);
 +                    }
 +
 +                }
 +
 +            }
 +
 +            if (data instanceof VolumeObjectTO) {
 +                final VolumeObjectTO volumeObjectTO = (VolumeObjectTO)data;
 +                disk.setSerial(diskUuidToSerial(volumeObjectTO.getUuid()));
 +                if (volumeObjectTO.getBytesReadRate() != null && volumeObjectTO.getBytesReadRate() > 0) {
 +                    disk.setBytesReadRate(volumeObjectTO.getBytesReadRate());
 +                }
 +                if (volumeObjectTO.getBytesWriteRate() != null && volumeObjectTO.getBytesWriteRate() > 0) {
 +                    disk.setBytesWriteRate(volumeObjectTO.getBytesWriteRate());
 +                }
 +                if (volumeObjectTO.getIopsReadRate() != null && volumeObjectTO.getIopsReadRate() > 0) {
 +                    disk.setIopsReadRate(volumeObjectTO.getIopsReadRate());
 +                }
 +                if (volumeObjectTO.getIopsWriteRate() != null && volumeObjectTO.getIopsWriteRate() > 0) {
 +                    disk.setIopsWriteRate(volumeObjectTO.getIopsWriteRate());
 +                }
 +                if (volumeObjectTO.getCacheMode() != null) {
 +                    disk.setCacheMode(DiskDef.DiskCacheMode.valueOf(volumeObjectTO.getCacheMode().toString().toUpperCase()));
 +                }
 +            }
 +            if (vm.getDevices() == null) {
 +                s_logger.error("There are no devices for " + vm);
 +                throw new RuntimeException("There are no devices for " + vm);
 +            }
 +            vm.getDevices().addDevice(disk);
 +        }
 +
 +        if (vmSpec.getType() != VirtualMachine.Type.User) {
 +            if (_sysvmISOPath != null) {
 +                final DiskDef iso = new DiskDef();
 +                iso.defISODisk(_sysvmISOPath);
 +                vm.getDevices().addDevice(iso);
 +            }
 +        }
 +
 +        // For LXC, find and add the root filesystem, rbd data disks
 +        if (HypervisorType.LXC.toString().toLowerCase().equals(vm.getHvsType())) {
 +            for (final DiskTO volume : disks) {
 +                final DataTO data = volume.getData();
 +                final PrimaryDataStoreTO store = (PrimaryDataStoreTO)data.getDataStore();
 +                if (volume.getType() == Volume.Type.ROOT) {
 +                    final KVMPhysicalDisk physicalDisk = _storagePoolMgr.getPhysicalDisk(store.getPoolType(), store.getUuid(), data.getPath());
 +                    final FilesystemDef rootFs = new FilesystemDef(physicalDisk.getPath(), "/");
 +                    vm.getDevices().addDevice(rootFs);
 +                } else if (volume.getType() == Volume.Type.DATADISK) {
 +                    final KVMPhysicalDisk physicalDisk = _storagePoolMgr.getPhysicalDisk(store.getPoolType(), store.getUuid(), data.getPath());
 +                    final KVMStoragePool pool = physicalDisk.getPool();
 +                    if(StoragePoolType.RBD.equals(pool.getType())) {
 +                        final int devId = volume.getDiskSeq().intValue();
 +                        final String device = mapRbdDevice(physicalDisk);
 +                        if (device != null) {
 +                            s_logger.debug("RBD device on host is: " + device);
 +                            final DiskDef diskdef = new DiskDef();
 +                            diskdef.defBlockBasedDisk(device, devId, DiskDef.DiskBus.VIRTIO);
 +                            diskdef.setQemuDriver(false);
 +                            vm.getDevices().addDevice(diskdef);
 +                        } else {
 +                            throw new InternalErrorException("Error while mapping RBD device on host");
 +                        }
 +                    }
 +                }
 +            }
 +        }
 +
 +    }
 +
 +    private void createVif(final LibvirtVMDef vm, final NicTO nic, final String nicAdapter) throws InternalErrorException, LibvirtException {
 +
 +        if (nic.getType().equals(TrafficType.Guest) && nic.getBroadcastType().equals(BroadcastDomainType.Vsp)) {
 +            String vrIp = nic.getBroadcastUri().getPath().substring(1);
 +            vm.getMetaData().getMetadataNode(LibvirtVMDef.NuageExtensionDef.class).addNuageExtension(nic.getMac(), vrIp);
 +
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("NIC with MAC " + nic.getMac() + " and BroadcastDomainType " + nic.getBroadcastType() + " in network(" + nic.getGateway() + "/" + nic.getNetmask()
 +                        + ") is " + nic.getType() + " traffic type. So, vsp-vr-ip " + vrIp + " is set in the metadata");
 +            }
 +        }
 +        if (vm.getDevices() == null) {
 +            s_logger.error("getDevices() returned null for the LibvirtVMDef object");
 +            throw new InternalErrorException("getDevices() returned null for the LibvirtVMDef object");
 +        }
 +        vm.getDevices().addDevice(getVifDriver(nic.getType(), nic.getName()).plug(nic, vm.getPlatformEmulator(), nicAdapter));
 +    }
 +
 +    public boolean cleanupDisk(Map<String, String> volumeToDisconnect) {
 +        return _storagePoolMgr.disconnectPhysicalDisk(volumeToDisconnect);
 +    }
 +
 +    public boolean cleanupDisk(final DiskDef disk) {
 +        final String path = disk.getDiskPath();
 +
 +        if (path == null) {
 +            s_logger.debug("Unable to clean up disk with null path (perhaps empty cdrom drive): " + disk);
 +            return false;
 +        }
 +
 +        if (path.endsWith("systemvm.iso")) {
 +            // no need to clean up the system VM ISO as it's stored on local storage
 +            return true;
 +        }
 +
 +        return _storagePoolMgr.disconnectPhysicalDiskByPath(path);
 +    }
 +
 +    protected KVMStoragePoolManager getPoolManager() {
 +        return _storagePoolMgr;
 +    }
 +
 +    public synchronized String attachOrDetachISO(final Connect conn, final String vmName, String isoPath, final boolean isAttach, final Integer diskSeq) throws LibvirtException, URISyntaxException,
 +    InternalErrorException {
 +        final DiskDef iso = new DiskDef();
 +        if (isoPath != null && isAttach) {
 +            final int index = isoPath.lastIndexOf("/");
 +            final String path = isoPath.substring(0, index);
 +            final String name = isoPath.substring(index + 1);
 +            final KVMStoragePool secondaryPool = _storagePoolMgr.getStoragePoolByURI(path);
 +            final KVMPhysicalDisk isoVol = secondaryPool.getPhysicalDisk(name);
 +            isoPath = isoVol.getPath();
 +
 +            iso.defISODisk(isoPath, diskSeq);
 +        } else {
 +            iso.defISODisk(null, diskSeq);
 +        }
 +
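 +        // Note that attach is always true here: redefining the CDROM device with
 +        // an empty source (the else branch above) is what ejects the ISO. On a
 +        // successful detach the backing physical disk is disconnected below.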
 +        final String result = attachOrDetachDevice(conn, true, vmName, iso.toString());
 +        if (result == null && !isAttach) {
 +            final List<DiskDef> disks = getDisks(conn, vmName);
 +            for (final DiskDef disk : disks) {
 +                if (disk.getDeviceType() == DiskDef.DeviceType.CDROM
 +                        && (diskSeq == null || org.apache.commons.lang.StringUtils.equals(disk.getDiskLabel(), iso.getDiskLabel()))) {
 +                    cleanupDisk(disk);
 +                }
 +            }
 +
 +        }
 +        return result;
 +    }
 +
 +    public synchronized String attachOrDetachDisk(final Connect conn,
 +            final boolean attach, final String vmName, final KVMPhysicalDisk attachingDisk,
 +            final int devId, final Long bytesReadRate, final Long bytesWriteRate, final Long iopsReadRate, final Long iopsWriteRate, final String cacheMode) throws LibvirtException, InternalErrorException {
 +        List<DiskDef> disks = null;
 +        Domain dm = null;
 +        DiskDef diskdef = null;
 +        final KVMStoragePool attachingPool = attachingDisk.getPool();
 +        try {
 +            dm = conn.domainLookupByName(vmName);
 +            final LibvirtDomainXMLParser parser = new LibvirtDomainXMLParser();
 +            final String domXml = dm.getXMLDesc(0);
 +            parser.parseDomainXML(domXml);
 +            disks = parser.getDisks();
 +
 +            if (!attach) {
 +                for (final DiskDef disk : disks) {
 +                    final String file = disk.getDiskPath();
 +                    if (file != null && file.equalsIgnoreCase(attachingDisk.getPath())) {
 +                        diskdef = disk;
 +                        break;
 +                    }
 +                }
 +                if (diskdef == null) {
 +                    throw new InternalErrorException("disk: " + attachingDisk.getPath() + " is not attached to VM " + vmName);
 +                }
 +            } else {
 +                DiskDef.DiskBus busT = DiskDef.DiskBus.VIRTIO;
 +                for (final DiskDef disk : disks) {
 +                    if (disk.getDeviceType() == DeviceType.DISK) {
 +                        if (disk.getBusType() == DiskDef.DiskBus.SCSI) {
 +                            busT = DiskDef.DiskBus.SCSI;
 +                        }
 +                        break;
 +                    }
 +                }
 +
 +                diskdef = new DiskDef();
 +                if (busT == DiskDef.DiskBus.SCSI) {
 +                    diskdef.setQemuDriver(true);
 +                    diskdef.setDiscard(DiscardType.UNMAP);
 +                }
 +                if (attachingPool.getType() == StoragePoolType.RBD) {
 +                    diskdef.defNetworkBasedDisk(attachingDisk.getPath(), attachingPool.getSourceHost(), attachingPool.getSourcePort(), attachingPool.getAuthUserName(),
 +                            attachingPool.getUuid(), devId, busT, DiskProtocol.RBD, DiskDef.DiskFmtType.RAW);
 +                } else if (attachingPool.getType() == StoragePoolType.Gluster) {
 +                    diskdef.defNetworkBasedDisk(attachingDisk.getPath(), attachingPool.getSourceHost(), attachingPool.getSourcePort(), null,
 +                            null, devId, busT, DiskProtocol.GLUSTER, DiskDef.DiskFmtType.QCOW2);
 +                } else if (attachingDisk.getFormat() == PhysicalDiskFormat.QCOW2) {
 +                    diskdef.defFileBasedDisk(attachingDisk.getPath(), devId, busT, DiskDef.DiskFmtType.QCOW2);
 +                } else if (attachingDisk.getFormat() == PhysicalDiskFormat.RAW) {
 +                    diskdef.defBlockBasedDisk(attachingDisk.getPath(), devId, busT);
 +                }
 +                if (bytesReadRate != null && bytesReadRate > 0) {
 +                    diskdef.setBytesReadRate(bytesReadRate);
 +                }
 +                if (bytesWriteRate != null && bytesWriteRate > 0) {
 +                    diskdef.setBytesWriteRate(bytesWriteRate);
 +                }
 +                if (iopsReadRate != null && iopsReadRate > 0) {
 +                    diskdef.setIopsReadRate(iopsReadRate);
 +                }
 +                if (iopsWriteRate != null && iopsWriteRate > 0) {
 +                    diskdef.setIopsWriteRate(iopsWriteRate);
 +                }
 +
 +                if (cacheMode != null) {
 +                    diskdef.setCacheMode(DiskDef.DiskCacheMode.valueOf(cacheMode.toUpperCase()));
 +                }
 +            }
 +
 +            final String xml = diskdef.toString();
 +            return attachOrDetachDevice(conn, attach, vmName, xml);
 +        } finally {
 +            if (dm != null) {
 +                dm.free();
 +            }
 +        }
 +    }
 +
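 +    // Hot (un)plug a device on a running domain from its XML definition; returns
 +    // null on success and rethrows the libvirt error after logging it.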
 +    protected synchronized String attachOrDetachDevice(final Connect conn, final boolean attach, final String vmName, final String xml) throws LibvirtException, InternalErrorException {
 +        Domain dm = null;
 +        try {
 +            dm = conn.domainLookupByName(vmName);
 +            if (attach) {
 +                s_logger.debug("Attaching device: " + xml);
 +                dm.attachDevice(xml);
 +            } else {
 +                s_logger.debug("Detaching device: " + xml);
 +                dm.detachDevice(xml);
 +            }
 +        } catch (final LibvirtException e) {
 +            if (attach) {
 +                s_logger.warn("Failed to attach device to " + vmName + ": " + e.getMessage());
 +            } else {
 +                s_logger.warn("Failed to detach device from " + vmName + ": " + e.getMessage());
 +            }
 +            throw e;
 +        } finally {
 +            if (dm != null) {
 +                try {
 +                    dm.free();
 +                } catch (final LibvirtException l) {
 +                    s_logger.trace("Ignoring libvirt error.", l);
 +                }
 +            }
 +        }
 +
 +        return null;
 +    }
 +
 +    @Override
 +    public PingCommand getCurrentStatus(final long id) {
 +
 +        if (!_canBridgeFirewall) {
 +            return new PingRoutingCommand(com.cloud.host.Host.Type.Routing, id, this.getHostVmStateReport());
 +        } else {
 +            final HashMap<String, Pair<Long, Long>> nwGrpStates = syncNetworkGroups(id);
 +            return new PingRoutingWithNwGroupsCommand(getType(), id, this.getHostVmStateReport(), nwGrpStates);
 +        }
 +    }
 +
 +    @Override
 +    public Type getType() {
 +        return Type.Routing;
 +    }
 +
 +    private Map<String, String> getVersionStrings() {
 +        final Script command = new Script(_versionstringpath, _timeout, s_logger);
 +        final KeyValueInterpreter kvi = new KeyValueInterpreter();
 +        final String result = command.execute(kvi);
 +        if (result == null) {
 +            return kvi.getKeyValues();
 +        } else {
 +            return new HashMap<String, String>(1);
 +        }
 +    }
 +
 +    @Override
 +    public StartupCommand[] initialize() {
 +
 +        final List<Object> info = getHostInfo();
 +        _totalMemory = (Long)info.get(2);
 +
 +        final StartupRoutingCommand cmd =
 +                new StartupRoutingCommand((Integer)info.get(0), (Long)info.get(1), (Long)info.get(2), (Long)info.get(4), (String)info.get(3), _hypervisorType,
 +                        RouterPrivateIpStrategy.HostLocal);
 +        cmd.setCpuSockets((Integer)info.get(5));
 +        fillNetworkInformation(cmd);
 +        _privateIp = cmd.getPrivateIpAddress();
 +        cmd.getHostDetails().putAll(getVersionStrings());
 +        cmd.getHostDetails().put(KeyStoreUtils.SECURED, String.valueOf(isHostSecured()).toLowerCase());
 +        cmd.setPool(_pool);
 +        cmd.setCluster(_clusterId);
 +        cmd.setGatewayIpAddress(_localGateway);
 +        cmd.setIqn(getIqn());
 +
 +        if (cmd.getHostDetails().containsKey("Host.OS")) {
 +            _hostDistro = cmd.getHostDetails().get("Host.OS");
 +        }
 +
 +        StartupStorageCommand sscmd = null;
 +        try {
 +
 +            final KVMStoragePool localStoragePool = _storagePoolMgr.createStoragePool(_localStorageUUID, "localhost", -1, _localStoragePath, "", StoragePoolType.Filesystem);
 +            final com.cloud.agent.api.StoragePoolInfo pi =
 +                    new com.cloud.agent.api.StoragePoolInfo(localStoragePool.getUuid(), cmd.getPrivateIpAddress(), _localStoragePath, _localStoragePath,
 +                            StoragePoolType.Filesystem, localStoragePool.getCapacity(), localStoragePool.getAvailable());
 +
 +            sscmd = new StartupStorageCommand();
 +            sscmd.setPoolInfo(pi);
 +            sscmd.setGuid(pi.getUuid());
 +            sscmd.setDataCenter(_dcId);
 +            sscmd.setResourceType(Storage.StorageResourceType.STORAGE_POOL);
 +        } catch (final CloudRuntimeException e) {
 +            s_logger.debug("Unable to initialize local storage pool: " + e);
 +        }
 +
 +        if (sscmd != null) {
 +            return new StartupCommand[] {cmd, sscmd};
 +        } else {
 +            return new StartupCommand[] {cmd};
 +        }
 +    }
 +
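 +    // Derive a disk serial from the volume UUID: strip the hyphens and truncate
 +    // to 20 characters, the maximum serial length virtio-blk accepts.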
 +    public String diskUuidToSerial(String uuid) {
 +        String uuidWithoutHyphen = uuid.replace("-","");
 +        return uuidWithoutHyphen.substring(0, Math.min(uuidWithoutHyphen.length(), 20));
 +    }
 +
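 +    // Read the host's iSCSI initiator IQN by grepping InitiatorName= from
 +    // /etc/iscsi/initiatorname.iscsi; returns null if it cannot be determined.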
 +    private String getIqn() {
 +        try {
 +            final String textToFind = "InitiatorName=";
 +
 +            final Script iScsiAdmCmd = new Script(true, "grep", 0, s_logger);
 +
 +            iScsiAdmCmd.add(textToFind);
 +            iScsiAdmCmd.add("/etc/iscsi/initiatorname.iscsi");
 +
 +            final OutputInterpreter.OneLineParser parser = new OutputInterpreter.OneLineParser();
 +
 +            final String result = iScsiAdmCmd.execute(parser);
 +
 +            if (result != null) {
 +                return null;
 +            }
 +
 +            final String textFound = parser.getLine().trim();
 +
 +            return textFound.substring(textToFind.length());
 +        }
 +        catch (final Exception ex) {
 +            return null;
 +        }
 +    }
 +
 +    protected List<String> getAllVmNames(final Connect conn) {
 +        final ArrayList<String> la = new ArrayList<String>();
 +        try {
 +            final String[] names = conn.listDefinedDomains();
 +            la.addAll(Arrays.asList(names));
 +        } catch (final LibvirtException e) {
 +            s_logger.warn("Failed to list defined domains", e);
 +        }
 +
 +        int[] ids = null;
 +        try {
 +            ids = conn.listDomains();
 +        } catch (final LibvirtException e) {
 +            s_logger.warn("Failed to list domains", e);
 +            return la;
 +        }
 +
 +        Domain dm = null;
 +        for (int i = 0; i < ids.length; i++) {
 +            try {
 +                dm = conn.domainLookupByID(ids[i]);
 +                la.add(dm.getName());
 +            } catch (final LibvirtException e) {
 +                s_logger.warn("Unable to get vms", e);
 +            } finally {
 +                try {
 +                    if (dm != null) {
 +                        dm.free();
 +                    }
 +                } catch (final LibvirtException e) {
 +                    s_logger.trace("Ignoring libvirt error.", e);
 +                }
 +            }
 +        }
 +
 +        return la;
 +    }
 +
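 +    // On LXC hosts the KVM connection is queried as well, since system VMs run
 +    // under KVM even when the host itself is an LXC resource.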
 +    private HashMap<String, HostVmStateReportEntry> getHostVmStateReport() {
 +        final HashMap<String, HostVmStateReportEntry> vmStates = new HashMap<String, HostVmStateReportEntry>();
 +        Connect conn = null;
 +
 +        if (_hypervisorType == HypervisorType.LXC) {
 +            try {
 +                conn = LibvirtConnection.getConnectionByType(HypervisorType.LXC.toString());
 +                vmStates.putAll(getHostVmStateReport(conn));
 +                conn = LibvirtConnection.getConnectionByType(HypervisorType.KVM.toString());
 +                vmStates.putAll(getHostVmStateReport(conn));
 +            } catch (final LibvirtException e) {
 +                s_logger.debug("Failed to get connection: " + e.getMessage());
 +            }
 +        }
 +
 +        if (_hypervisorType == HypervisorType.KVM) {
 +            try {
 +                conn = LibvirtConnection.getConnectionByType(HypervisorType.KVM.toString());
 +                vmStates.putAll(getHostVmStateReport(conn));
 +            } catch (final LibvirtException e) {
 +                s_logger.debug("Failed to get connection: " + e.getMessage());
 +            }
 +        }
 +
 +        return vmStates;
 +    }
 +
 +    private HashMap<String, HostVmStateReportEntry> getHostVmStateReport(final Connect conn) {
 +        final HashMap<String, HostVmStateReportEntry> vmStates = new HashMap<String, HostVmStateReportEntry>();
 +
 +        String[] vms = null;
 +        int[] ids = null;
 +
 +        try {
 +            ids = conn.listDomains();
 +        } catch (final LibvirtException e) {
 +            s_logger.warn("Unable to listDomains", e);
 +            return null;
 +        }
 +        try {
 +            vms = conn.listDefinedDomains();
 +        } catch (final LibvirtException e) {
 +            s_logger.warn("Unable to listDefinedDomains", e);
 +            return null;
 +        }
 +
 +        Domain dm = null;
 +        for (int i = 0; i < ids.length; i++) {
 +            try {
 +                dm = conn.domainLookupByID(ids[i]);
 +
 +                final DomainState ps = dm.getInfo().state;
 +
 +                final PowerState state = convertToPowerState(ps);
 +
 +                s_logger.trace("VM " + dm.getName() + ": powerstate = " + ps + "; vm state=" + state.toString());
 +                final String vmName = dm.getName();
 +
 +                // TODO: for XS/KVM (host-based resources) we need to remove the VM
 +                // completely from the host. For some reason KVM seems to keep stopped
 +                // VMs around; to work around that, report only powered-on VMs.
 +                //
 +                if (state == PowerState.PowerOn) {
 +                    vmStates.put(vmName, new HostVmStateReportEntry(state, conn.getHostName()));
 +                }
 +            } catch (final LibvirtException e) {
 +                s_logger.warn("Unable to get vms", e);
 +            } finally {
 +                try {
 +                    if (dm != null) {
 +                        dm.free();
 +                    }
 +                } catch (final LibvirtException e) {
 +                    s_logger.trace("Ignoring libvirt error.", e);
 +                }
 +            }
 +        }
 +
 +        for (int i = 0; i < vms.length; i++) {
 +            try {
 +
 +                dm = conn.domainLookupByName(vms[i]);
 +
 +                final DomainState ps = dm.getInfo().state;
 +                final PowerState state = convertToPowerState(ps);
 +                final String vmName = dm.getName();
 +                s_logger.trace("VM " + vmName + ": powerstate = " + ps + "; vm state=" + state.toString());
 +
 +                // TODO: for XS/KVM (host-based resources) we need to remove the VM
 +                // completely from the host. For some reason KVM seems to keep stopped
 +                // VMs around; to work around that, report only powered-on VMs.
 +                //
 +                if (state == PowerState.PowerOn) {
 +                    vmStates.put(vmName, new HostVmStateReportEntry(state, conn.getHostName()));
 +                }
 +            } catch (final LibvirtException e) {
 +                s_logger.warn("Unable to get vms", e);
 +            } finally {
 +                try {
 +                    if (dm != null) {
 +                        dm.free();
 +                    }
 +                } catch (final LibvirtException e) {
 +                    s_logger.trace("Ignoring libvirt error.", e);
 +                }
 +            }
 +        }
 +
 +        return vmStates;
 +    }
 +
 +    protected List<Object> getHostInfo() {
 +        final ArrayList<Object> info = new ArrayList<Object>();
 +        long speed = 0;
 +        long cpus = 0;
 +        long ram = 0;
 +        int cpuSockets = 0;
 +        String cap = null;
 +        try {
 +            final Connect conn = LibvirtConnection.getConnection();
 +            final NodeInfo hosts = conn.nodeInfo();
 +            speed = getCpuSpeed(hosts);
 +
 +            /*
 +            * Some CPUs report a single socket and multiple NUMA cells.
 +            * We need to multiply them to get the correct socket count.
 +            */
 +            cpuSockets = hosts.sockets;
 +            if (hosts.nodes > 0) {
 +                cpuSockets = hosts.sockets * hosts.nodes;
 +            }
 +            cpus = hosts.cpus;
 +            ram = hosts.memory * 1024L;
 +            final LibvirtCapXMLParser parser = new LibvirtCapXMLParser();
 +            parser.parseCapabilitiesXML(conn.getCapabilities());
 +            final ArrayList<String> oss = parser.getGuestOsType();
 +            for (final String s : oss) {
 +                /*
 +                 * Even if the host supports guest OS types other than hvm, we only
 +                 * report hvm to the management server
 +                 */
 +                if (s.equalsIgnoreCase("hvm")) {
 +                    cap = "hvm";
 +                }
 +            }
 +        } catch (final LibvirtException e) {
 +            s_logger.trace("Ignoring libvirt error.", e);
 +        }
 +
 +        if (isSnapshotSupported()) {
 +            cap = cap + ",snapshot";
 +        }
 +
 +        info.add((int)cpus);
 +        info.add(speed);
 +        // Report system's RAM as actual RAM minus host OS reserved RAM
 +        ram = ram - _dom0MinMem + _dom0OvercommitMem;
 +        info.add(ram);
 +        info.add(cap);
 +        info.add(_dom0MinMem);
 +        info.add(cpuSockets);
 +        s_logger.debug("cpus=" + cpus + ", speed=" + speed + ", ram=" + ram + ", _dom0MinMem=" + _dom0MinMem + ", _dom0OvercommitMem=" + _dom0OvercommitMem + ", cpu sockets=" + cpuSockets);
 +
 +        return info;
 +    }
 +
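 +    // Prefer the kernel-reported cpuinfo_max_freq (in kHz, hence the division by
 +    // 1000 to get MHz); fall back to the libvirt NodeInfo value if sysfs cannot
 +    // be read.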
 +    protected static long getCpuSpeed(final NodeInfo nodeInfo) {
 +        try (final Reader reader = new FileReader(
 +                "/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq")) {
 +            return Long.parseLong(IOUtils.toString(reader).trim()) / 1000;
 +        } catch (IOException | NumberFormatException e) {
 +            s_logger.warn("Could not read cpuinfo_max_freq, falling back to libvirt-reported speed: " + e.getMessage());
 +            return nodeInfo.mhz;
 +        }
 +    }
 +
 +    public String rebootVM(final Connect conn, final String vmName) {
 +        Domain dm = null;
 +        String msg = null;
 +        try {
 +            dm = conn.domainLookupByName(vmName);
 +            // Get XML Dump including the secure information such as VNC password
 +            // By passing 1, or VIR_DOMAIN_XML_SECURE flag
 +            // https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainXMLFlags
 +            String vmDef = dm.getXMLDesc(1);
 +            final LibvirtDomainXMLParser parser = new LibvirtDomainXMLParser();
 +            parser.parseDomainXML(vmDef);
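 +            // Rewrite legacy "cloudVirBr<vnet>" bridge names to the current
 +            // "br<pif>-<vnet>" scheme before re-defining the domain XML.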
 +            for (final InterfaceDef nic : parser.getInterfaces()) {
 +                if (nic.getNetType() == GuestNetType.BRIDGE && nic.getBrName().startsWith("cloudVirBr")) {
 +                    try {
 +                        final int vnetId = Integer.parseInt(nic.getBrName().replaceFirst("cloudVirBr", ""));
 +                        final String pifName = getPif(_guestBridgeName);
 +                        final String newBrName = "br" + pifName + "-" + vnetId;
 +                        vmDef = vmDef.replaceAll("'" + nic.getBrName() + "'", "'" + newBrName + "'");
 +                        s_logger.debug("VM bridge name is changed from " + nic.getBrName() + " to " + newBrName);
 +                    } catch (final NumberFormatException e) {
 +                        continue;
 +                    }
 +                }
 +            }
 +            s_logger.debug(vmDef);
 +            msg = stopVM(conn, vmName, false);
 +            msg = startVM(conn, vmName, vmDef);
 +            return null;
 +        } catch (final LibvirtException e) {
 +            s_logger.warn("Failed to create vm", e);
 +            msg = e.getMessage();
 +        } catch (final InternalErrorException e) {
 +            s_logger.warn("Failed to create vm", e);
 +            msg = e.getMessage();
 +        } finally {
 +            try {
 +                if (dm != null) {
 +                    dm.free();
 +                }
 +            } catch (final LibvirtException e) {
 +                s_logger.trace("Ignoring libvirt error.", e);
 +            }
 +        }
 +
 +        return msg;
 +    }
 +
 +    public String stopVM(final Connect conn, final String vmName, final boolean forceStop) {
 +        DomainState state = null;
 +        Domain dm = null;
 +
 +        // delete the metadata of vm snapshots before stopping
 +        try {
 +            dm = conn.domainLookupByName(vmName);
 +            cleanVMSnapshotMetadata(dm);
 +        } catch (LibvirtException e) {
 +            s_logger.debug("Failed to get vm: " + e.getMessage());
 +        } finally {
 +            try {
 +                if (dm != null) {
 +                    dm.free();
 +                }
 +            } catch (LibvirtException l) {
 +                s_logger.trace("Ignoring libvirt error.", l);
 +            }
 +        }
 +
 +        s_logger.debug("Trying to stop the VM gracefully first");
 +        if (forceStop) {
 +            return stopVMInternal(conn, vmName, true);
 +        }
 +        String ret = stopVMInternal(conn, vmName, false);
 +        if (Script.ERR_TIMEOUT.equals(ret)) {
 +            ret = stopVMInternal(conn, vmName, true);
 +        } else if (ret != null) {
 +            /*
 +             * There is a race condition between libvirt and qemu: libvirt
 +             * listens on qemu's monitor fd. If qemu is shutdown, while libvirt
 +             * is reading on the fd, then libvirt will report an error.
 +             */
 +            /* Retry 3 times, to make sure we can get the vm's status */
 +            for (int i = 0; i < 3; i++) {
 +                try {
 +                    dm = conn.domainLookupByName(vmName);
 +                    state = dm.getInfo().state;
 +                    break;
 +                } catch (final LibvirtException e) {
 +                    s_logger.debug("Failed to get vm status:" + e.getMessage());
 +                } finally {
 +                    try {
 +                        if (dm != null) {
 +                            dm.free();
 +                        }
 +                    } catch (final LibvirtException l) {
 +                        s_logger.trace("Ignoring libvirt error.", l);
 +                    }
 +                }
 +            }
 +
 +            if (state == null) {
 +                s_logger.debug("Can't get vm's status, assume it's dead already");
 +                return null;
 +            }
 +
 +            if (state != DomainState.VIR_DOMAIN_SHUTOFF) {
 +                s_logger.debug("Try to destroy the vm");
 +                ret = stopVMInternal(conn, vmName, true);
 +                if (ret != null) {
 +                    return ret;
 +                }
 +            }
 +        }
 +
 +        return null;
 +    }
 +
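 +    /**
 +     * Performs the actual stop. With force set, the domain is destroyed (and
 +     * undefined if persistent); otherwise a shutdown is issued and the domain is
 +     * polled in 2-second steps, up to _stopTimeout, until it leaves the active state.
 +     */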
 +    protected String stopVMInternal(final Connect conn, final String vmName, final boolean force) {
 +        Domain dm = null;
 +        try {
 +            dm = conn.domainLookupByName(vmName);
 +            final int persist = dm.isPersistent();
 +            if (force) {
 +                if (dm.isActive() == 1) {
 +                    dm.destroy();
 +                    if (persist == 1) {
 +                        dm.undefine();
 +                    }
 +                }
 +            } else {
 +                if (dm.isActive() == 0) {
 +                    return null;
 +                }
 +                dm.shutdown();
 +                int retry = _stopTimeout / 2000;
 +                /* Wait for the domain to reach the shutoff state. Once it does,
 +                   the dm object will no longer work, so we need to catch the error. */
 +                try {
 +                    while (dm.isActive() == 1 && retry >= 0) {
 +                        Thread.sleep(2000);
 +                        retry--;
 +                    }
 +                } catch (final LibvirtException e) {
 +                    final String error = e.toString();
 +                    if (error.contains("Domain not found")) {
 +                        s_logger.debug("successfully shut down vm " + vmName);
 +                    } else {
 +                        s_logger.debug("Error in waiting for vm shutdown:" + error);
 +                    }
 +                }
 +                if (retry < 0) {
 +                    s_logger.warn("Timed out waiting for domain " + vmName + " to shutdown gracefully");
 +                    return Script.ERR_TIMEOUT;
 +                } else {
 +                    if (persist == 1) {
 +                        dm.undefine();
 +                    }
 +                }
 +            }
 +        } catch (final LibvirtException e) {
 +            if (e.getMessage().contains("Domain not found")) {
 +                s_logger.debug("VM " + vmName + " doesn't exist, no need to stop it");
 +                return null;
 +            }
 +            s_logger.debug("Failed to stop VM :" + vmName + " :", e);
 +            return e.getMessage();
 +        } catch (final InterruptedException ie) {
 +            s_logger.debug("Interrupted while waiting for VM " + vmName + " to shut down");
 +            Thread.currentThread().interrupt(); // restore the interrupt flag
 +            return ie.getMessage();
 +        } finally {
 +            try {
 +                if (dm != null) {
 +                    dm.free();
 +                }
 +            } catch (final LibvirtException e) {
 +                s_logger.trace("Ignoring libvirt error.", e);
 +            }
 +        }
 +
 +        return null;
 +    }
 +
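 +    /** Parses the domain XML of the given VM and returns its VNC port, if one is defined. */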
 +    public Integer getVncPort(final Connect conn, final String vmName) throws LibvirtException {
 +        final LibvirtDomainXMLParser parser = new LibvirtDomainXMLParser();
 +        Domain dm = null;
 +        try {
 +            dm = conn.domainLookupByName(vmName);
 +            final String xmlDesc = dm.getXMLDesc(0);
 +            parser.parseDomainXML(xmlDesc);
 +            return parser.getVncPort();
 +        } finally {
 +            try {
 +                if (dm != null) {
 +                    dm.free();
 +                }
 +            } catch (final LibvirtException l) {
 +                s_logger.trace("Ignoring libvirt error.", l);
 +            }
 +        }
 +    }
 +
 +    private boolean IsHVMEnabled(final Connect conn) {
 +        final LibvirtCapXMLParser parser = new LibvirtCapXMLParser();
 +        try {
 +            parser.parseCapabilitiesXML(conn.getCapabilities());
 +            final ArrayList<String> osTypes = parser.getGuestOsType();
 +            for (final String o : osTypes) {
 +                if (o.equalsIgnoreCase("hvm")) {
 +                    return true;
 +                }
 +            }
 +        } catch (final LibvirtException e) {
 +            s_logger.trace("Ignoring libvirt error.", e);
 +        }
 +        return false;
 +    }
 +
 +    private String getHypervisorPath(final Connect conn) {
 +        final LibvirtCapXMLParser parser = new LibvirtCapXMLParser();
 +        try {
 +            parser.parseCapabilitiesXML(conn.getCapabilities());
 +        } catch (final LibvirtException e) {
 +            s_logger.debug(e.getMessage());
 +        }
 +        return parser.getEmulator();
 +    }
 +
 +    boolean isGuestPVEnabled(final String guestOSName) {
 +        DiskDef.DiskBus db = getGuestDiskModel(guestOSName);
 +        return db != DiskDef.DiskBus.IDE;
 +    }
 +
 +    public boolean isCentosHost() {
 +        return _hvVersion <= 9;
 +    }
 +
 +    public DiskDef.DiskBus getDiskModelFromVMDetail(final VirtualMachineTO vmTO) {
 +        Map<String, String> details = vmTO.getDetails();
 +        if (details == null) {
 +            return null;
 +        }
 +
 +        final String rootDiskController = details.get(VmDetailConstants.ROOT_DISK_CONTROLLER);
 +        if (StringUtils.isNotBlank(rootDiskController)) {
 +            s_logger.debug("Passed custom disk bus " + rootDiskController);
 +            for (final DiskDef.DiskBus bus : DiskDef.DiskBus.values()) {
 +                if (bus.toString().equalsIgnoreCase(rootDiskController)) {
 +                    s_logger.debug("Found matching enum for disk bus " + rootDiskController);
 +                    return bus;
 +                }
 +            }
 +        }
 +        return null;
 +    }
 +
 +    private DiskDef.DiskBus getGuestDiskModel(final String platformEmulator) {
 +        if (platformEmulator == null) {
 +            return DiskDef.DiskBus.IDE;
 +        } else if (platformEmulator.startsWith("Other PV Virtio-SCSI")) {
 +            return DiskDef.DiskBus.SCSI;
 +        } else if (platformEmulator.startsWith("Ubuntu") || platformEmulator.startsWith("Fedora 13") || platformEmulator.startsWith("Fedora 12") || platformEmulator.startsWith("Fedora 11") ||
 +                platformEmulator.startsWith("Fedora 10") || platformEmulator.startsWith("Fedora 9") || platformEmulator.startsWith("CentOS 5.3") || platformEmulator.startsWith("CentOS 5.4") ||
 +                platformEmulator.startsWith("CentOS 5.5") || platformEmulator.startsWith("CentOS") || platformEmulator.startsWith("Fedora") ||
 +                platformEmulator.startsWith("Red Hat Enterprise Linux 5.3") || platformEmulator.startsWith("Red Hat Enterprise Linux 5.4") ||
 +                platformEmulator.startsWith("Red Hat Enterprise Linux 5.5") || platformEmulator.startsWith("Red Hat Enterprise Linux 6") || platformEmulator.startsWith("Debian GNU/Linux") ||
 +                platformEmulator.startsWith("FreeBSD 10") || platformEmulator.startsWith("Oracle") || platformEmulator.startsWith("Other PV")) {
 +            return DiskDef.DiskBus.VIRTIO;
 +        } else {
 +            return DiskDef.DiskBus.IDE;
 +        }
 +    }
 +
 +    private void cleanupVMNetworks(final Connect conn, final List<InterfaceDef> nics) {
 +        if (nics != null) {
 +            for (final InterfaceDef nic : nics) {
 +                for (final VifDriver vifDriver : getAllVifDrivers()) {
 +                    vifDriver.unplug(nic);
 +                }
 +            }
 +        }
 +    }
 +
 +    public Domain getDomain(final Connect conn, final String vmName) throws LibvirtException {
 +        return conn.domainLookupByName(vmName);
 +    }
 +
 +    public List<InterfaceDef> getInterfaces(final Connect conn, final String vmName) {
 +        final LibvirtDomainXMLParser parser = new LibvirtDomainXMLParser();
 +        Domain dm = null;
 +        try {
 +            dm = conn.domainLookupByName(vmName);
 +            parser.parseDomainXML(dm.getXMLDesc(0));
 +            return parser.getInterfaces();
 +
 +        } catch (final LibvirtException e) {
 +            s_logger.debug("Failed to get dom xml: " + e.toString());
 +            return new ArrayList<InterfaceDef>();
 +        } finally {
 +            try {
 +                if (dm != null) {
 +                    dm.free();
 +                }
 +            } catch (final LibvirtException e) {
 +                s_logger.trace("Ignoring libvirt error.", e);
 +            }
 +        }
 +    }
 +
 +    public List<DiskDef> getDisks(final Connect conn, final String vmName) {
 +        final LibvirtDomainXMLParser parser = new LibvirtDomainXMLParser();
 +        Domain dm = null;
 +        try {
 +            dm = conn.domainLookupByName(vmName);
 +            parser.parseDomainXML(dm.getXMLDesc(0));
 +            return parser.getDisks();
 +
 +        } catch (final LibvirtException e) {
 +            s_logger.debug("Failed to get dom xml: " + e.toString());
 +            return new ArrayList<DiskDef>();
 +        } finally {
 +            try {
 +                if (dm != null) {
 +                    dm.free();
 +                }
 +            } catch (final LibvirtException e) {
 +                s_logger.trace("Ignoring libvirt error.", e);
 +            }
 +        }
 +    }
 +
 +    private String executeBashScript(final String script) {
 +        final Script command = new Script("/bin/bash", _timeout, s_logger);
 +        command.add("-c");
 +        command.add(script);
 +        return command.execute();
 +    }
 +
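 +    /**
 +     * Collects per-NIC network statistics (tx/rx bytes) for the given VM by querying
 +     * libvirt's interface stats for each interface found in the domain XML.
 +     */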
 +    public List<VmNetworkStatsEntry> getVmNetworkStat(Connect conn, String vmName) throws LibvirtException {
 +        Domain dm = null;
 +        try {
 +            dm = getDomain(conn, vmName);
 +
 +            List<VmNetworkStatsEntry> stats = new ArrayList<VmNetworkStatsEntry>();
 +
 +            List<InterfaceDef> nics = getInterfaces(conn, vmName);
 +
 +            for (InterfaceDef nic : nics) {
 +                DomainInterfaceStats nicStats = dm.interfaceStats(nic.getDevName());
 +                String macAddress = nic.getMacAddress();
 +                VmNetworkStatsEntry stat = new VmNetworkStatsEntry(vmName, macAddress, nicStats.tx_bytes, nicStats.rx_bytes);
 +                stats.add(stat);
 +            }
 +
 +            return stats;
 +        } finally {
 +            if (dm != null) {
 +                dm.free();
 +            }
 +        }
 +    }
 +
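 +    /**
 +     * Collects per-disk I/O statistics for the given VM. The disk path is expected to
 +     * look like /mnt/pool_uuid/disk_path/, so the fourth path token is used as the
 +     * disk identifier in the resulting stats entries.
 +     */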
 +    public List<VmDiskStatsEntry> getVmDiskStat(final Connect conn, final String vmName) throws LibvirtException {
 +        Domain dm = null;
 +        try {
 +            dm = getDomain(conn, vmName);
 +
 +            final List<VmDiskStatsEntry> stats = new ArrayList<VmDiskStatsEntry>();
 +
 +            final List<DiskDef> disks = getDisks(conn, vmName);
 +
 +            for (final DiskDef disk : disks) {
 +                if (disk.getDeviceType() != DeviceType.DISK) {
 +                    continue; // skip non-disk devices; don't abort stats for the remaining disks
 +                }
 +                final DomainBlockStats blockStats = dm.blockStats(disk.getDiskLabel());
 +                final String path = disk.getDiskPath(); // for example, path = /mnt/pool_uuid/disk_path/
 +                String diskPath = null;
 +                if (path != null) {
 +                    final String[] token = path.split("/");
 +                    if (token.length > 3) {
 +                        diskPath = token[3];
 +                        final VmDiskStatsEntry stat = new VmDiskStatsEntry(vmName, diskPath, blockStats.wr_req, blockStats.rd_req, blockStats.wr_bytes, blockStats.rd_bytes);
 +                        stats.add(stat);
 +                    }
 +                }
 +            }
 +
 +            return stats;
 +        } finally {
 +            if (dm != null) {
 +                dm.free();
 +            }
 +        }
 +    }
 +
 +    private class VmStats {
 +        long _usedTime;
 +        long _tx;
 +        long _rx;
 +        long _ioRead;
 +        long _ioWrote;
 +        long _bytesRead;
 +        long _bytesWrote;
 +        Calendar _timestamp;
 +    }
 +
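 +    /**
 +     * Builds a point-in-time stats entry for the given VM. CPU utilization and the
 +     * network/disk deltas are computed against the previous sample cached in _vmStats,
 +     * so the first call for a VM only records a baseline.
 +     */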
 +    public VmStatsEntry getVmStat(final Connect conn, final String vmName) throws LibvirtException {
 +        Domain dm = null;
 +        try {
 +            dm = getDomain(conn, vmName);
 +            if (dm == null) {
 +                return null;
 +            }
 +            DomainInfo info = dm.getInfo();
 +            final VmStatsEntry stats = new VmStatsEntry();
 +
 +            stats.setNumCPUs(info.nrVirtCpu);
 +            stats.setEntityType("vm");
 +
 +            stats.setMemoryKBs(info.maxMem);
 +            stats.setTargetMemoryKBs(info.memory);
 +            stats.setIntFreeMemoryKBs(getMemoryFreeInKBs(dm));
 +
 +            /* get cpu utilization */
 +            VmStats oldStats = null;
 +
 +            final Calendar now = Calendar.getInstance();
 +
 +            oldStats = _vmStats.get(vmName);
 +
 +            long elapsedTime = 0;
 +            if (oldStats != null) {
 +                elapsedTime = now.getTimeInMillis() - oldStats._timestamp.getTimeInMillis();
 +                double utilization = (info.cpuTime - oldStats._usedTime) / ((double)elapsedTime * 1000000);
 +
 +                final NodeInfo node = conn.nodeInfo();
 +                utilization = utilization / node.cpus;
 +                if (utilization > 0) {
 +                    stats.setCPUUtilization(utilization * 100);
 +                }
 +            }
 +
 +            /* get network stats */
 +
 +            final List<InterfaceDef> vifs = getInterfaces(conn, vmName);
 +            long rx = 0;
 +            long tx = 0;
 +            for (final InterfaceDef vif : vifs) {
 +                final DomainInterfaceStats ifStats = dm.interfaceStats(vif.getDevName());
 +                rx += ifStats.rx_bytes;
 +                tx += ifStats.tx_bytes;
 +            }
 +
 +            if (oldStats != null) {
 +                final double deltarx = rx - oldStats._rx;
 +                if (deltarx > 0) {
 +                    stats.setNetworkReadKBs(deltarx / 1024);
 +                }
 +                final double deltatx = tx - oldStats._tx;
 +                if (deltatx > 0) {
 +                    stats.setNetworkWriteKBs(deltatx / 1024);
 +                }
 +            }
 +
 +            /* get disk stats */
 +            final List<DiskDef> disks = getDisks(conn, vmName);
 +            long io_rd = 0;
 +            long io_wr = 0;
 +            long bytes_rd = 0;
 +            long bytes_wr = 0;
 +            for (final DiskDef disk : disks) {
 +                if (disk.getDeviceType() == DeviceType.CDROM || disk.getDeviceType() == DeviceType.FLOPPY) {
 +                    continue;
 +                }
 +                final DomainBlockStats blockStats = dm.blockStats(disk.getDiskLabel());
 +                io_rd += blockStats.rd_req;
 +                io_wr += blockStats.wr_req;
 +                bytes_rd += blockStats.rd_bytes;
 +                bytes_wr += blockStats.wr_bytes;
 +            }
 +
 +            if (oldStats != null) {
 +                final long deltaiord = io_rd - oldStats._ioRead;
 +                if (deltaiord > 0) {
 +                    stats.setDiskReadIOs(deltaiord);
 +                }
 +                final long deltaiowr = io_wr - oldStats._ioWrote;
 +                if (deltaiowr > 0) {
 +                    stats.setDiskWriteIOs(deltaiowr);
 +                }
 +                final double deltabytesrd = bytes_rd - oldStats._bytesRead;
 +                if (deltabytesrd > 0) {
 +                    stats.setDiskReadKBs(deltabytesrd / 1024);
 +                }
 +                final double deltabyteswr = bytes_wr - oldStats._bytesWrote;
 +                if (deltabyteswr > 0) {
 +                    stats.setDiskWriteKBs(deltabyteswr / 1024);
 +                }
 +            }
 +
 +            /* save to Hashmap */
 +            final VmStats newStat = new VmStats();
 +            newStat._usedTime = info.cpuTime;
 +            newStat._rx = rx;
 +            newStat._tx = tx;
 +            newStat._ioRead = io_rd;
 +            newStat._ioWrote = io_wr;
 +            newStat._bytesRead = bytes_rd;
 +            newStat._bytesWrote = bytes_wr;
 +            newStat._timestamp = now;
 +            _vmStats.put(vmName, newStat);
 +            return stats;
 +        } finally {
 +            if (dm != null) {
 +                dm.free();
 +            }
 +        }
 +    }
 +
 +    /**
 +     * Retrieves the memory statistics from the given domain.
 +     * If no memory statistics are found, it returns {@link NumberUtils#LONG_ZERO} as the amount of free memory.
 +     * Otherwise it returns the free-memory statistic, i.e. the value at the first position of the array
 +     * returned by {@link Domain#memoryStats(int)}.
 +     *
 +     * @return the amount of free memory in KBs
 +     */
 +    protected long getMemoryFreeInKBs(Domain dm) throws LibvirtException {
 +        MemoryStatistic[] mems = dm.memoryStats(NUMMEMSTATS);
 +        if (ArrayUtils.isEmpty(mems)) {
 +            return NumberUtils.LONG_ZERO;
 +        }
 +        return mems[0].getValue();
 +    }
 +
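 +    /**
 +     * Asks the security group helper script whether bridge firewalling can be used on
 +     * this host; as elsewhere in this class, a null result from the script means success.
 +     */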
 +    private boolean canBridgeFirewall(final String prvNic) {
 +        final Script cmd = new Script(_securityGroupPath, _timeout, s_logger);
 +        cmd.add("can_bridge_firewall");
 +        cmd.add(prvNic);
 +        final String result = cmd.execute();
 +        if (result != null) {
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    public boolean destroyNetworkRulesForVM(final Connect conn, final String vmName) {
 +        if (!_canBridgeFirewall) {
 +            return false;
 +        }
 +        String vif = null;
 +        final List<InterfaceDef> intfs = getInterfaces(conn, vmName);
 +        if (intfs.size() > 0) {
 +            final InterfaceDef intf = intfs.get(0);
 +            vif = intf.getDevName();
 +        }
 +        final Script cmd = new Script(_securityGroupPath, _timeout, s_logger);
 +        cmd.add("destroy_network_rules_for_vm");
 +        cmd.add("--vmname", vmName);
 +        if (vif != null) {
 +            cmd.add("--vif", vif);
 +        }
 +        final String result = cmd.execute();
 +        if (result != null) {
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    public boolean defaultNetworkRules(final Connect conn, final String vmName, final NicTO nic, final Long vmId, final String secIpStr) {
 +        if (!_canBridgeFirewall) {
 +            return false;
 +        }
 +
 +        final List<InterfaceDef> intfs = getInterfaces(conn, vmName);
 +        if (intfs.size() <= nic.getDeviceId()) { // also covers size == deviceId, which would break the get() below
 +            return false;
 +        }
 +
 +        final InterfaceDef intf = intfs.get(nic.getDeviceId());
 +        final String brname = intf.getBrName();
 +        final String vif = intf.getDevName();
 +
 +        final Script cmd = new Script(_securityGroupPath, _timeout, s_logger);
 +        cmd.add("default_network_rules");
 +        cmd.add("--vmname", vmName);
 +        cmd.add("--vmid", vmId.toString());
 +        if (nic.getIp() != null) {
 +            cmd.add("--vmip", nic.getIp());
 +        }
 +        if (nic.getIp6Address() != null) {
 +            cmd.add("--vmip6", nic.getIp6Address());
 +        }
 +        cmd.add("--vmmac", nic.getMac());
 +        cmd.add("--vif", vif);
 +        cmd.add("--brname", brname);
 +        cmd.add("--nicsecips", secIpStr);
 +        final String result = cmd.execute();
 +        if (result != null) {
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    protected boolean post_default_network_rules(final Connect conn, final String vmName, final NicTO nic, final Long vmId, final InetAddress dhcpServerIp, final String hostIp, final String hostMacAddr) {
 +        if (!_canBridgeFirewall) {
 +            return false;
 +        }
 +
 +        final List<InterfaceDef> intfs = getInterfaces(conn, vmName);
 +        if (intfs.size() <= nic.getDeviceId()) {
 +            return false;
 +        }
 +
 +        final InterfaceDef intf = intfs.get(nic.getDeviceId());
 +        final String brname = intf.getBrName();
 +        final String vif = intf.getDevName();
 +
 +        final Script cmd = new Script(_securityGroupPath, _timeout, s_logger);
 +        cmd.add("post_default_network_rules");
 +        cmd.add("--vmname", vmName);
 +        cmd.add("--vmid", vmId.toString());
 +        cmd.add("--vmip", nic.getIp());
 +        cmd.add("--vmmac", nic.getMac());
 +        cmd.add("--vif", vif);
 +        cmd.add("--brname", brname);
 +        if (dhcpServerIp != null) {
 +            cmd.add("--dhcpSvr", dhcpServerIp.getHostAddress());
 +        }
 +
 +        cmd.add("--hostIp", hostIp);
 +        cmd.add("--hostMacAddr", hostMacAddr);
 +        final String result = cmd.execute();
 +        if (result != null) {
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    public boolean configureDefaultNetworkRulesForSystemVm(final Connect conn, final String vmName) {
 +        if (!_canBridgeFirewall) {
 +            return false;
 +        }
 +
 +        final Script cmd = new Script(_securityGroupPath, _timeout, s_logger);
 +        cmd.add("default_network_rules_systemvm");
 +        cmd.add("--vmname", vmName);
 +        cmd.add("--localbrname", _linkLocalBridgeName);
 +        final String result = cmd.execute();
 +        if (result != null) {
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    public boolean addNetworkRules(final String vmName, final String vmId, final String guestIP, final String guestIP6, final String sig, final String seq, final String mac, final String rules, final String vif, final String brname,
 +            final String secIps) {
 +        if (!_canBridgeFirewall) {
 +            return false;
 +        }
 +
 +        final String newRules = rules == null ? null : rules.replace(" ", ";"); // rules may be null; the check below expects that
 +        final Script cmd = new Script(_securityGroupPath, _timeout, s_logger);
 +        cmd.add("add_network_rules");
 +        cmd.add("--vmname", vmName);
 +        cmd.add("--vmid", vmId);
 +        cmd.add("--vmip", guestIP);
 +        if (StringUtils.isNotBlank(guestIP6)) {
 +            cmd.add("--vmip6", guestIP6);
 +        }
 +        cmd.add("--sig", sig);
 +        cmd.add("--seq", seq);
 +        cmd.add("--vmmac", mac);
 +        cmd.add("--vif", vif);
 +        cmd.add("--brname", brname);
 +        cmd.add("--nicsecips", secIps);
 +        if (newRules != null && !newRules.isEmpty()) {
 +            cmd.add("--rules", newRules);
 +        }
 +        final String result = cmd.execute();
 +        if (result != null) {
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    public boolean configureNetworkRulesVMSecondaryIP(final Connect conn, final String vmName, final String secIp, final String action) {
 +
 +        if (!_canBridgeFirewall) {
 +            return false;
 +        }
 +
 +        final Script cmd = new Script(_securityGroupPath, _timeout, s_logger);
 +        cmd.add("network_rules_vmSecondaryIp");
 +        cmd.add("--vmname", vmName);
 +        cmd.add("--nicsecips", secIp);
 +        cmd.add("--action", action);
 +
 +        final String result = cmd.execute();
 +        if (result != null) {
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    public boolean cleanupRules() {
 +        if (!_canBridgeFirewall) {
 +            return false;
 +        }
 +        final Script cmd = new Script(_securityGroupPath, _timeout, s_logger);
 +        cmd.add("cleanup_rules");
 +        final String result = cmd.execute();
 +        if (result != null) {
 +            return false;
 +        }
 +        return true;
 +    }
 +
 +    public String getRuleLogsForVms() {
 +        final Script cmd = new Script(_securityGroupPath, _timeout, s_logger);
 +        cmd.add("get_rule_logs_for_vms");
 +        final OutputInterpreter.OneLineParser parser = new OutputInterpreter.OneLineParser();
 +        final String result = cmd.execute(parser);
 +        if (result == null) {
 +            return parser.getLine();
 +        }
 +        return null;
 +    }
 +
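 +    /**
 +     * Builds a map of VM name to (vm id, rule log sequence number) from the rule logs
 +     * returned by the security group script: entries are ';'-separated per VM and
 +     * ','-separated within a VM; unparsable numbers are recorded as (-1, -1).
 +     */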
 +    private HashMap<String, Pair<Long, Long>> syncNetworkGroups(final long id) {
 +        final HashMap<String, Pair<Long, Long>> states = new HashMap<String, Pair<Long, Long>>();
 +
 +        final String result = getRuleLogsForVms();
 +        s_logger.trace("syncNetworkGroups: id=" + id + " got: " + result);
 +        final String[] rulelogs = result != null ? result.split(";") : new String[0];
 +        for (final String rulesforvm : rulelogs) {
 +            final String[] log = rulesforvm.split(",");
 +            if (log.length != 6) {
 +                continue;
 +            }
 +            try {
 +                states.put(log[0], new Pair<Long, Long>(Long.parseLong(log[1]), Long.parseLong(log[5])));
 +            } catch (final NumberFormatException nfe) {
 +                states.put(log[0], new Pair<Long, Long>(-1L, -1L));
 +            }
 +        }
 +        return states;
 +    }
 +
 +    /* online snapshot is supported by enhanced qemu-kvm */
 +    private boolean isSnapshotSupported() {
 +        // the script produces no output (null result) when "convert" is present in qemu-img's help
 +        final String result = executeBashScript("qemu-img --help|grep convert");
 +        return result == null;
 +    }
 +
 +    public Pair<Double, Double> getNicStats(final String nicName) {
 +        return new Pair<Double, Double>(readDouble(nicName, "rx_bytes"), readDouble(nicName, "tx_bytes"));
 +    }
 +
 +    static double readDouble(final String nicName, final String fileName) {
 +        final String path = "/sys/class/net/" + nicName + "/statistics/" + fileName;
 +        try {
 +            return Double.parseDouble(FileUtils.readFileToString(new File(path)));
 +        } catch (final IOException ioe) {
 +            s_logger.warn("Failed to read the " + fileName + " for " + nicName + " from " + path, ioe);
 +            return 0.0;
 +        }
 +    }
 +
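 +    /**
 +     * Formats a libvirt-style version number, encoded as
 +     * major * 1,000,000 + minor * 1,000 + release, as "major.minor.release".
 +     */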
 +    private String prettyVersion(final long version) {
 +        final long major = version / 1000000;
 +        final long minor = version % 1000000 / 1000;
 +        final long release = version % 1000000 % 1000;
 +        return major + "." + minor + "." + release;
 +    }
 +
 +    @Override
 +    public void setName(final String name) {
 +        // TODO Auto-generated method stub
 +    }
 +
 +    @Override
 +    public void setConfigParams(final Map<String, Object> params) {
 +        // TODO Auto-generated method stub
 +    }
 +
 +    @Override
 +    public Map<String, Object> getConfigParams() {
 +        // TODO Auto-generated method stub
 +        return null;
 +    }
 +
 +    @Override
 +    public int getRunLevel() {
 +        // TODO Auto-generated method stub
 +        return 0;
 +    }
 +
 +    @Override
 +    public void setRunLevel(final int level) {
 +        // TODO Auto-generated method stub
 +    }
 +
 +    public HypervisorType getHypervisorType() {
 +        return _hypervisorType;
 +    }
 +
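 +    /**
 +     * Maps an RBD image to a local block device via the rbd CLI and returns the device
 +     * path reported by "rbd showmapped", or null if the mapping could not be established.
 +     */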
 +    public String mapRbdDevice(final KVMPhysicalDisk disk) {
 +        final KVMStoragePool pool = disk.getPool();
 +        // Check if the rbd image is already mapped
 +        final String[] splitPoolImage = disk.getPath().split("/");
 +        String device = Script.runSimpleBashScript("rbd showmapped | grep \"" + splitPoolImage[0] + "[ ]*" + splitPoolImage[1] + "\" | grep -o \"[^ ]*[ ]*$\"");
 +        if (device == null) {
 +            // If not mapped, map and return the mapped device
 +            Script.runSimpleBashScript("rbd map " + disk.getPath() + " --id " + pool.getAuthUserName());
 +            device = Script.runSimpleBashScript("rbd showmapped | grep \"" + splitPoolImage[0] + "[ ]*" + splitPoolImage[1] + "\" | grep -o \"[^ ]*[ ]*$\"");
 +        }
 +        return device;
 +    }
 +
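 +    /**
 +     * Deletes the metadata (not the data) of all vm snapshots of the domain, using
 +     * VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY, and returns (name, isCurrent, XML)
 +     * triples so the snapshots can later be redefined via restoreVMSnapshotMetadata().
 +     */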
 +    public List<Ternary<String, Boolean, String>> cleanVMSnapshotMetadata(Domain dm) throws LibvirtException {
 +        s_logger.debug("Cleaning the metadata of vm snapshots of vm " + dm.getName());
 +        List<Ternary<String, Boolean, String>> vmsnapshots = new ArrayList<Ternary<String, Boolean, String>>();
 +        if (dm.snapshotNum() == 0) {
 +            return vmsnapshots;
 +        }
 +        String currentSnapshotName = null;
 +        try {
 +            DomainSnapshot snapshotCurrent = dm.snapshotCurrent();
 +            String snapshotXML = snapshotCurrent.getXMLDesc();
 +            snapshotCurrent.free();
 +            DocumentBuilder builder;
 +            try {
 +                builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
 +
 +                InputSource is = new InputSource();
 +                is.setCharacterStream(new StringReader(snapshotXML));
 +                Document doc = builder.parse(is);
 +                Element rootElement = doc.getDocumentElement();
 +
 +                currentSnapshotName = getTagValue("name", rootElement);
 +            } catch (ParserConfigurationException | SAXException | IOException e) {
 +                s_logger.debug(e.toString());
 +            }
 +        } catch (LibvirtException e) {
 +            s_logger.debug("Fail to get the current vm snapshot for vm: " + dm.getName() + ", continue");
 +        }
 +        int flags = 2; // VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY = 2
 +        String[] snapshotNames = dm.snapshotListNames();
 +        Arrays.sort(snapshotNames);
 +        for (String snapshotName: snapshotNames) {
 +            DomainSnapshot snapshot = dm.snapshotLookupByName(snapshotName);
 +            Boolean isCurrent = currentSnapshotName != null && currentSnapshotName.equals(snapshotName);
 +            vmsnapshots.add(new Ternary<String, Boolean, String>(snapshotName, isCurrent, snapshot.getXMLDesc()));
 +        }
 +        for (String snapshotName: snapshotNames) {
 +            DomainSnapshot snapshot = dm.snapshotLookupByName(snapshotName);
 +            snapshot.delete(flags); // clean metadata of vm snapshot
 +        }
 +        return vmsnapshots;
 +    }
 +
 +    private static String getTagValue(String tag, Element eElement) {
 +        NodeList nlList = eElement.getElementsByTagName(tag).item(0).getChildNodes();
 +        Node nValue = nlList.item(0);
 +
 +        return nValue.getNodeValue();
 +    }
 +
 +    public void restoreVMSnapshotMetadata(Domain dm, String vmName, List<Ternary<String, Boolean, String>> vmsnapshots) {
 +        s_logger.debug("Restoring the metadata of vm snapshots of vm " + vmName);
 +        for (Ternary<String, Boolean, String> vmsnapshot: vmsnapshots) {
 +            String snapshotName = vmsnapshot.first();
 +            Boolean isCurrent = vmsnapshot.second();
 +            String snapshotXML = vmsnapshot.third();
 +            s_logger.debug("Restoring vm snapshot " + snapshotName + " on " + vmName + " with XML:\n " + snapshotXML);
 +            try {
 +                int flags = 1; // VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE = 1
 +                if (isCurrent) {
 +                    flags += 2; // VIR_DOMAIN_SNAPSHOT_CREATE_CURRENT = 2
 +                }
 +                dm.snapshotCreateXML(snapshotXML, flags);
 +            } catch (LibvirtException e) {
 +                s_logger.debug("Failed to restore vm snapshot " + snapshotName + " on " + vmName + ", continuing", e);
 +            }
 +        }
 +    }
 +
 +    public long getTotalMemory() {
 +        return _totalMemory;
 +    }
 +
 +    public String getHostDistro() {
 +        return _hostDistro;
 +    }
 +
 +    public boolean isHostSecured() {
 +        // Test for host certificates
 +        final File confFile = PropertiesUtil.findConfigFile(KeyStoreUtils.AGENT_PROPSFILE);
 +        if (confFile == null || !confFile.exists() || !new File(confFile.getParent() + "/" + KeyStoreUtils.CERT_FILENAME).exists()) {
 +            return false;
 +        }
 +
 +        // Test for libvirt TLS configuration
 +        try {
 +            new Connect(String.format("qemu+tls://%s/system", _privateIp));
 +        } catch (final LibvirtException ignored) {
 +            return false;
 +        }
 +        return true;
 +    }
 +}
diff --cc plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtHandleConfigDriveCommandWrapper.java
index 0000000,5e4ef48..5e4ef48
mode 000000,100644..100644
--- a/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtHandleConfigDriveCommandWrapper.java
+++ b/plugins/hypervisors/kvm/src/main/java/com/cloud/hypervisor/kvm/resource/wrapper/LibvirtHandleConfigDriveCommandWrapper.java
diff --cc server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
index af0b2c3,0000000..df157bf
mode 100644,000000..100644
--- a/server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
+++ b/server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
@@@ -1,6432 -1,0 +1,6434 @@@
 +// Licensed to the Apache Software Foundation (ASF) under one
 +// or more contributor license agreements.  See the NOTICE file
 +// distributed with this work for additional information
 +// regarding copyright ownership.  The ASF licenses this file
 +// to you under the Apache License, Version 2.0 (the
 +// "License"); you may not use this file except in compliance
 +// with the License.  You may obtain a copy of the License at
 +//
 +//   http://www.apache.org/licenses/LICENSE-2.0
 +//
 +// Unless required by applicable law or agreed to in writing,
 +// software distributed under the License is distributed on an
 +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 +// KIND, either express or implied.  See the License for the
 +// specific language governing permissions and limitations
 +// under the License.
 +package com.cloud.vm;
 +
 +import java.io.UnsupportedEncodingException;
 +import java.net.URLDecoder;
 +import java.util.ArrayList;
 +import java.util.Arrays;
 +import java.util.Date;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.LinkedHashMap;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +import java.util.UUID;
 +import java.util.concurrent.ConcurrentHashMap;
 +import java.util.concurrent.ExecutorService;
 +import java.util.concurrent.Executors;
 +import java.util.concurrent.ScheduledExecutorService;
 +import java.util.concurrent.TimeUnit;
 +import java.util.stream.Collectors;
 +
 +import javax.inject.Inject;
 +import javax.naming.ConfigurationException;
 +
 +import org.apache.cloudstack.acl.ControlledEntity.ACLType;
 +import org.apache.cloudstack.acl.SecurityChecker.AccessType;
 +import org.apache.cloudstack.affinity.AffinityGroupService;
 +import org.apache.cloudstack.affinity.AffinityGroupVO;
 +import org.apache.cloudstack.affinity.dao.AffinityGroupDao;
 +import org.apache.cloudstack.affinity.dao.AffinityGroupVMMapDao;
 +import org.apache.cloudstack.api.ApiConstants;
 +import org.apache.cloudstack.api.BaseCmd.HTTPMethod;
 +import org.apache.cloudstack.api.command.admin.vm.AssignVMCmd;
 +import org.apache.cloudstack.api.command.admin.vm.RecoverVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.AddNicToVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.DeployVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.DestroyVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.RebootVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.RemoveNicFromVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.ResetVMPasswordCmd;
 +import org.apache.cloudstack.api.command.user.vm.ResetVMSSHKeyCmd;
 +import org.apache.cloudstack.api.command.user.vm.RestoreVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.ScaleVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.SecurityGroupAction;
 +import org.apache.cloudstack.api.command.user.vm.StartVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.UpdateDefaultNicForVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.UpdateVMCmd;
 +import org.apache.cloudstack.api.command.user.vm.UpdateVmNicIpCmd;
 +import org.apache.cloudstack.api.command.user.vm.UpgradeVMCmd;
 +import org.apache.cloudstack.api.command.user.vmgroup.CreateVMGroupCmd;
 +import org.apache.cloudstack.api.command.user.vmgroup.DeleteVMGroupCmd;
 +import org.apache.cloudstack.api.command.user.volume.ResizeVolumeCmd;
 +import org.apache.cloudstack.context.CallContext;
 +import org.apache.cloudstack.engine.cloud.entity.api.VirtualMachineEntity;
 +import org.apache.cloudstack.engine.cloud.entity.api.db.dao.VMNetworkMapDao;
 +import org.apache.cloudstack.engine.orchestration.service.NetworkOrchestrationService;
 +import org.apache.cloudstack.engine.orchestration.service.VolumeOrchestrationService;
 +import org.apache.cloudstack.engine.service.api.OrchestrationService;
 +import org.apache.cloudstack.engine.subsystem.api.storage.DataStore;
 +import org.apache.cloudstack.engine.subsystem.api.storage.DataStoreManager;
 +import org.apache.cloudstack.engine.subsystem.api.storage.PrimaryDataStore;
 +import org.apache.cloudstack.engine.subsystem.api.storage.VolumeDataFactory;
 +import org.apache.cloudstack.engine.subsystem.api.storage.VolumeInfo;
 +import org.apache.cloudstack.engine.subsystem.api.storage.VolumeService;
 +import org.apache.cloudstack.engine.subsystem.api.storage.VolumeService.VolumeApiResult;
 +import org.apache.cloudstack.framework.async.AsyncCallFuture;
 +import org.apache.cloudstack.framework.config.ConfigKey;
 +import org.apache.cloudstack.framework.config.Configurable;
 +import org.apache.cloudstack.framework.config.dao.ConfigurationDao;
 +import org.apache.cloudstack.managed.context.ManagedContextRunnable;
 +import org.apache.cloudstack.storage.command.DeleteCommand;
 +import org.apache.cloudstack.storage.command.DettachCommand;
 +import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
 +import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;
 +import org.apache.cloudstack.storage.datastore.db.TemplateDataStoreDao;
 +import org.apache.cloudstack.storage.datastore.db.TemplateDataStoreVO;
 +import org.apache.commons.codec.binary.Base64;
 +import org.apache.commons.collections.MapUtils;
 +import org.apache.commons.lang3.StringUtils;
 +import org.apache.log4j.Logger;
 +
 +import com.cloud.agent.AgentManager;
 +import com.cloud.agent.api.Answer;
 +import com.cloud.agent.api.Command;
 +import com.cloud.agent.api.GetVmDiskStatsAnswer;
 +import com.cloud.agent.api.GetVmDiskStatsCommand;
 +import com.cloud.agent.api.GetVmIpAddressCommand;
 +import com.cloud.agent.api.GetVmNetworkStatsAnswer;
 +import com.cloud.agent.api.GetVmNetworkStatsCommand;
 +import com.cloud.agent.api.GetVmStatsAnswer;
 +import com.cloud.agent.api.GetVmStatsCommand;
 +import com.cloud.agent.api.GetVolumeStatsAnswer;
 +import com.cloud.agent.api.GetVolumeStatsCommand;
 +import com.cloud.agent.api.ModifyTargetsCommand;
 +import com.cloud.agent.api.PvlanSetupCommand;
 +import com.cloud.agent.api.RestoreVMSnapshotAnswer;
 +import com.cloud.agent.api.RestoreVMSnapshotCommand;
 +import com.cloud.agent.api.StartAnswer;
 +import com.cloud.agent.api.VmDiskStatsEntry;
 +import com.cloud.agent.api.VmNetworkStatsEntry;
 +import com.cloud.agent.api.VmStatsEntry;
 +import com.cloud.agent.api.VolumeStatsEntry;
 +import com.cloud.agent.api.to.DiskTO;
 +import com.cloud.agent.api.to.NicTO;
 +import com.cloud.agent.api.to.VirtualMachineTO;
 +import com.cloud.agent.manager.Commands;
 +import com.cloud.alert.AlertManager;
 +import com.cloud.api.ApiDBUtils;
 +import com.cloud.capacity.Capacity;
 +import com.cloud.capacity.CapacityManager;
 +import com.cloud.configuration.Config;
 +import com.cloud.configuration.ConfigurationManager;
 +import com.cloud.configuration.Resource.ResourceType;
 +import com.cloud.dc.DataCenter;
 +import com.cloud.dc.DataCenter.NetworkType;
 +import com.cloud.dc.DataCenterVO;
 +import com.cloud.dc.DedicatedResourceVO;
 +import com.cloud.dc.HostPodVO;
 +import com.cloud.dc.Vlan;
 +import com.cloud.dc.Vlan.VlanType;
 +import com.cloud.dc.VlanVO;
 +import com.cloud.dc.dao.ClusterDao;
 +import com.cloud.dc.dao.DataCenterDao;
 +import com.cloud.dc.dao.DedicatedResourceDao;
 +import com.cloud.dc.dao.HostPodDao;
 +import com.cloud.dc.dao.VlanDao;
 +import com.cloud.deploy.DataCenterDeployment;
 +import com.cloud.deploy.DeployDestination;
 +import com.cloud.deploy.DeploymentPlanner;
 +import com.cloud.deploy.DeploymentPlanner.ExcludeList;
 +import com.cloud.deploy.DeploymentPlanningManager;
 +import com.cloud.deploy.PlannerHostReservationVO;
 +import com.cloud.deploy.dao.PlannerHostReservationDao;
 +import com.cloud.domain.Domain;
 +import com.cloud.domain.DomainVO;
 +import com.cloud.domain.dao.DomainDao;
 +import com.cloud.event.ActionEvent;
 +import com.cloud.event.ActionEventUtils;
 +import com.cloud.event.EventTypes;
 +import com.cloud.event.UsageEventUtils;
 +import com.cloud.event.UsageEventVO;
 +import com.cloud.event.dao.UsageEventDao;
 +import com.cloud.exception.AgentUnavailableException;
 +import com.cloud.exception.CloudException;
 +import com.cloud.exception.ConcurrentOperationException;
 +import com.cloud.exception.InsufficientAddressCapacityException;
 +import com.cloud.exception.InsufficientCapacityException;
 +import com.cloud.exception.InvalidParameterValueException;
 +import com.cloud.exception.ManagementServerException;
 +import com.cloud.exception.OperationTimedoutException;
 +import com.cloud.exception.PermissionDeniedException;
 +import com.cloud.exception.ResourceAllocationException;
 +import com.cloud.exception.ResourceUnavailableException;
 +import com.cloud.exception.StorageUnavailableException;
 +import com.cloud.exception.VirtualMachineMigrationException;
 +import com.cloud.gpu.GPU;
 +import com.cloud.ha.HighAvailabilityManager;
 +import com.cloud.host.Host;
 +import com.cloud.host.HostVO;
 +import com.cloud.host.Status;
 +import com.cloud.host.dao.HostDao;
 +import com.cloud.hypervisor.Hypervisor.HypervisorType;
 +import com.cloud.hypervisor.HypervisorCapabilitiesVO;
 +import com.cloud.hypervisor.dao.HypervisorCapabilitiesDao;
 +import com.cloud.network.IpAddressManager;
 +import com.cloud.network.Network;
 +import com.cloud.network.Network.IpAddresses;
 +import com.cloud.network.Network.Provider;
 +import com.cloud.network.Network.Service;
 +import com.cloud.network.NetworkModel;
 +import com.cloud.network.Networks.TrafficType;
 +import com.cloud.network.PhysicalNetwork;
 +import com.cloud.network.dao.FirewallRulesDao;
 +import com.cloud.network.dao.IPAddressDao;
 +import com.cloud.network.dao.IPAddressVO;
 +import com.cloud.network.dao.LoadBalancerVMMapDao;
 +import com.cloud.network.dao.LoadBalancerVMMapVO;
 +import com.cloud.network.dao.NetworkDao;
 +import com.cloud.network.dao.NetworkServiceMapDao;
 +import com.cloud.network.dao.NetworkVO;
 +import com.cloud.network.dao.PhysicalNetworkDao;
 +import com.cloud.network.element.UserDataServiceProvider;
 +import com.cloud.network.guru.NetworkGuru;
 +import com.cloud.network.lb.LoadBalancingRulesManager;
 +import com.cloud.network.router.VpcVirtualNetworkApplianceManager;
 +import com.cloud.network.rules.FirewallManager;
 +import com.cloud.network.rules.FirewallRuleVO;
 +import com.cloud.network.rules.PortForwardingRuleVO;
 +import com.cloud.network.rules.RulesManager;
 +import com.cloud.network.rules.dao.PortForwardingRulesDao;
 +import com.cloud.network.security.SecurityGroup;
 +import com.cloud.network.security.SecurityGroupManager;
 +import com.cloud.network.security.dao.SecurityGroupDao;
 +import com.cloud.network.vpc.VpcManager;
 +import com.cloud.offering.DiskOffering;
 +import com.cloud.offering.NetworkOffering;
 +import com.cloud.offering.NetworkOffering.Availability;
 +import com.cloud.offering.ServiceOffering;
 +import com.cloud.offerings.NetworkOfferingVO;
 +import com.cloud.offerings.dao.NetworkOfferingDao;
 +import com.cloud.org.Cluster;
 +import com.cloud.org.Grouping;
 +import com.cloud.resource.ResourceManager;
 +import com.cloud.resource.ResourceState;
 +import com.cloud.server.ManagementService;
 +import com.cloud.service.ServiceOfferingVO;
 +import com.cloud.service.dao.ServiceOfferingDao;
 +import com.cloud.service.dao.ServiceOfferingDetailsDao;
 +import com.cloud.storage.DataStoreRole;
 +import com.cloud.storage.DiskOfferingVO;
 +import com.cloud.storage.GuestOSCategoryVO;
 +import com.cloud.storage.GuestOSVO;
 +import com.cloud.storage.Snapshot;
 +import com.cloud.storage.SnapshotVO;
 +import com.cloud.storage.Storage;
 +import com.cloud.storage.Storage.ImageFormat;
 +import com.cloud.storage.Storage.StoragePoolType;
 +import com.cloud.storage.Storage.TemplateType;
 +import com.cloud.storage.StoragePool;
 +import com.cloud.storage.StoragePoolStatus;
 +import com.cloud.storage.VMTemplateStorageResourceAssoc;
 +import com.cloud.storage.VMTemplateVO;
 +import com.cloud.storage.VMTemplateZoneVO;
 +import com.cloud.storage.Volume;
 +import com.cloud.storage.VolumeApiService;
 +import com.cloud.storage.VolumeVO;
 +import com.cloud.storage.dao.DiskOfferingDao;
 +import com.cloud.storage.dao.GuestOSCategoryDao;
 +import com.cloud.storage.dao.GuestOSDao;
 +import com.cloud.storage.dao.SnapshotDao;
 +import com.cloud.storage.dao.VMTemplateDao;
 +import com.cloud.storage.dao.VMTemplateZoneDao;
 +import com.cloud.storage.dao.VolumeDao;
 +import com.cloud.template.TemplateApiService;
 +import com.cloud.template.TemplateManager;
 +import com.cloud.template.VirtualMachineTemplate;
 +import com.cloud.user.Account;
 +import com.cloud.user.AccountManager;
 +import com.cloud.user.AccountService;
 +import com.cloud.user.ResourceLimitService;
 +import com.cloud.user.SSHKeyPair;
 +import com.cloud.user.SSHKeyPairVO;
 +import com.cloud.user.User;
 +import com.cloud.user.UserStatisticsVO;
 +import com.cloud.user.UserVO;
 +import com.cloud.user.VmDiskStatisticsVO;
 +import com.cloud.user.dao.AccountDao;
 +import com.cloud.user.dao.SSHKeyPairDao;
 +import com.cloud.user.dao.UserDao;
 +import com.cloud.user.dao.UserStatisticsDao;
 +import com.cloud.user.dao.VmDiskStatisticsDao;
 +import com.cloud.uservm.UserVm;
 +import com.cloud.utils.DateUtil;
 +import com.cloud.utils.Journal;
 +import com.cloud.utils.NumbersUtil;
 +import com.cloud.utils.Pair;
 +import com.cloud.utils.component.ManagerBase;
 +import com.cloud.utils.concurrency.NamedThreadFactory;
 +import com.cloud.utils.crypt.DBEncryptionUtil;
 +import com.cloud.utils.crypt.RSAHelper;
 +import com.cloud.utils.db.DB;
 +import com.cloud.utils.db.EntityManager;
 +import com.cloud.utils.db.GlobalLock;
 +import com.cloud.utils.db.SearchCriteria;
 +import com.cloud.utils.db.Transaction;
 +import com.cloud.utils.db.TransactionCallbackNoReturn;
 +import com.cloud.utils.db.TransactionCallbackWithException;
 +import com.cloud.utils.db.TransactionCallbackWithExceptionNoReturn;
 +import com.cloud.utils.db.TransactionStatus;
 +import com.cloud.utils.db.UUIDManager;
 +import com.cloud.utils.exception.CloudRuntimeException;
 +import com.cloud.utils.exception.ExecutionException;
 +import com.cloud.utils.fsm.NoTransitionException;
 +import com.cloud.utils.net.NetUtils;
 +import com.cloud.vm.VirtualMachine.State;
 +import com.cloud.vm.dao.DomainRouterDao;
 +import com.cloud.vm.dao.InstanceGroupDao;
 +import com.cloud.vm.dao.InstanceGroupVMMapDao;
 +import com.cloud.vm.dao.NicDao;
 +import com.cloud.vm.dao.NicExtraDhcpOptionDao;
 +import com.cloud.vm.dao.UserVmDao;
 +import com.cloud.vm.dao.UserVmDetailsDao;
 +import com.cloud.vm.dao.VMInstanceDao;
 +import com.cloud.vm.snapshot.VMSnapshotManager;
 +import com.cloud.vm.snapshot.VMSnapshotVO;
 +import com.cloud.vm.snapshot.dao.VMSnapshotDao;
 +
 +
 +public class UserVmManagerImpl extends ManagerBase implements UserVmManager, VirtualMachineGuru, UserVmService, Configurable {
 +    private static final Logger s_logger = Logger.getLogger(UserVmManagerImpl.class);
 +
 +    /**
 +     * The number of seconds to wait before timing out when trying to acquire a global lock.
 +     */
 +    private static final int ACQUIRE_GLOBAL_LOCK_TIMEOUT_FOR_COOPERATION = 3;
 +
 +    private static final long GiB_TO_BYTES = 1024 * 1024 * 1024;
 +
 +    @Inject
 +    private EntityManager _entityMgr;
 +    @Inject
 +    private HostDao _hostDao;
 +    @Inject
 +    private ServiceOfferingDao _offeringDao;
 +    @Inject
 +    private DiskOfferingDao _diskOfferingDao;
 +    @Inject
 +    private VMTemplateDao _templateDao;
 +    @Inject
 +    private VMTemplateZoneDao _templateZoneDao;
 +    @Inject
 +    private TemplateDataStoreDao _templateStoreDao;
 +    @Inject
 +    private DomainDao _domainDao;
 +    @Inject
 +    private UserVmDao _vmDao;
 +    @Inject
 +    private VolumeDao _volsDao;
 +    @Inject
 +    private DataCenterDao _dcDao;
 +    @Inject
 +    private FirewallRulesDao _rulesDao;
 +    @Inject
 +    private LoadBalancerVMMapDao _loadBalancerVMMapDao;
 +    @Inject
 +    private PortForwardingRulesDao _portForwardingDao;
 +    @Inject
 +    private IPAddressDao _ipAddressDao;
 +    @Inject
 +    private HostPodDao _podDao;
 +    @Inject
 +    private NetworkModel _networkModel;
 +    @Inject
 +    private NetworkOrchestrationService _networkMgr;
 +    @Inject
 +    private AgentManager _agentMgr;
 +    @Inject
 +    private ConfigurationManager _configMgr;
 +    @Inject
 +    private AccountDao _accountDao;
 +    @Inject
 +    private UserDao _userDao;
 +    @Inject
 +    private SnapshotDao _snapshotDao;
 +    @Inject
 +    private GuestOSDao _guestOSDao;
 +    @Inject
 +    private HighAvailabilityManager _haMgr;
 +    @Inject
 +    private AlertManager _alertMgr;
 +    @Inject
 +    private AccountManager _accountMgr;
 +    @Inject
 +    private AccountService _accountService;
 +    @Inject
 +    private ClusterDao _clusterDao;
 +    @Inject
 +    private PrimaryDataStoreDao _storagePoolDao;
 +    @Inject
 +    private SecurityGroupManager _securityGroupMgr;
 +    @Inject
 +    private ServiceOfferingDao _serviceOfferingDao;
 +    @Inject
 +    private NetworkOfferingDao _networkOfferingDao;
 +    @Inject
 +    private InstanceGroupDao _vmGroupDao;
 +    @Inject
 +    private InstanceGroupVMMapDao _groupVMMapDao;
 +    @Inject
 +    private VirtualMachineManager _itMgr;
 +    @Inject
 +    private NetworkDao _networkDao;
 +    @Inject
 +    private NicDao _nicDao;
 +    @Inject
 +    private RulesManager _rulesMgr;
 +    @Inject
 +    private LoadBalancingRulesManager _lbMgr;
 +    @Inject
 +    private SSHKeyPairDao _sshKeyPairDao;
 +    @Inject
 +    private UserVmDetailsDao _vmDetailsDao;
 +    @Inject
 +    private HypervisorCapabilitiesDao _hypervisorCapabilitiesDao;
 +    @Inject
 +    private SecurityGroupDao _securityGroupDao;
 +    @Inject
 +    private CapacityManager _capacityMgr;
 +    @Inject
 +    private VMInstanceDao _vmInstanceDao;
 +    @Inject
 +    private ResourceLimitService _resourceLimitMgr;
 +    @Inject
 +    private FirewallManager _firewallMgr;
 +    @Inject
 +    private ResourceManager _resourceMgr;
 +    @Inject
 +    private NetworkServiceMapDao _ntwkSrvcDao;
 +    @Inject
 +    private PhysicalNetworkDao _physicalNetworkDao;
 +    @Inject
 +    private VpcManager _vpcMgr;
 +    @Inject
 +    private TemplateManager _templateMgr;
 +    @Inject
 +    private GuestOSCategoryDao _guestOSCategoryDao;
 +    @Inject
 +    private UsageEventDao _usageEventDao;
 +    @Inject
 +    private VmDiskStatisticsDao _vmDiskStatsDao;
 +    @Inject
 +    private VMSnapshotDao _vmSnapshotDao;
 +    @Inject
 +    private VMSnapshotManager _vmSnapshotMgr;
 +    @Inject
 +    private AffinityGroupVMMapDao _affinityGroupVMMapDao;
 +    @Inject
 +    private AffinityGroupDao _affinityGroupDao;
 +    @Inject
 +    private DedicatedResourceDao _dedicatedDao;
 +    @Inject
 +    private AffinityGroupService _affinityGroupService;
 +    @Inject
 +    private PlannerHostReservationDao _plannerHostReservationDao;
 +    @Inject
 +    private ServiceOfferingDetailsDao serviceOfferingDetailsDao;
 +    @Inject
 +    private UserStatisticsDao _userStatsDao;
 +    @Inject
 +    private VlanDao _vlanDao;
 +    @Inject
 +    private VolumeService _volService;
 +    @Inject
 +    private VolumeDataFactory volFactory;
 +    @Inject
 +    private UserVmDetailsDao _uservmDetailsDao;
 +    @Inject
 +    private UUIDManager _uuidMgr;
 +    @Inject
 +    private DeploymentPlanningManager _planningMgr;
 +    @Inject
 +    private VolumeApiService _volumeService;
 +    @Inject
 +    private DataStoreManager _dataStoreMgr;
 +    @Inject
 +    private VpcVirtualNetworkApplianceManager _virtualNetAppliance;
 +    @Inject
 +    private DomainRouterDao _routerDao;
 +    @Inject
 +    private VMNetworkMapDao _vmNetworkMapDao;
 +    @Inject
 +    private IpAddressManager _ipAddrMgr;
 +    @Inject
 +    private NicExtraDhcpOptionDao _nicExtraDhcpOptionDao;
 +    @Inject
 +    private TemplateApiService _tmplService;
 +    @Inject
 +    private ConfigurationDao _configDao;
 +
 +    private ScheduledExecutorService _executor = null;
 +    private ScheduledExecutorService _vmIpFetchExecutor = null;
 +    private int _expungeInterval;
 +    private int _expungeDelay;
 +    private boolean _dailyOrHourly = false;
 +    private int capacityReleaseInterval;
 +    private ExecutorService _vmIpFetchThreadExecutor;
 +
 +
 +    private String _instance;
 +    private boolean _instanceNameFlag;
 +    private int _scaleRetry;
 +    private Map<Long, VmAndCountDetails> vmIdCountMap = new ConcurrentHashMap<>();
 +
 +    private static final int MAX_HTTP_GET_LENGTH = 2 * MAX_USER_DATA_LENGTH_BYTES;
 +    private static final int MAX_HTTP_POST_LENGTH = 16 * MAX_USER_DATA_LENGTH_BYTES;
 +
 +    @Inject
 +    private OrchestrationService _orchSrvc;
 +
 +    @Inject
 +    private VolumeOrchestrationService volumeMgr;
 +
 +    @Inject
 +    private ManagementService _mgr;
 +
 +    private static final ConfigKey<Integer> VmIpFetchWaitInterval = new ConfigKey<Integer>("Advanced", Integer.class, "externaldhcp.vmip.retrieval.interval", "180",
 +            "Wait Interval (in seconds) for shared network vm dhcp ip addr fetch for next iteration ", true);
 +
 +    private static final ConfigKey<Integer> VmIpFetchTrialMax = new ConfigKey<Integer>("Advanced", Integer.class, "externaldhcp.vmip.max.retry", "10",
 +            "The max number of retrieval times for shared entwork vm dhcp ip fetch, in case of failures", true);
 +
 +    private static final ConfigKey<Integer> VmIpFetchThreadPoolMax = new ConfigKey<Integer>("Advanced", Integer.class, "externaldhcp.vmipFetch.threadPool.max", "10",
 +            "number of threads for fetching vms ip address", true);
 +
 +    private static final ConfigKey<Integer> VmIpFetchTaskWorkers = new ConfigKey<Integer>("Advanced", Integer.class, "externaldhcp.vmipfetchtask.workers", "10",
 +            "number of worker threads for vm ip fetch task ", true);
 +
 +    private static final ConfigKey<Boolean> AllowDeployVmIfGivenHostFails = new ConfigKey<Boolean>("Advanced", Boolean.class, "allow.deploy.vm.if.deploy.on.given.host.fails", "false",
 +            "allow vm to deploy on different host if vm fails to deploy on the given host ", true);
 +
 +
 +    @Override
 +    public UserVmVO getVirtualMachine(long vmId) {
 +        return _vmDao.findById(vmId);
 +    }
 +
 +    @Override
 +    public List<? extends UserVm> getVirtualMachines(long hostId) {
 +        return _vmDao.listByHostId(hostId);
 +    }
 +
 +    private void resourceLimitCheck(Account owner, Boolean displayVm, Long cpu, Long memory) throws ResourceAllocationException {
 +        _resourceLimitMgr.checkResourceLimit(owner, ResourceType.user_vm, displayVm);
 +        _resourceLimitMgr.checkResourceLimit(owner, ResourceType.cpu, displayVm, cpu);
 +        _resourceLimitMgr.checkResourceLimit(owner, ResourceType.memory, displayVm, memory);
 +    }
 +
 +    private void resourceCountIncrement(long accountId, Boolean displayVm, Long cpu, Long memory) {
 +        _resourceLimitMgr.incrementResourceCount(accountId, ResourceType.user_vm, displayVm);
 +        _resourceLimitMgr.incrementResourceCount(accountId, ResourceType.cpu, displayVm, cpu);
 +        _resourceLimitMgr.incrementResourceCount(accountId, ResourceType.memory, displayVm, memory);
 +    }
 +
 +    private void resourceCountDecrement(long accountId, Boolean displayVm, Long cpu, Long memory) {
 +        _resourceLimitMgr.decrementResourceCount(accountId, ResourceType.user_vm, displayVm);
 +        _resourceLimitMgr.decrementResourceCount(accountId, ResourceType.cpu, displayVm, cpu);
 +        _resourceLimitMgr.decrementResourceCount(accountId, ResourceType.memory, displayVm, memory);
 +    }
 +
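 +    // Holds the number of IP-fetch attempts remaining for a VM; stored in vmIdCountMap keyed by nic id.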
 +    public class VmAndCountDetails {
 +        long vmId;
 +        int retrievalCount = VmIpFetchTrialMax.value();
 +
 +
 +        public VmAndCountDetails() {
 +        }
 +
 +        public VmAndCountDetails(long vmId, int retrievalCount) {
 +            this.vmId = vmId;
 +            this.retrievalCount = retrievalCount;
 +        }
 +
 +        public VmAndCountDetails(long vmId) {
 +            this.vmId = vmId;
 +        }
 +
 +        public int getRetrievalCount() {
 +            return retrievalCount;
 +        }
 +
 +        public void setRetrievalCount(int retrievalCount) {
 +            this.retrievalCount = retrievalCount;
 +        }
 +
 +        public long getVmId() {
 +            return vmId;
 +        }
 +
 +        public void setVmId(long vmId) {
 +            this.vmId = vmId;
 +        }
 +
 +        public void decrementCount() {
 +            this.retrievalCount--;
 +        }
 +    }
 +
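 +    // Background task that asks the host agent for a VM's externally assigned DHCP IP
 +    // (via GetVmIpAddressCommand) and records the result on the nic row, decrementing
 +    // the retry count in vmIdCountMap when retrieval fails.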
 +    private class VmIpAddrFetchThread extends ManagedContextRunnable {
 +
 +
 +        long nicId;
 +        long vmId;
 +        String vmName;
 +        boolean isWindows;
 +        Long hostId;
 +        String networkCidr;
 +
 +        public VmIpAddrFetchThread(long vmId, long nicId, String instanceName, boolean windows, Long hostId, String networkCidr) {
 +            this.vmId = vmId;
 +            this.nicId = nicId;
 +            this.vmName = instanceName;
 +            this.isWindows = windows;
 +            this.hostId = hostId;
 +            this.networkCidr = networkCidr;
 +        }
 +
 +        @Override
 +        protected void runInContext() {
 +            GetVmIpAddressCommand cmd = new GetVmIpAddressCommand(vmName, networkCidr, isWindows);
 +            boolean decrementCount = true;
 +
 +            try {
 +                s_logger.debug("Trying IP retrieval for vm " + vmId + " nic id " + nicId + " ...");
 +                Answer answer = _agentMgr.send(hostId, cmd);
 +                NicVO nic = _nicDao.findById(nicId);
 +                if (answer.getResult()) {
 +                    String vmIp = answer.getDetails();
 +
 +                    if (NetUtils.isValidIp4(vmIp)) {
 +                        // set this vm ip addr in vm nic.
 +                        if (nic != null) {
 +                            nic.setIPv4Address(vmIp);
 +                            _nicDao.update(nicId, nic);
 +                            s_logger.debug("Vm " + vmId + " IP " + vmIp + " retrieved successfully");
 +                            vmIdCountMap.remove(nicId);
 +                            decrementCount = false;
 +                            ActionEventUtils.onActionEvent(User.UID_SYSTEM, Account.ACCOUNT_ID_SYSTEM,
 +                                    Domain.ROOT_DOMAIN, EventTypes.EVENT_NETWORK_EXTERNAL_DHCP_VM_IPFETCH,
 +                                    "VM " + vmId + " nic id " + nicId + " ip address " + vmIp + " got fetched successfully");
 +                        }
 +                    }
 +                } else {
 +                    // Previously the vm had an ip and the nic table still holds it; if the vm
 +                    // does not get an ip after a restart or stop/start, clear the ip in the nic table.
 +                    if (nic != null && nic.getIPv4Address() != null) {
 +                        nic.setIPv4Address(null);
 +                        _nicDao.update(nicId, nic);
 +                    }
 +                    if (answer.getDetails() != null) {
 +                        s_logger.debug("Failed to get vm ip for Vm " + vmId + ": " + answer.getDetails());
 +                    }
 +                }
 +            } catch (OperationTimedoutException e) {
 +                s_logger.warn("Timed Out", e);
 +            } catch (AgentUnavailableException e) {
 +                s_logger.warn("Agent Unavailable ", e);
 +            } finally {
 +                if (decrementCount) {
 +                    VmAndCountDetails vmAndCount = vmIdCountMap.get(nicId);
 +                    vmAndCount.decrementCount();
 +                    s_logger.debug("IP not retrieved for VM " + vmId + " nic " + nicId + "; decremented count to " + vmAndCount.getRetrievalCount());
 +                    vmIdCountMap.put(nicId, vmAndCount);
 +                }
 +            }
 +        }
 +    }
 +
- 
- 
- 
- 
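 +    // Validates that the template is password-enabled and the VM is stopped before
 +    // delegating to resetVMPasswordInternal, then persists the new password on success.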
 +    @Override
 +    @ActionEvent(eventType = EventTypes.EVENT_VM_RESETPASSWORD, eventDescription = "resetting Vm password", async = true)
 +    public UserVm resetVMPassword(ResetVMPasswordCmd cmd, String password) throws ResourceUnavailableException, InsufficientCapacityException {
 +        Account caller = CallContext.current().getCallingAccount();
 +        Long vmId = cmd.getId();
 +        UserVmVO userVm = _vmDao.findById(cmd.getId());
 +
 +        // Do parameters input validation
 +        if (userVm == null) {
 +            throw new InvalidParameterValueException("unable to find a virtual machine with id " + cmd.getId());
 +        }
 +
 +        _vmDao.loadDetails(userVm);
 +
 +        VMTemplateVO template = _templateDao.findByIdIncludingRemoved(userVm.getTemplateId());
 +        if (template == null || !template.getEnablePassword()) {
 +            throw new InvalidParameterValueException("Failed to reset password for the virtual machine; the template is not password enabled");
 +        }
 +
 +        if (userVm.getState() == State.Error || userVm.getState() == State.Expunging) {
 +            s_logger.error("vm is not in the right state: " + vmId);
 +            throw new InvalidParameterValueException("Vm with id " + vmId + " is not in the right state");
 +        }
 +
++        if (userVm.getState() != State.Stopped) {
++            s_logger.error("vm is not in the right state: " + vmId);
++            throw new InvalidParameterValueException("Vm " + userVm + " should be stopped to do password reset");
++        }
++
 +        _accountMgr.checkAccess(caller, null, true, userVm);
 +
 +        boolean result = resetVMPasswordInternal(vmId, password);
 +
 +        if (result) {
 +            userVm.setPassword(password);
 +            // update the password in vm_details table too
 +            // Check if an SSH key pair was selected for the instance and if so
 +            // use it to encrypt & save the vm password
 +            encryptAndStorePassword(userVm, password);
 +        } else {
 +            throw new CloudRuntimeException("Failed to reset password for the virtual machine");
 +        }
 +
 +        return userVm;
 +    }
 +
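 +    // Saves the new password through the network's UserData provider on the default nic;
 +    // on success it persists the (encrypted) password and reboots the VM if it is running.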
 +    private boolean resetVMPasswordInternal(Long vmId, String password) throws ResourceUnavailableException, InsufficientCapacityException {
 +        Long userId = CallContext.current().getCallingUserId();
 +        VMInstanceVO vmInstance = _vmDao.findById(vmId);
 +
 +        if (password == null || password.equals("")) {
 +            return false;
 +        }
 +
 +        VMTemplateVO template = _templateDao.findByIdIncludingRemoved(vmInstance.getTemplateId());
 +        if (template.getEnablePassword()) {
 +            Nic defaultNic = _networkModel.getDefaultNic(vmId);
 +            if (defaultNic == null) {
 +                s_logger.error("Unable to reset password for vm " + vmInstance + " as the instance doesn't have default nic");
 +                return false;
 +            }
 +
 +            Network defaultNetwork = _networkDao.findById(defaultNic.getNetworkId());
 +            NicProfile defaultNicProfile = new NicProfile(defaultNic, defaultNetwork, null, null, null, _networkModel.isSecurityGroupSupportedInNetwork(defaultNetwork),
 +                    _networkModel.getNetworkTag(template.getHypervisorType(), defaultNetwork));
 +            VirtualMachineProfile vmProfile = new VirtualMachineProfileImpl(vmInstance);
 +            vmProfile.setParameter(VirtualMachineProfile.Param.VmPassword, password);
 +
 +            UserDataServiceProvider element = _networkMgr.getPasswordResetProvider(defaultNetwork);
 +            if (element == null) {
 +                throw new CloudRuntimeException("Can't find network element for " + Service.UserData.getName() + " provider needed for password reset");
 +            }
 +
 +            boolean result = element.savePassword(defaultNetwork, defaultNicProfile, vmProfile);
 +
 +            // Need to reboot the virtual machine so that the password gets
 +            // redownloaded from the DomR, and reset on the VM
 +            if (!result) {
 +                s_logger.debug("Failed to reset password for the virtual machine; no need to reboot the vm");
 +                return false;
 +            } else {
++                final UserVmVO userVm = _vmDao.findById(vmId);
++                _vmDao.loadDetails(userVm);
++                userVm.setPassword(password);
++                // update the password in vm_details table too
++                // Check if an SSH key pair was selected for the instance and if so
++                // use it to encrypt & save the vm password
++                encryptAndStorePassword(userVm, password);
++
 +                if (vmInstance.getState() == State.Stopped) {
 +                    s_logger.debug("Vm " + vmInstance + " is stopped, not rebooting it as a part of password reset");
 +                    return true;
 +                }
 +
 +                if (rebootVirtualMachine(userId, vmId) == null) {
 +                    s_logger.warn("Failed to reboot the vm " + vmInstance);
 +                    return false;
 +                } else {
 +                    s_logger.debug("Vm " + vmInstance + " is rebooted successfully as a part of password reset");
 +                    return true;
 +                }
 +            }
 +        } else {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("Reset password called for a vm that is not using a password enabled template");
 +            }
 +            return false;
 +        }
 +    }
 +
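 +    // API entry point for SSH key reset: requires a stopped VM and an existing key pair
 +    // owned by the account; also generates a fresh password when the template is password-enabled.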
 +    @Override
 +    @ActionEvent(eventType = EventTypes.EVENT_VM_RESETSSHKEY, eventDescription = "resetting Vm SSHKey", async = true)
 +    public UserVm resetVMSSHKey(ResetVMSSHKeyCmd cmd) throws ResourceUnavailableException, InsufficientCapacityException {
 +
 +        Account caller = CallContext.current().getCallingAccount();
 +        Account owner = _accountMgr.finalizeOwner(caller, cmd.getAccountName(), cmd.getDomainId(), cmd.getProjectId());
 +        Long vmId = cmd.getId();
 +
 +        UserVmVO userVm = _vmDao.findById(cmd.getId());
 +        if (userVm == null) {
 +            throw new InvalidParameterValueException("unable to find a virtual machine by id " + cmd.getId());
 +        }
 +
 +        _vmDao.loadDetails(userVm);
 +        VMTemplateVO template = _templateDao.findByIdIncludingRemoved(userVm.getTemplateId());
 +
 +        // Do parameters input validation
 +
 +        if (userVm.getState() == State.Error || userVm.getState() == State.Expunging) {
 +            s_logger.error("vm is not in the right state: " + vmId);
 +            throw new InvalidParameterValueException("Vm with specified id is not in the right state");
 +        }
 +        if (userVm.getState() != State.Stopped) {
 +            s_logger.error("vm is not in the right state: " + vmId);
 +            throw new InvalidParameterValueException("Vm " + userVm + " should be stopped to do SSH Key reset");
 +        }
 +
 +        SSHKeyPairVO s = _sshKeyPairDao.findByName(owner.getAccountId(), owner.getDomainId(), cmd.getName());
 +        if (s == null) {
 +            throw new InvalidParameterValueException("A key pair with name '" + cmd.getName() + "' does not exist for account " + owner.getAccountName()
 +            + " in the specified domain");
 +        }
 +
 +        _accountMgr.checkAccess(caller, null, true, userVm);
 +        String password = null;
 +        String sshPublicKey = s.getPublicKey();
 +        if (template != null && template.getEnablePassword()) {
 +            password = _mgr.generateRandomPassword();
 +        }
 +
 +        boolean result = resetVMSSHKeyInternal(vmId, sshPublicKey, password);
 +
-         if (result) {
-             userVm.setDetail("SSH.PublicKey", sshPublicKey);
-             if (template != null && template.getEnablePassword()) {
-                 userVm.setPassword(password);
-                 //update the encrypted password in vm_details table too
-                 encryptAndStorePassword(userVm, password);
-             }
-             _vmDao.saveDetails(userVm);
-         } else {
++        if (!result) {
 +            throw new CloudRuntimeException("Failed to reset SSH Key for the virtual machine");
 +        }
 +        return userVm;
 +    }
 +
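 +    // Mirrors resetVMPasswordInternal: pushes the public key through the network's SSH key
 +    // reset provider, persists it in vm_details, and reboots the VM unless it is stopped.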
 +    private boolean resetVMSSHKeyInternal(Long vmId, String sshPublicKey, String password) throws ResourceUnavailableException, InsufficientCapacityException {
 +        Long userId = CallContext.current().getCallingUserId();
 +        VMInstanceVO vmInstance = _vmDao.findById(vmId);
 +
 +        VMTemplateVO template = _templateDao.findByIdIncludingRemoved(vmInstance.getTemplateId());
 +        Nic defaultNic = _networkModel.getDefaultNic(vmId);
 +        if (defaultNic == null) {
 +            s_logger.error("Unable to reset SSH Key for vm " + vmInstance + " as the instance doesn't have default nic");
 +            return false;
 +        }
 +
 +        Network defaultNetwork = _networkDao.findById(defaultNic.getNetworkId());
 +        NicProfile defaultNicProfile = new NicProfile(defaultNic, defaultNetwork, null, null, null, _networkModel.isSecurityGroupSupportedInNetwork(defaultNetwork),
 +                _networkModel.getNetworkTag(template.getHypervisorType(), defaultNetwork));
 +
 +        VirtualMachineProfile vmProfile = new VirtualMachineProfileImpl(vmInstance);
 +
 +        if (template.getEnablePassword()) {
 +            vmProfile.setParameter(VirtualMachineProfile.Param.VmPassword, password);
 +        }
 +
 +        UserDataServiceProvider element = _networkMgr.getSSHKeyResetProvider(defaultNetwork);
 +        if (element == null) {
 +            throw new CloudRuntimeException("Can't find network element for " + Service.UserData.getName() + " provider needed for SSH Key reset");
 +        }
 +        boolean result = element.saveSSHKey(defaultNetwork, defaultNicProfile, vmProfile, sshPublicKey);
 +
 +        // Need to reboot the virtual machine so that the password gets redownloaded from the DomR, and reset on the VM
 +        if (!result) {
 +            s_logger.debug("Failed to reset SSH Key for the virtual machine; no need to reboot the vm");
 +            return false;
 +        } else {
++            final UserVmVO userVm = _vmDao.findById(vmId);
++            _vmDao.loadDetails(userVm);
++            userVm.setDetail("SSH.PublicKey", sshPublicKey);
++            if (template.getEnablePassword()) {
++                userVm.setPassword(password);
++                //update the encrypted password in vm_details table too
++                encryptAndStorePassword(userVm, password);
++            }
++            _vmDao.saveDetails(userVm);
++
 +            if (vmInstance.getState() == State.Stopped) {
 +                s_logger.debug("Vm " + vmInstance + " is stopped, not rebooting it as a part of SSH Key reset");
 +                return true;
 +            }
 +            if (rebootVirtualMachine(userId, vmId) == null) {
 +                s_logger.warn("Failed to reboot the vm " + vmInstance);
 +                return false;
 +            } else {
 +                s_logger.debug("Vm " + vmInstance + " is rebooted successfully as a part of SSH Key reset");
 +                return true;
 +            }
 +        }
 +    }
 +
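 +    // Delegates the stop to the orchestration layer; a VM that is already removed is
 +    // treated as successfully stopped.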
 +    @Override
 +    public boolean stopVirtualMachine(long userId, long vmId) {
 +        boolean status = false;
 +        if (s_logger.isDebugEnabled()) {
 +            s_logger.debug("Stopping vm=" + vmId);
 +        }
 +        UserVmVO vm = _vmDao.findById(vmId);
 +        if (vm == null || vm.getRemoved() != null) {
 +            if (s_logger.isDebugEnabled()) {
 +                s_logger.debug("VM is either removed or deleted.");
 +            }
 +            return true;
 +        }
 +
 +        _userDao.findById(userId);
 +        try {
 +            VirtualMachineEntity vmEntity = _orchSrvc.getVirtualMachine(vm.getUuid());
 +            status = vmEntity.stop(Long.toString(userId));
 +        } catch (ResourceUnavailableException e) {
 +            s_logger.debug("Unable to stop due to ", e);
 +            status = false;
 +        } catch (CloudException e) {
 +            throw new CloudRuntimeException("Unable to contact the agent to stop the virtual machine " + vm, e);
 +        }
 +        return status;
 +    }
 +
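 +    // Collects disk and network stats before the reboot and, in advanced zones, serially
 +    // starts any stopped routers on the VM's networks before issuing the reboot.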
 +    private UserVm rebootVirtualMachine(long userId, long vmId) throws InsufficientCapacityException, ResourceUnavailableException {
 +        UserVmVO vm = _vmDao.findById(vmId);
 +
 +        if (vm == null || vm.getState() == State.Destroyed || vm.getState() == State.Expunging || vm.getRemoved() != null) {
 +            s_logger.warn("Vm id=" + vmId + " doesn't exist");
 +            return null;
 +        }
 +
 +        if (vm.getState() == State.Running && vm.getHostId() != null) {
 +            collectVmDiskStatistics(vm);
 +            collectVmNetworkStatistics(vm);
 +            DataCenterVO dc = _dcDao.findById(vm.getDataCenterId());
 +            try {
 +                if (dc.getNetworkType() == DataCenter.NetworkType.Advanced) {
 +                    //List all networks of vm
 +                    List<Long> vmNetworks = _vmNetworkMapDao.getNetworks(vmId);
 +                    List<DomainRouterVO> routers = new ArrayList<DomainRouterVO>();
 +                    //List the stopped routers
 +                    for(long vmNetworkId : vmNetworks) {
 +                        List<DomainRouterVO> router = _routerDao.listStopped(vmNetworkId);
 +                        routers.addAll(router);
 +                    }
 +                    // A vm may not have many nics attached, and even fewer routers might be stopped (only in exceptional cases).
 +                    // It is safe to start the stopped routers serially; this is consistent with how multiple networks are added to a vm
 +                    // during deploy and routers are started serially. May revisit to make this process parallel.
 +                    for(DomainRouterVO routerToStart : routers) {
 +                        s_logger.warn("Trying to start router " + routerToStart.getInstanceName() + " as part of vm: " + vm.getInstanceName() + " reboot");
 +                        _virtualNetAppliance.startRouter(routerToStart.getId(), true);
 +                    }
 +                }
 +            } catch (ConcurrentOperationException e) {
 +                throw new CloudRuntimeException("Concurrent operations on starting router", e);
 +            } catch (Exception ex) {
 +                throw new CloudRuntimeException("Router start failed due to " + ex, ex);
 +            } finally {
 +                s_logger.info("Rebooting vm " + vm.getInstanceName());
 +                _itMgr.reboot(vm.getUuid(), null);
 +            }
 +            return _vmDao.findById(vmId);
 +        } else {
 +            s_logger.error("Vm id=" + vmId + " is not in Running state, failed to reboot");
 +            return null;
 +        }
 +    }
 +
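 +    // Legacy upgrade path that only applies to stopped VMs; slated for deprecation in
 +    // favour of ScaleVMCmd (see the note below).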
 +    @Override
 +    @ActionEvent(eventType = EventTypes.EVENT_VM_UPGRADE, eventDescription = "upgrading Vm")
 +    /*
 +     * TODO: cleanup eventually - Refactored API call
 +     */
 +    // This method will be deprecated as we use ScaleVMCmd for both stopped VMs and running VMs
 +    public UserVm upgradeVirtualMachine(UpgradeVMCmd cmd) throws ResourceAllocationException {
 +        Long vmId = cmd.getId();
 +        Long svcOffId = cmd.getServiceOfferingId();
 +        Account caller = CallContext.current().getCallingAccount();
 +
 +        // Verify input parameters
 +        //UserVmVO vmInstance = _vmDao.findById(vmId);
 +        VMInstanceVO vmInstance = _vmInstanceDao.findById(vmId);
 +        if (vmInstance == null) {
 +            throw new InvalidParameterValueException("unable to find a virtual machine with id " + vmId);
 +        } else if (!(vmInstance.getState().equals(State.Stopped))) {
 +            throw new InvalidParameterValueException("Unable to upgrade virtual machine " + vmInstance.toString() + " in state " + vmInstance.getState()
 +            + "; make sure the virtual machine is stopped");
 +        }
 +
 +        _accountMgr.checkAccess(caller, null, true, vmInstance);
 +
 +        // Check resource limits for CPU and Memory.
 +        Map<String, String> customParameters = cmd.getDetails();
 +        ServiceOfferingVO newServiceOffering = _offeringDao.findById(svcOffId);
 +        if (newServiceOffering.isDynamic()) {
 +            newServiceOffering.setDynamicFlag(true);
 +            validateCustomParameters(newServiceOffering, cmd.getDetails());
... 5474 lines suppressed ...
