Posted to dev@cloudstack.apache.org by sateesh-chodapuneedi <gi...@git.apache.org> on 2016/12/23 13:41:33 UTC

[GitHub] cloudstack pull request #1861: CLOUDSTACK-9698 Make hardcoded wait timeout ...

GitHub user sateesh-chodapuneedi opened a pull request:

    https://github.com/apache/cloudstack/pull/1861

    CLOUDSTACK-9698 Make hardcoded wait timeout for NIC adapter hotplug configurable

    Jira
    ===
    CLOUDSTACK-9698 Make hardcoded wait timeout for NIC adapter hotplug configurable
    
    Description
    =========
    Currently ACS waits a hard-coded 15 seconds for a hot-plugged NIC in the VR to be detected by the guest OS.
    The time taken to detect a hot-plugged NIC depends on the NIC adapter type (E1000, VMXNET3, E1000e, etc.)
    and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
    in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
    Making the wait timeout for NIC adapter hotplug configurable helps admins in such scenarios.
    
    Also, if VMware introduces new NIC adapter types in the future that take longer to be detected by the guest OS,
    it is good to have the flexibility of configuring the wait timeout as a fallback mechanism.
    
    Fix
    ===
    Introduce a new configuration parameter (via ConfigKey), "vmware.nic.hotplug.wait.timeout" ("Wait timeout (milli seconds) for hot plugged NIC of VM to be detected by guest OS."), instead of the hard-coded timeout, to give admins flexibility in the scenarios listed above (see the sketch below).
    
    Signed-off-by: Sateesh Chodapuneedi <sa...@accelerite.com>
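    For illustration, the change amounts to declaring a ConfigKey and reading it where the 15000 ms
    literal was used (sketch only; the exact class hosting the key and the final default are settled
    in the review comments below):
    
        import org.apache.cloudstack.framework.config.ConfigKey;
    
        // Sketch of the new key; the final boolean marks it as non-dynamic, so a changed
        // value takes effect on management server restart rather than mid-operation.
        public static final ConfigKey<Long> s_vmwareNicHotplugWaitTimeout = new ConfigKey<Long>(
                "Advanced", Long.class, "vmware.nic.hotplug.wait.timeout", "15000",
                "Wait timeout (milli seconds) for hot plugged NIC of VM to be detected by guest OS.", false);
    
        // Callers then read the configured value instead of the literal:
        //     long waitTimeoutMillis = s_vmwareNicHotplugWaitTimeout.value();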

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/sateesh-chodapuneedi/cloudstack pr-cloudstack-9698

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/cloudstack/pull/1861.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1861
    
----
commit 2ea7aadbac386f4d3a0e0062e1042e4266c24e91
Author: Sateesh Chodapuneedi <sa...@accelerite.com>
Date:   2016-12-23T00:51:04Z

    CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug configurable
    
    Currently ACS waits a hard-coded 15 seconds for a hot-plugged NIC in the VR to be detected by the guest OS.
    The time taken to detect a hot-plugged NIC depends on the NIC adapter type (E1000, VMXNET3, E1000e, etc.)
    and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
    in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
    Making the wait timeout for NIC adapter hotplug configurable helps admins in such scenarios.
    
    Also, if VMware introduces new NIC adapter types in the future that take longer to be detected by the guest OS,
    it is good to have the flexibility of configuring the wait timeout as a fallback mechanism.
    
    Signed-off-by: Sateesh Chodapuneedi <sa...@accelerite.com>

----



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by borisstoyanov <gi...@git.apache.org>.
Github user borisstoyanov commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    @blueorangutan test centos7 vmware-60u2



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by harikrishna-patnala <gi...@git.apache.org>.
Github user harikrishna-patnala commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    LGTM.



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by borisstoyanov <gi...@git.apache.org>.
Github user borisstoyanov commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    Yes @sateesh-chodapuneedi, this failure has been a pain for a while... it'll be good to invest some time in fixing it.



[GitHub] cloudstack pull request #1861: CLOUDSTACK-9698 Make hardcoded wait timeout ...

Posted by sateesh-chodapuneedi <gi...@git.apache.org>.
Github user sateesh-chodapuneedi commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/1861#discussion_r94216180
  
    --- Diff: plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java ---
    @@ -230,6 +230,7 @@
     import com.cloud.hypervisor.guru.VMwareGuru;
     import com.cloud.hypervisor.vmware.manager.VmwareHostService;
     import com.cloud.hypervisor.vmware.manager.VmwareManager;
    +import com.cloud.hypervisor.vmware.manager.VmwareManagerImpl;
    --- End diff --
    
    Yes, moved the parameter to VmwareManager itself.



[GitHub] cloudstack pull request #1861: CLOUDSTACK-9698 Make hardcoded wait timeout ...

Posted by sateesh-chodapuneedi <gi...@git.apache.org>.
Github user sateesh-chodapuneedi commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/1861#discussion_r94216151
  
    --- Diff: plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/manager/VmwareManagerImpl.java ---
    @@ -123,12 +125,14 @@
     import com.cloud.utils.ssh.SshHelper;
     import com.cloud.vm.DomainRouterVO;
     
    -public class VmwareManagerImpl extends ManagerBase implements VmwareManager, VmwareStorageMount, Listener, VmwareDatacenterService {
    +public class VmwareManagerImpl extends ManagerBase implements VmwareManager, VmwareStorageMount, Listener, VmwareDatacenterService, Configurable {
         private static final Logger s_logger = Logger.getLogger(VmwareManagerImpl.class);
     
         private static final int STARTUP_DELAY = 60000;                 // 60 seconds
         private static final long DEFAULT_HOST_SCAN_INTERVAL = 600000;     // every 10 minutes
     
    +    public static final ConfigKey<Long> s_vmwareNicHotplugWaitTimeout = new ConfigKey<Long>("Advanced", Long.class, "vmware.nic.hotplug.wait.timeout", "20000",
    --- End diff --
    
    Yes, previously it was intended, as this is just a timeout value for cases that need more time for hotplug device detection, and it does not introduce a delay in regular scenarios.
    
    But I left it as 15000 to keep the default behavior intact. If required, admins may modify the configuration parameter per their environment's requirements. Pushed the updated code change.



[GitHub] cloudstack pull request #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/cloudstack/pull/1861



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by sateesh-chodapuneedi <gi...@git.apache.org>.
Github user sateesh-chodapuneedi commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    Thanks @harikrishna-patnala. I did not make the parameter dynamic, to avoid unexpected timeout errors when the timeout is changed while VPC tier configurations are in progress.



[GitHub] cloudstack pull request #1861: CLOUDSTACK-9698 Make hardcoded wait timeout ...

Posted by koushik-das <gi...@git.apache.org>.
Github user koushik-das commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/1861#discussion_r94206556
  
    --- Diff: plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/manager/VmwareManagerImpl.java ---
    @@ -123,12 +125,14 @@
     import com.cloud.utils.ssh.SshHelper;
     import com.cloud.vm.DomainRouterVO;
     
    -public class VmwareManagerImpl extends ManagerBase implements VmwareManager, VmwareStorageMount, Listener, VmwareDatacenterService {
    +public class VmwareManagerImpl extends ManagerBase implements VmwareManager, VmwareStorageMount, Listener, VmwareDatacenterService, Configurable {
         private static final Logger s_logger = Logger.getLogger(VmwareManagerImpl.class);
     
         private static final int STARTUP_DELAY = 60000;                 // 60 seconds
         private static final long DEFAULT_HOST_SCAN_INTERVAL = 600000;     // every 10 minutes
     
    +    public static final ConfigKey<Long> s_vmwareNicHotplugWaitTimeout = new ConfigKey<Long>("Advanced", Long.class, "vmware.nic.hotplug.wait.timeout", "20000",
    --- End diff --
    
    Earlier the hardcoded wait was 15000 ms; now you have made the default 20000 ms. Is this intended?



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by blueorangutan <gi...@git.apache.org>.
Github user blueorangutan commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    @borisstoyanov a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by blueorangutan <gi...@git.apache.org>.
Github user blueorangutan commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    @borisstoyanov a Trillian-Jenkins test job (centos7 mgmt + vmware-60u2) has been kicked to run smoke tests



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by sateesh-chodapuneedi <gi...@git.apache.org>.
Github user sateesh-chodapuneedi commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    ping @karuturi @koushik-das 



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by sateesh-chodapuneedi <gi...@git.apache.org>.
Github user sateesh-chodapuneedi commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    Yes @karuturi, rebased with the latest master and changed the base branch of this PR to master. Please review.



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by sateesh-chodapuneedi <gi...@git.apache.org>.
Github user sateesh-chodapuneedi commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    @borisstoyanov, thanks for running the tests.
    
    I see one test failure in the above results; it has been failing in many other PRs as well and does not seem related to the changes here:
    
    `2017-02-21 00:56:54,625 - CRITICAL - FAILED: test_04_rvpc_privategw_static_routes: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run\n    testMethod()\n', '  File "/marvin/tests/smoke/test_privategw_acl.py", line 295, in test_04_rvpc_privategw_static_routes\n    self.performVPCTests(vpc_off)\n', '  File "/marvin/tests/smoke/test_privategw_acl.py", line 362, in performVPCTests\n    self.check_pvt_gw_connectivity(vm1, public_ip_1, [vm2.nic[0].ipaddress, vm1.nic[0].ipaddress])\n', '  File "/marvin/tests/smoke/test_privategw_acl.py", line 724, in check_pvt_gw_connectivity\n    "Ping to VM on Network Tier N from VM in Network Tier A should be successful at least for 2 out of 3 VMs"\n', '  File "/usr/lib64/python2.7/unittest/case.py", line 462, in assertTrue\n    raise self.failureException(msg)\n', 'AssertionError: Ping to VM on Network Tier N from VM in Network Tier A should be successful at least for 2 out of 3 VMs\n']
    `



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by sateesh-chodapuneedi <gi...@git.apache.org>.
Github user sateesh-chodapuneedi commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    @borisstoyanov Can you please run Trillian tests for this PR?



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by blueorangutan <gi...@git.apache.org>.
Github user blueorangutan commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    Packaging result: ✔ centos6 ✔ centos7 ✔ debian. JID-513



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by borisstoyanov <gi...@git.apache.org>.
Github user borisstoyanov commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    @blueorangutan package



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by blueorangutan <gi...@git.apache.org>.
Github user blueorangutan commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    Trillian test result (tid-857)
    Environment: vmware-60u2 (x2), Advanced Networking with Mgmt server 7
    Total time taken: 43900 seconds
    Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1861-t857-vmware-60u2.zip
    Intermittent failure detected: /marvin/tests/smoke/test_password_server.py
    Intermittent failure detected: /marvin/tests/smoke/test_privategw_acl.py
    Intermittent failure detected: /marvin/tests/smoke/test_routers_network_ops.py
    Intermittent failure detected: /marvin/tests/smoke/test_snapshots.py
    Test completed. 46 look ok, 3 have error(s)
    
    
    Test | Result | Time (s) | Test File
    --- | --- | --- | ---
    test_04_rvpc_privategw_static_routes | `Failure` | 839.08 | test_privategw_acl.py
    ContextSuite context=TestSnapshotRootDisk>:setup | `Error` | 0.00 | test_snapshots.py
    test_01_vpc_site2site_vpn | Success | 362.16 | test_vpc_vpn.py
    test_01_vpc_remote_access_vpn | Success | 152.29 | test_vpc_vpn.py
    test_01_redundant_vpc_site2site_vpn | Success | 568.38 | test_vpc_vpn.py
    test_02_VPC_default_routes | Success | 339.73 | test_vpc_router_nics.py
    test_01_VPC_nics_after_destroy | Success | 705.08 | test_vpc_router_nics.py
    test_05_rvpc_multi_tiers | Success | 622.89 | test_vpc_redundant.py
    test_04_rvpc_network_garbage_collector_nics | Success | 1524.82 | test_vpc_redundant.py
    test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 694.21 | test_vpc_redundant.py
    test_02_redundant_VPC_default_routes | Success | 634.97 | test_vpc_redundant.py
    test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1335.68 | test_vpc_redundant.py
    test_09_delete_detached_volume | Success | 31.39 | test_volumes.py
    test_06_download_detached_volume | Success | 45.51 | test_volumes.py
    test_05_detach_volume | Success | 100.28 | test_volumes.py
    test_04_delete_attached_volume | Success | 10.58 | test_volumes.py
    test_03_download_attached_volume | Success | 15.45 | test_volumes.py
    test_02_attach_volume | Success | 49.22 | test_volumes.py
    test_01_create_volume | Success | 507.32 | test_volumes.py
    test_03_delete_vm_snapshots | Success | 275.25 | test_vm_snapshots.py
    test_02_revert_vm_snapshots | Success | 222.18 | test_vm_snapshots.py
    test_01_test_vm_volume_snapshot | Success | 161.81 | test_vm_snapshots.py
    test_01_create_vm_snapshots | Success | 161.63 | test_vm_snapshots.py
    test_deploy_vm_multiple | Success | 267.76 | test_vm_life_cycle.py
    test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
    test_advZoneVirtualRouter | Success | 0.02 | test_vm_life_cycle.py
    test_10_attachAndDetach_iso | Success | 26.98 | test_vm_life_cycle.py
    test_09_expunge_vm | Success | 185.31 | test_vm_life_cycle.py
    test_08_migrate_vm | Success | 66.29 | test_vm_life_cycle.py
    test_07_restore_vm | Success | 0.11 | test_vm_life_cycle.py
    test_06_destroy_vm | Success | 5.13 | test_vm_life_cycle.py
    test_03_reboot_vm | Success | 5.15 | test_vm_life_cycle.py
    test_02_start_vm | Success | 20.23 | test_vm_life_cycle.py
    test_01_stop_vm | Success | 5.13 | test_vm_life_cycle.py
    test_CreateTemplateWithDuplicateName | Success | 206.46 | test_templates.py
    test_08_list_system_templates | Success | 0.03 | test_templates.py
    test_07_list_public_templates | Success | 0.04 | test_templates.py
    test_05_template_permissions | Success | 0.06 | test_templates.py
    test_04_extract_template | Success | 10.19 | test_templates.py
    test_03_delete_template | Success | 5.11 | test_templates.py
    test_02_edit_template | Success | 90.11 | test_templates.py
    test_01_create_template | Success | 105.83 | test_templates.py
    test_10_destroy_cpvm | Success | 236.82 | test_ssvm.py
    test_09_destroy_ssvm | Success | 268.80 | test_ssvm.py
    test_08_reboot_cpvm | Success | 156.59 | test_ssvm.py
    test_07_reboot_ssvm | Success | 158.47 | test_ssvm.py
    test_06_stop_cpvm | Success | 206.94 | test_ssvm.py
    test_05_stop_ssvm | Success | 203.91 | test_ssvm.py
    test_04_cpvm_internals | Success | 1.23 | test_ssvm.py
    test_03_ssvm_internals | Success | 3.67 | test_ssvm.py
    test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
    test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
    test_04_change_offering_small | Success | 93.45 | test_service_offerings.py
    test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
    test_02_edit_service_offering | Success | 0.06 | test_service_offerings.py
    test_01_create_service_offering | Success | 0.14 | test_service_offerings.py
    test_02_sys_template_ready | Success | 0.17 | test_secondary_storage.py
    test_01_sys_vm_start | Success | 0.18 | test_secondary_storage.py
    test_09_reboot_router | Success | 145.98 | test_routers.py
    test_08_start_router | Success | 151.01 | test_routers.py
    test_07_stop_router | Success | 20.35 | test_routers.py
    test_06_router_advanced | Success | 0.06 | test_routers.py
    test_05_router_basic | Success | 0.04 | test_routers.py
    test_04_restart_network_wo_cleanup | Success | 5.67 | test_routers.py
    test_03_restart_network_cleanup | Success | 181.59 | test_routers.py
    test_02_router_internal_adv | Success | 1.02 | test_routers.py
    test_01_router_internal_basic | Success | 0.54 | test_routers.py
    test_router_dns_guestipquery | Success | 83.69 | test_router_dns.py
    test_router_dns_externalipquery | Success | 0.06 | test_router_dns.py
    test_router_dhcphosts | Success | 137.03 | test_router_dhcphosts.py
    test_router_dhcp_opts | Success | 21.87 | test_router_dhcphosts.py
    test_01_updatevolumedetail | Success | 0.08 | test_resource_detail.py
    test_01_reset_vm_on_reboot | Success | 25.31 | test_reset_vm_on_reboot.py
    test_createRegion | Success | 0.04 | test_regions.py
    test_create_pvlan_network | Success | 5.23 | test_pvlan.py
    test_dedicatePublicIpRange | Success | 0.46 | test_public_ip_range.py
    test_03_vpc_privategw_restart_vpc_cleanup | Success | 1019.00 | test_privategw_acl.py
    test_02_vpc_privategw_static_routes | Success | 622.07 | test_privategw_acl.py
    test_01_vpc_privategw_acl | Success | 178.07 | test_privategw_acl.py
    test_01_primary_storage_nfs | Success | 36.16 | test_primary_storage.py
    test_createPortablePublicIPRange | Success | 10.18 | test_portable_publicip.py
    test_createPortablePublicIPAcquire | Success | 15.52 | test_portable_publicip.py
    test_isolate_network_password_server | Success | 97.28 | test_password_server.py
    test_UpdateStorageOverProvisioningFactor | Success | 0.14 | test_over_provisioning.py
    test_oobm_zchange_password | Success | 30.98 | test_outofbandmanagement.py
    test_oobm_multiple_mgmt_server_ownership | Success | 11.33 | test_outofbandmanagement.py
    test_oobm_issue_power_status | Success | 10.23 | test_outofbandmanagement.py
    test_oobm_issue_power_soft | Success | 20.66 | test_outofbandmanagement.py
    test_oobm_issue_power_reset | Success | 15.39 | test_outofbandmanagement.py
    test_oobm_issue_power_on | Success | 15.50 | test_outofbandmanagement.py
    test_oobm_issue_power_off | Success | 15.32 | test_outofbandmanagement.py
    test_oobm_issue_power_cycle | Success | 15.30 | test_outofbandmanagement.py
    test_oobm_enabledisable_across_clusterzones | Success | 87.58 | test_outofbandmanagement.py
    test_oobm_enable_feature_valid | Success | 5.16 | test_outofbandmanagement.py
    test_oobm_enable_feature_invalid | Success | 0.09 | test_outofbandmanagement.py
    test_oobm_disable_feature_valid | Success | 5.17 | test_outofbandmanagement.py
    test_oobm_disable_feature_invalid | Success | 0.12 | test_outofbandmanagement.py
    test_oobm_configure_invalid_driver | Success | 0.09 | test_outofbandmanagement.py
    test_oobm_configure_default_driver | Success | 0.20 | test_outofbandmanagement.py
    test_oobm_background_powerstate_sync | Success | 23.50 | test_outofbandmanagement.py
    test_extendPhysicalNetworkVlan | Success | 15.34 | test_non_contigiousvlan.py
    test_01_nic | Success | 490.30 | test_nic.py
    test_releaseIP | Success | 298.49 | test_network.py
    test_reboot_router | Success | 605.50 | test_network.py
    test_public_ip_user_account | Success | 10.25 | test_network.py
    test_public_ip_admin_account | Success | 40.34 | test_network.py
    test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | Success | 76.85 | test_network.py
    test_network_rules_acquired_public_ip_2_nat_rule | Success | 61.71 | test_network.py
    test_network_rules_acquired_public_ip_1_static_nat_rule | Success | 125.44 | test_network.py
    test_delete_account | Success | 338.25 | test_network.py
    test_02_port_fwd_on_non_src_nat | Success | 55.69 | test_network.py
    test_01_port_fwd_on_src_nat | Success | 111.81 | test_network.py
    test_nic_secondaryip_add_remove | Success | 237.87 | test_multipleips_per_nic.py
    login_test_saml_user | Success | 19.37 | test_login.py
    test_assign_and_removal_lb | Success | 148.66 | test_loadbalance.py
    test_02_create_lb_rule_non_nat | Success | 207.67 | test_loadbalance.py
    test_01_create_lb_rule_src_nat | Success | 207.84 | test_loadbalance.py
    test_03_list_snapshots | Success | 0.08 | test_list_ids_parameter.py
    test_02_list_templates | Success | 0.08 | test_list_ids_parameter.py
    test_01_list_volumes | Success | 0.03 | test_list_ids_parameter.py
    test_07_list_default_iso | Success | 0.06 | test_iso.py
    test_05_iso_permissions | Success | 0.06 | test_iso.py
    test_04_extract_Iso | Success | 5.18 | test_iso.py
    test_03_delete_iso | Success | 95.17 | test_iso.py
    test_02_edit_iso | Success | 0.06 | test_iso.py
    test_01_create_iso | Success | 21.08 | test_iso.py
    test_04_rvpc_internallb_haproxy_stats_on_all_interfaces | Success | 501.02 | test_internal_lb.py
    test_03_vpc_internallb_haproxy_stats_on_all_interfaces | Success | 394.52 | test_internal_lb.py
    test_02_internallb_roundrobin_1RVPC_3VM_HTTP_port80 | Success | 979.69 | test_internal_lb.py
    test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 | Success | 788.29 | test_internal_lb.py
    test_dedicateGuestVlanRange | Success | 10.28 | test_guest_vlan_range.py
    test_UpdateConfigParamWithScope | Success | 0.14 | test_global_settings.py
    test_rolepermission_lifecycle_update | Success | 6.22 | test_dynamicroles.py
    test_rolepermission_lifecycle_list | Success | 6.04 | test_dynamicroles.py
    test_rolepermission_lifecycle_delete | Success | 6.04 | test_dynamicroles.py
    test_rolepermission_lifecycle_create | Success | 5.93 | test_dynamicroles.py
    test_rolepermission_lifecycle_concurrent_updates | Success | 6.05 | test_dynamicroles.py
    test_role_lifecycle_update_role_inuse | Success | 5.93 | test_dynamicroles.py
    test_role_lifecycle_update | Success | 5.97 | test_dynamicroles.py
    test_role_lifecycle_list | Success | 5.92 | test_dynamicroles.py
    test_role_lifecycle_delete | Success | 10.94 | test_dynamicroles.py
    test_role_lifecycle_create | Success | 5.97 | test_dynamicroles.py
    test_role_inuse_deletion | Success | 5.88 | test_dynamicroles.py
    test_role_account_acls_multiple_mgmt_servers | Success | 8.13 | test_dynamicroles.py
    test_role_account_acls | Success | 8.53 | test_dynamicroles.py
    test_default_role_deletion | Success | 5.99 | test_dynamicroles.py
    test_04_create_fat_type_disk_offering | Success | 0.07 | test_disk_offerings.py
    test_03_delete_disk_offering | Success | 0.04 | test_disk_offerings.py
    test_02_edit_disk_offering | Success | 0.06 | test_disk_offerings.py
    test_02_create_sparse_type_disk_offering | Success | 0.07 | test_disk_offerings.py
    test_01_create_disk_offering | Success | 0.11 | test_disk_offerings.py
    test_deployvm_userdispersing | Success | 45.77 | test_deploy_vms_with_varied_deploymentplanners.py
    test_deployvm_userconcentrated | Success | 65.92 | test_deploy_vms_with_varied_deploymentplanners.py
    test_deployvm_firstfit | Success | 287.37 | test_deploy_vms_with_varied_deploymentplanners.py
    test_deployvm_userdata_post | Success | 20.43 | test_deploy_vm_with_userdata.py
    test_deployvm_userdata | Success | 136.21 | test_deploy_vm_with_userdata.py
    test_02_deploy_vm_root_resize | Success | 5.87 | test_deploy_vm_root_resize.py
    test_01_deploy_vm_root_resize | Success | 5.87 | test_deploy_vm_root_resize.py
    test_00_deploy_vm_root_resize | Success | 6.03 | test_deploy_vm_root_resize.py
    test_deploy_vm_from_iso | Success | 198.07 | test_deploy_vm_iso.py
    test_DeployVmAntiAffinityGroup | Success | 2050.95 | test_affinity_groups.py
    test_08_resize_volume | Skipped | 10.19 | test_volumes.py
    test_07_resize_fail | Skipped | 10.52 | test_volumes.py
    test_06_copy_template | Skipped | 0.00 | test_templates.py
    test_static_role_account_acls | Skipped | 0.02 | test_staticroles.py
    test_01_scale_vm | Skipped | 66.35 | test_scale_vm.py
    test_01_primary_storage_iscsi | Skipped | 0.04 | test_primary_storage.py
    test_06_copy_iso | Skipped | 0.00 | test_iso.py
    test_deploy_vgpu_enabled_vm | Skipped | 0.01 | test_deploy_vgpu_enabled_vm.py




[GitHub] cloudstack pull request #1861: CLOUDSTACK-9698 Make hardcoded wait timeout ...

Posted by koushik-das <gi...@git.apache.org>.
Github user koushik-das commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/1861#discussion_r94206888
  
    --- Diff: plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java ---
    @@ -839,7 +840,12 @@ private int findRouterEthDeviceIndex(String domrName, String routerIp, String ma
             // when we dynamically plug in a new NIC into virtual router, it may take time to show up in guest OS
             // we use a waiting loop here as a workaround to synchronize activities in systems
             long startTick = System.currentTimeMillis();
    -        while (System.currentTimeMillis() - startTick < 15000) {
    +        long waitTimeoutMillis = 15000;
    +        Long waitTimeoutMillisLong = VmwareManagerImpl.s_vmwareNicHotplugWaitTimeout.value();
    +        if (waitTimeoutMillisLong != null) {
    +            waitTimeoutMillis = waitTimeoutMillisLong;
    +        }
    +        while (System.currentTimeMillis() - startTick < waitTimeoutMillis) {
    --- End diff --
    
    The config value can be directly read using s_vmwareNicHotplugWaitTimeout.value(), no need for the checks.
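    
    For illustration, the simplified read could look roughly like this (sketch only; value()
    falls back to the key's default, so no null check is needed):
    
        // Read the configured timeout directly instead of null-checking it and keeping
        // a local hard-coded fallback.
        final long waitTimeoutMillis = s_vmwareNicHotplugWaitTimeout.value();
        final long startTick = System.currentTimeMillis();
        while (System.currentTimeMillis() - startTick < waitTimeoutMillis) {
            // existing wait-and-poll logic for the hot-plugged NIC
        }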



[GitHub] cloudstack issue #1861: CLOUDSTACK-9698 [VMware] Make hardcoded wait timeou...

Posted by sateesh-chodapuneedi <gi...@git.apache.org>.
Github user sateesh-chodapuneedi commented on the issue:

    https://github.com/apache/cloudstack/pull/1861
  
    Hi @karuturi, this has test results as well as 2 LGTMs. I think this can be merged.



[GitHub] cloudstack pull request #1861: CLOUDSTACK-9698 Make hardcoded wait timeout ...

Posted by koushik-das <gi...@git.apache.org>.
Github user koushik-das commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/1861#discussion_r94206618
  
    --- Diff: plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java ---
    @@ -230,6 +230,7 @@
     import com.cloud.hypervisor.guru.VMwareGuru;
     import com.cloud.hypervisor.vmware.manager.VmwareHostService;
     import com.cloud.hypervisor.vmware.manager.VmwareManager;
    +import com.cloud.hypervisor.vmware.manager.VmwareManagerImpl;
    --- End diff --
    
    Better to define the config parameter in VmwareManager itself.
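    
    For illustration, that could look roughly like this (hypothetical sketch; the field name and
    description are taken from the diff above, and the exact final code may differ):
    
        import org.apache.cloudstack.framework.config.ConfigKey;
    
        // Declaring the key on the VmwareManager interface lets VmwareResource reference it
        // without importing VmwareManagerImpl.
        public interface VmwareManager {
            ConfigKey<Long> s_vmwareNicHotplugWaitTimeout = new ConfigKey<Long>("Advanced", Long.class,
                    "vmware.nic.hotplug.wait.timeout", "15000",
                    "Wait timeout (milli seconds) for hot plugged NIC of VM to be detected by guest OS.", false);
            // ... existing VmwareManager methods ...
        }
    
        // The implementation still publishes the key to the config framework via Configurable:
        //     public ConfigKey<?>[] getConfigKeys() { return new ConfigKey<?>[] {s_vmwareNicHotplugWaitTimeout}; }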

