Posted to issues@cloudstack.apache.org by "ASF subversion and git services (JIRA)" <ji...@apache.org> on 2013/09/05 09:12:51 UTC
[jira] [Commented] (CLOUDSTACK-4327) [Storage Maintenance] SSVM, CPVM and routerVMs are running even after storage entered into maintenance.
[ https://issues.apache.org/jira/browse/CLOUDSTACK-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13758840#comment-13758840 ]
ASF subversion and git services commented on CLOUDSTACK-4327:
-------------------------------------------------------------
Commit 65e85962db462cc9728e204abeaa13bb0e4a9a8f in branch refs/heads/4.2-forward from [~nitinme]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=65e8596 ]
CLOUDSTACK-4327:
Check for all the transition states for Maintenance. Also corrected the isMaintenance function for StoragePoolVO.
Signed-off-by: Nitin Mehta <ni...@citrix.com>
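
The fix broadens the check so that a pool counts as being in maintenance while it is still transitioning into (or failing out of) maintenance, not only once it has fully entered the Maintenance state. A minimal sketch of what the corrected StoragePoolVO check might look like, reconstructed from the commit message rather than copied from the patch:

    // Sketch only: reconstructed from the commit message, not the
    // actual patch; verify the enum constants against the 4.2 source.
    public boolean isInMaintenance() {
        // Treat the transition states as "in maintenance" too, not
        // only the final Maintenance state.
        return status == StoragePoolStatus.PrepareForMaintenance
                || status == StoragePoolStatus.ErrorInMaintenance
                || status == StoragePoolStatus.Maintenance;
    }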
> [Storage Maintenance] SSVM, CPVM and routerVMs are running even after storage entered into maintenance.
> -------------------------------------------------------------------------------------------------------
>
> Key: CLOUDSTACK-4327
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4327
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: Storage Controller
> Affects Versions: 4.2.0
> Environment: commit id # 8df22d1818c120716bea5fce39854da38f61055b
> Reporter: venkata swamybabu budumuru
> Assignee: Nitin Mehta
> Fix For: 4.2.0
>
> Attachments: logs.tgz
>
>
> Steps to reproduce:
> 1. Have the latest CloudStack setup with at least 1 advanced zone.
> 2. The above setup was created with the Marvin framework using APIs.
> 3. During the creation of the zone, I added 2 cluster-wide primary storages:
> - PS0
> - PS1
> mysql> select * from storage_pool where id<3\G
> *************************** 1. row ***************************
> id: 1
> name: PS0
> uuid: 5458182e-bfcb-351c-97ed-e7223bca2b8e
> pool_type: NetworkFilesystem
> port: 2049
> data_center_id: 1
> pod_id: 1
> cluster_id: 1
> used_bytes: 4218878263296
> capacity_bytes: 5902284816384
> host_address: 10.147.28.7
> user_info: NULL
> path: /export/home/swamy/primary.campo.kvm.1.zone
> created: 2013-08-14 07:10:01
> removed: NULL
> update_time: NULL
> status: Maintenance
> storage_provider_name: DefaultPrimary
> scope: CLUSTER
> hypervisor: NULL
> managed: 0
> capacity_iops: NULL
> *************************** 2. row ***************************
> id: 2
> name: PS1
> uuid: 94634fe1-55f7-3fa8-aad9-5adc25246072
> pool_type: NetworkFilesystem
> port: 2049
> data_center_id: 1
> pod_id: 1
> cluster_id: 1
> used_bytes: 4217960071168
> capacity_bytes: 5902284816384
> host_address: 10.147.28.7
> user_info: NULL
> path: /export/home/swamy/primary.campo.kvm.2.zone
> created: 2013-08-14 07:10:02
> removed: NULL
> update_time: NULL
> status: Maintenance
> storage_provider_name: DefaultPrimary
> scope: CLUSTER
> hypervisor: NULL
> managed: 0
> capacity_iops: NULL
> 2 rows in set (0.00 sec)
> Observations:
> (i) The SSVM and CPVM volumes were created on pool_id=1.
> 4. The zone got set up without any issues.
> 5. Added the following zone-wide primary storages:
> - test1
> - test2
> mysql> select * from storage_pool where id>7\G
> *************************** 1. row ***************************
> id: 8
> name: test1
> uuid: 4e612995-3cb1-344e-ba19-3992e3d37d3f
> pool_type: NetworkFilesystem
> port: 2049
> data_center_id: 1
> pod_id: NULL
> cluster_id: NULL
> used_bytes: 4214658203648
> capacity_bytes: 5902284816384
> host_address: 10.147.28.7
> user_info: NULL
> path: /export/home/swamy/test1
> created: 2013-08-14 09:49:56
> removed: NULL
> update_time: NULL
> status: Up
> storage_provider_name: DefaultPrimary
> scope: ZONE
> hypervisor: KVM
> managed: 0
> capacity_iops: NULL
> *************************** 2. row ***************************
> id: 9
> name: test2
> uuid: 43a95e23-1ad6-30a9-9903-f68231dacec5
> pool_type: NetworkFilesystem
> port: 2049
> data_center_id: 1
> pod_id: NULL
> cluster_id: NULL
> used_bytes: 4214658793472
> capacity_bytes: 5902284816384
> host_address: 10.147.28.7
> user_info: NULL
> path: /export/home/swamy/test2
> created: 2013-08-14 09:50:12
> removed: NULL
> update_time: NULL
> status: Up
> storage_provider_name: DefaultPrimary
> scope: ZONE
> hypervisor: KVM
> managed: 0
> capacity_iops: NULL
> 6. Created a non-ROOT domain user and deployed VMs.
> 7. Created 5 volumes as the above user (volume ids: 23, 24, 25, 26, 27).
> 8. Tried to attach volumes 23 & 24 to the above deployed VM.
> Observations:
> (ii) User VMs came up on pool_id=1 and router VMs came up on pool_id=2.
> (iii) Both DATADISKs (23 & 24) got attached, but they were allocated on pool_id=1. The allocator never picked the zone-wide storages; it looks like the only way to get them picked is through storage tags (see the sketch below).
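> To illustrate the suspicion in (iii): pool allocators can filter candidate pools by the storage tags on a disk offering, so zone-wide pools would never qualify unless they carry the requested tags. A minimal sketch of that kind of filter, with stand-in names rather than the actual 4.2 allocator code:
>
>     import java.util.ArrayList;
>     import java.util.List;
>     import java.util.Set;
>
>     // Hypothetical sketch of tag-based pool filtering; "Pool" stands
>     // in for the real storage pool type.
>     class TagFilterSketch {
>         static class Pool {
>             final long id;
>             final Set<String> tags;
>             Pool(long id, Set<String> tags) { this.id = id; this.tags = tags; }
>         }
>
>         // Keep only pools that carry every tag the disk offering asks
>         // for; pools without the requested tags are skipped entirely.
>         static List<Pool> filterByTags(List<Pool> pools, Set<String> requiredTags) {
>             List<Pool> matches = new ArrayList<Pool>();
>             for (Pool pool : pools) {
>                 if (pool.tags.containsAll(requiredTags)) {
>                     matches.add(pool);
>                 }
>             }
>             return matches;
>         }
>     }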
> 9. Now placed the storages (PS0, PS1) in maintenance mode.
> Observations:
> ===========
> a. Both PS0 and PS1 went into maintenance mode successfully, but the SSVM, CPVM and router VMs were still running. Only the user VMs went into the Stopped state.
> b. Since my network.gc.* settings are set to 60 seconds, it was the network garbage collector that eventually stopped my router VMs.
> c. job-63 and job-64 are the jobs for the prepareMaintenanceMode commands.
> Expected Result:
> ============
> - When a storage pool reports that it is in maintenance mode, no VMs should still be in the Running state on it (see the sketch below).
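> To make the expectation concrete: the prepareMaintenance flow should enumerate every VM that has a volume on the pool, system VMs included, and stop (or migrate) each one before the pool is marked Maintenance. A rough sketch of that expectation; every identifier below is a stand-in, not the actual CloudStack API:
>
>     import java.util.List;
>
>     // Hypothetical sketch of the expected flow; the abstract methods
>     // stand in for whatever the real automation layer provides.
>     abstract class MaintenanceSketch {
>         abstract List<Long> listVmIdsOnPool(long poolId); // all VMs with a volume on the pool
>         abstract void stopVm(long vmId);                  // stop (or migrate) one VM
>         abstract void markPoolMaintenance(long poolId);   // flip the pool status
>
>         void prepareForMaintenance(long poolId) {
>             // System VMs (SSVM, CPVM) and router VMs must be included
>             // here, not only user VMs.
>             for (long vmId : listVmIdsOnPool(poolId)) {
>                 stopVm(vmId);
>             }
>             markPoolMaintenance(poolId);
>         }
>     }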
> Attaching all the required logs along with the DB dump to the bug.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira