Posted to dev@cloudstack.apache.org by nitt10prashant <gi...@git.apache.org> on 2015/08/18 13:24:56 UTC

[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

GitHub user nitt10prashant opened a pull request:

    https://github.com/apache/cloudstack/pull/713

    CLOUDSTACK-8745 : verify usage after root disk migration

    put storage in maintenance mode and start ha vm and check usage ... === TestName: test_ha_with_storage_maintenance | Status : SUCCESS ===
    ok
    
    ----------------------------------------------------------------------
    Ran 1 test in 842.294s
    
    OK

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/nitt10prashant/cloudstack pool_maint

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/cloudstack/pull/713.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #713
    
----
commit d276a3579f7f055b5431575a5bb498d96dfc9f45
Author: nitt10prashant <ni...@gmail.com>
Date:   2015-08-18T11:23:54Z

    CLOUDSTACK-8745 : verify usage after root disk migration

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/cloudstack/pull/713



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by nitt10prashant <gi...@git.apache.org>.
Github user nitt10prashant commented on the pull request:

    https://github.com/apache/cloudstack/pull/713#issuecomment-191098254
  
    sure 



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by nitt10prashant <gi...@git.apache.org>.
Github user nitt10prashant commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37379748
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +#!/usr/bin/env python
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase
    +from marvin.cloudstackAPI import (enableStorageMaintenance,
    +                                  cancelStorageMaintenance
    +                                  )
    +from marvin.lib.utils import (cleanup_resources,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             VirtualMachine,
    +                             ServiceOffering,
    +                             Cluster,
    +                             StoragePool,
    +                             Volume)
    +from marvin.lib.common import (get_zone,
    +                               get_domain,
    +                               get_template,
    +                               list_hosts
    +                               )
    +from marvin.codes import PASS
    +
    +
    +def maintenance(self, storageid):
    +    """enables maintenance mode of a Storage pool"""
    +
    +    cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.enableStorageMaintenance(cmd)
    +
    +
    +def cancelmaintenance(self, storageid):
    +    """cancel maintenance mode of a Storage pool"""
    +
    +    cmd = cancelStorageMaintenance.cancelStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.cancelStorageMaintenance(cmd)
    +
    +
    +class testHaPoolMaintenance(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        try:
    +            cls._cleanup = []
    +            cls.testClient = super(
    +                testHaPoolMaintenance,
    +                cls).getClsTestClient()
    +            cls.api_client = cls.testClient.getApiClient()
    +            cls.services = cls.testClient.getParsedTestDataConfig()
    +            # Get Domain, Zone, Template
    +            cls.domain = get_domain(cls.api_client)
    +            cls.zone = get_zone(
    +                cls.api_client,
    +                cls.testClient.getZoneForTests())
    +            cls.template = get_template(
    +                cls.api_client,
    +                cls.zone.id,
    +                cls.services["ostype"]
    +            )
    +            cls.hypervisor = cls.testClient.getHypervisorInfo()
    +            cls.services['mode'] = cls.zone.networktype
    +            cls.services["virtual_machine"]["zoneid"] = cls.zone.id
    +            cls.services["virtual_machine"]["template"] = cls.template.id
    +            cls.clusterWithSufficientPool = None
    +            clusters = Cluster.list(cls.api_client, zoneid=cls.zone.id)
    +
    +            if not validateList(clusters)[0]:
    +
    +                cls.debug(
    +                    "check list cluster response for zone id %s" %
    +                    cls.zone.id)
    +
    +            for cluster in clusters:
    +                cls.pool = StoragePool.list(cls.api_client,
    +                                            clusterid=cluster.id,
    +                                            keyword="NetworkFilesystem"
    +                                            )
    +
    +                if not validateList(cls.pool)[0]:
    +
    +                    cls.debug(
    +                        "check list cluster response for zone id %s" %
    +                        cls.zone.id)
    +
    +                if len(cls.pool) >= 2:
    +                    cls.clusterWithSufficientPool = cluster
    +                    break
    +            if not cls.clusterWithSufficientPool:
    +                return
    +
    +            cls.services["service_offerings"][
    +                "tiny"]["offerha"] = "True"
    +
    +            cls.services_off = ServiceOffering.create(
    +                                  cls.api_client,
    +                                  cls.services["service_offerings"]["tiny"])
    +            cls._cleanup.append(cls.services_off)
    +
    +        except Exception as e:
    +            cls.tearDownClass()
    +            raise Exception("Warning: Exception in setup : %s" % e)
    +        return
    +
    +    def setUp(self):
    +
    +        self.apiClient = self.testClient.getApiClient()
    +        self.dbclient = self.testClient.getDbConnection()
    +        self.cleanup = []
    +        if not self.clusterWithSufficientPool:
    +            self.skipTest(
    +                "sufficient storage not available in any cluster for zone %s" %
    +                self.zone.id)
    +        self.account = Account.create(
    +            self.api_client,
    +            self.services["account"],
    +            domainid=self.domain.id
    +        )
    +        self.cleanup.append(self.account)
    +
    +    def tearDown(self):
    +        # Clean up, terminate the created resources
    +        cancelmaintenance(self, storageid=self.storageid[0][0])
    +        cleanup_resources(self.apiClient, self.cleanup)
    +        return
    +
    +    @classmethod
    +    def tearDownClass(cls):
    +        try:
    +            cleanup_resources(cls.api_client, cls._cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +
    +        return
    +
    +    @attr(tags=["advanced", "cl", "advancedns", "sg",
    +                "basic", "eip", "simulator", "multihost"])
    +    def test_ha_with_storage_maintenance(self):
    +        """put storage in maintenance mode and start ha vm and check usage"""
    +        # Steps
    +        # 1. Create a Compute service offering with the 'Offer HA' option
    +        # selected.
    +        # 2. Create a Guest VM with the compute service offering created above.
    +        # 3. Put the primary storage pool into maintenance mode.
    +        # 4. The VM should go into the Stopped state.
    +        # 5. Start the VM; it should come up on another storage pool.
    +        # 6. Check that usage events are generated for the root disk.
    +
    +        host = list_hosts(
    +            self.api_client,
    +            clusterid=self.clusterWithSufficientPool.id)
    +        self.assertEqual(validateList(host)[0],
    +                         PASS,
    +                         "check list host response for cluster id %s"
    +                         % self.clusterWithSufficientPool.id)
    +
    +        self.virtual_machine_with_ha = VirtualMachine.create(
    +            self.api_client,
    +            self.services["virtual_machine"],
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.services_off.id,
    +            hostid=host[0].id
    +        )
    +
    --- End diff --
    
    I think it should be in the usage test cases; if not, we may need to enhance the migrate-volume test cases.
    If I add a data volume, I will have to skip this test for LXC (a data volume there needs RBD storage), which I can avoid since the issue is only with the root volume.
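    The root-disk usage check discussed here can be sketched as a simple filter over usage-event records. This is purely illustrative: the record fields ("type", "resource_name") and the "ROOT-"/"DATA-" prefixes are hypothetical stand-ins for what the test would read from CloudStack's usage tables, not the actual schema.

    ```python
    # Illustrative sketch only: the kind of root-disk usage check performed
    # after the HA VM restarts on another storage pool. The record layout is
    # a hypothetical stand-in, not the real CloudStack usage_event schema.

    def root_volume_events(events):
        """Return usage events that refer to a root volume."""
        return [
            e for e in events
            if e["type"].startswith("VOLUME.")
            and e["resource_name"].startswith("ROOT-")
        ]

    sample = [
        {"type": "VOLUME.CREATE", "resource_name": "ROOT-42"},
        {"type": "VOLUME.CREATE", "resource_name": "DATA-42"},
        {"type": "VM.START", "resource_name": "i-2-42-VM"},
    ]

    matches = root_volume_events(sample)
    print(len(matches))  # only the ROOT-42 volume event matches
    ```

    Filtering on the root volume alone mirrors the point made above: the bug affected only root-disk usage, so the test need not create a data volume.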



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by swill <gi...@git.apache.org>.
Github user swill commented on the pull request:

    https://github.com/apache/cloudstack/pull/713#issuecomment-213407542
  
    I need one more LGTM code review on this one.  I will try to test this in my lab today.  Thanks...



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by nitt10prashant <gi...@git.apache.org>.
Github user nitt10prashant commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37379426
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@ (diff context identical to the first excerpt in this thread; omitted)
    --- End diff --
    
    The issue was only with the root volume; usage was getting generated for the data disk.




[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by swill <gi...@git.apache.org>.
Github user swill commented on the pull request:

    https://github.com/apache/cloudstack/pull/713#issuecomment-213565364
  
    I think this one is ready unless anyone has any final words...



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by bhaisaab <gi...@git.apache.org>.
Github user bhaisaab commented on the pull request:

    https://github.com/apache/cloudstack/pull/713#issuecomment-175665150
  
    @nitt10prashant please rebase and squash into a single commit
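    The squash requested above can be done non-interactively with `git reset --soft` (on the real branch one would typically run `git rebase -i` against upstream master instead). The sketch below demonstrates the mechanics in a throwaway repository; all paths and messages are illustrative.

    ```shell
    # Demonstration in a throwaway repo: squash the last two commits into one,
    # the non-interactive equivalent of marking commits "squash" in rebase -i.
    set -e
    repo=$(mktemp -d)
    cd "$repo"
    git init -q
    git -c user.name=demo -c user.email=demo@example.com \
        commit -q --allow-empty -m "initial commit"
    git -c user.name=demo -c user.email=demo@example.com \
        commit -q --allow-empty -m "CLOUDSTACK-8745 : verify usage after root disk migration"
    git -c user.name=demo -c user.email=demo@example.com \
        commit -q --allow-empty -m "fixup: address review comments"
    # Move HEAD back two commits but keep the combined tree staged, then recommit
    git reset -q --soft HEAD~2
    git -c user.name=demo -c user.email=demo@example.com \
        commit -q --allow-empty -m "CLOUDSTACK-8745 : verify usage after root disk migration"
    count=$(git rev-list --count HEAD)
    echo "$count"  # two commits remain: initial + the squashed one
    ```

    After the squash, a force push (`git push -f origin pool_maint`) would update the PR branch.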



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by nitt10prashant <gi...@git.apache.org>.
Github user nitt10prashant commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37380762
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    (diff context identical to the first excerpt in this thread; omitted)
    +        self.account = Account.create(
    --- End diff --
    
    sure




[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by nitt10prashant <gi...@git.apache.org>.
Github user nitt10prashant commented on the pull request:

    https://github.com/apache/cloudstack/pull/713#issuecomment-132476722
  
    Test result when enough storage is not available to perform the test:
    
    put storage in maintenance mode and start ha vm and check usage ... SKIP: sufficient storage not available in any cluster for zone 90d85d89-01c4-4a91-b76e-eedf947b40f6
    
    ----------------------------------------------------------------------
    Ran 1 test in 4.059s
    
    OK (SKIP=1)
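    The SKIP above comes from the setup's cluster-selection step: pick a cluster with at least two NetworkFilesystem primary storage pools (so the root disk has somewhere to migrate to), otherwise skip the test. A minimal, self-contained sketch of that decision; the dict shape is a hypothetical stand-in for Marvin's `StoragePool.list()` response:

    ```python
    # Sketch of the cluster-selection logic behind the SKIP result above:
    # the test needs a cluster with at least two NFS primary storage pools.
    # The pool-record shape is a hypothetical stand-in for Marvin's listing.

    def cluster_with_sufficient_pools(pools_by_cluster, minimum=2):
        """Return the first cluster id with at least `minimum` NFS pools, else None."""
        for cluster_id, pools in pools_by_cluster.items():
            nfs_pools = [p for p in pools if p["type"] == "NetworkFilesystem"]
            if len(nfs_pools) >= minimum:
                return cluster_id
        return None

    sample = {
        "cluster-a": [{"type": "NetworkFilesystem"}],
        "cluster-b": [{"type": "NetworkFilesystem"}, {"type": "NetworkFilesystem"}],
    }
    print(cluster_with_sufficient_pools(sample))  # cluster-b qualifies
    ```

    When no cluster qualifies, `setUp` calls `skipTest`, which is what produced the `SKIP: sufficient storage not available` line in the run above.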




[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by ksowmya <gi...@git.apache.org>.
Github user ksowmya commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37379405
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +#!/usr/bin/env python
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase
    +from marvin.cloudstackAPI import (enableStorageMaintenance,
    +                                  cancelStorageMaintenance
    +                                  )
    +from marvin.lib.utils import (cleanup_resources,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             VirtualMachine,
    +                             ServiceOffering,
    +                             Cluster,
    +                             StoragePool,
    +                             Volume)
    +from marvin.lib.common import (get_zone,
    +                               get_domain,
    +                               get_template,
    +                               list_hosts
    +                               )
    +from marvin.codes import PASS
    +
    +
    +def maintenance(self, storageid):
    +    """enables maintenance mode of a Storage pool"""
    +
    +    cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.enableStorageMaintenance(cmd)
    +
    +
    +def cancelmaintenance(self, storageid):
    +    """cancel maintenance mode of a Storage pool"""
    +
    +    cmd = cancelStorageMaintenance.cancelStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.cancelStorageMaintenance(cmd)
    +
    +
    +class testHaPoolMaintenance(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        try:
    +            cls._cleanup = []
    +            cls.testClient = super(
    +                testHaPoolMaintenance,
    +                cls).getClsTestClient()
    +            cls.api_client = cls.testClient.getApiClient()
    +            cls.services = cls.testClient.getParsedTestDataConfig()
    +            # Get Domain, Zone, Template
    +            cls.domain = get_domain(cls.api_client)
    +            cls.zone = get_zone(
    +                cls.api_client,
    +                cls.testClient.getZoneForTests())
    +            cls.template = get_template(
    +                cls.api_client,
    +                cls.zone.id,
    +                cls.services["ostype"]
    +            )
    +            cls.hypervisor = cls.testClient.getHypervisorInfo()
    +            cls.services['mode'] = cls.zone.networktype
    +            cls.services["virtual_machine"]["zoneid"] = cls.zone.id
    +            cls.services["virtual_machine"]["template"] = cls.template.id
    +            cls.clusterWithSufficientPool = None
    +            clusters = Cluster.list(cls.api_client, zoneid=cls.zone.id)
    +
    +            if not validateList(clusters)[0]:
    +
    +                cls.debug(
    +                    "check list cluster response for zone id %s" %
    +                    cls.zone.id)
    +
    +            for cluster in clusters:
    +                cls.pool = StoragePool.list(cls.api_client,
    +                                            clusterid=cluster.id,
    +                                            keyword="NetworkFilesystem"
    +                                            )
    +
    +                if not validateList(cls.pool)[0]:
    +
    +                    cls.debug(
    +                        "check list storage pool response for cluster id %s" %
    +                        cluster.id)
    +
    +                if len(cls.pool) >= 2:
    +                    cls.clusterWithSufficientPool = cluster
    +                    break
    +            if not cls.clusterWithSufficientPool:
    +                return
    +
    +            cls.services["service_offerings"][
    +                "tiny"]["offerha"] = "True"
    +
    +            cls.services_off = ServiceOffering.create(
    +                                  cls.api_client,
    +                                  cls.services["service_offerings"]["tiny"])
    +            cls._cleanup.append(cls.services_off)
    +
    +        except Exception as e:
    +            cls.tearDownClass()
    +            raise Exception("Warning: Exception in setup : %s" % e)
    +        return
    +
    +    def setUp(self):
    +
    +        self.apiClient = self.testClient.getApiClient()
    +        self.dbclient = self.testClient.getDbConnection()
    +        self.cleanup = []
    +        if not self.clusterWithSufficientPool:
    +            self.skipTest(
    +                "sufficient storage not available in any cluster for zone %s" %
    +                self.zone.id)
    +        self.account = Account.create(
    +            self.api_client,
    +            self.services["account"],
    +            domainid=self.domain.id
    +        )
    +        self.cleanup.append(self.account)
    +
    +    def tearDown(self):
    +        # Clean up, terminate the created resources
    +        if getattr(self, "storageid", None):
    +            cancelmaintenance(self, storageid=self.storageid[0][0])
    +        cleanup_resources(self.apiClient, self.cleanup)
    +        return
    +
    +    @classmethod
    +    def tearDownClass(cls):
    +        try:
    +            cleanup_resources(cls.api_client, cls._cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +
    +        return
    +
    +    @attr(tags=["advanced", "cl", "advancedns", "sg",
    +                "basic", "eip", "simulator", "multihost"])
    +    def test_ha_with_storage_maintenance(self):
    +        """put storage in maintenance mode and start ha vm and check usage"""
    +        # Steps
    +        # 1. Create a Compute service offering with the 'Offer HA' option
    +        # selected.
    +        # 2. Create a Guest VM with the compute service offering created above.
    +        # 3. put the primary storage pool into maintenance mode
    +        # 4. the VM should go into the Stopped state
    +        # 5. start the VM; it should come up on another storage pool
    +        # 6. check that usage events are generated for the root disk
    +
    +        host = list_hosts(
    +            self.api_client,
    +            clusterid=self.clusterWithSufficientPool.id)
    +        self.assertEqual(validateList(host)[0],
    +                         PASS,
    +                         "check list host response for cluster id %s"
    +                         % self.clusterWithSufficientPool.id)
    +
    +        self.virtual_machine_with_ha = VirtualMachine.create(
    +            self.api_client,
    +            self.services["virtual_machine"],
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.services_off.id,
    +            hostid=host[0].id
    +        )
    +
    +        vms = VirtualMachine.list(
    +            self.api_client,
    +            id=self.virtual_machine_with_ha.id,
    +            listall=True,
    +        )
    +
    +        self.assertEqual(
    +            validateList(vms)[0],
    +            PASS,
    +            "List VMs should return valid response for deployed VM"
    +        )
    +
    +        vm = vms[0]
    +
    +        self.debug("Deployed VM on host: %s" % vm.hostid)
    +
    +        # Put storage in maintenance  mode
    +
    +        self.list_root_volume = Volume.list(self.api_client,
    +                                            virtualmachineid=vm.id,
    +                                            type='ROOT',
    +                                            account=self.account.name,
    +                                            domainid=self.account.domainid)
    +
    +        self.assertEqual(validateList(self.list_root_volume)[0],
    +                         PASS,
    +                         "check list volume response for vm id %s" % vm.id)
    +
    +        self.pool_id = self.dbclient.execute(
    +            "select pool_id from volumes where uuid = '%s';"
    +            % self.list_root_volume[0].id)
    +        self.storageid = self.dbclient.execute(
    +            "select uuid from storage_pool where id = '%s';"
    +            % self.pool_id[0][0])
    +
    +        self.pool1 = maintenance(self, storageid=self.storageid[0][0])
    +
    +        self.virtual_machine_with_ha.start(self.api_client)
    +        self.events = self.dbclient.execute(
    +            "select type from usage_event where resource_name='%s';"
    +            % self.list_root_volume[0].name
    +        )
    +        self.assertEqual(len(self.events),
    +                         3,
    +                         "check the usage event table for root disk %s"
    +                         % self.list_root_volume[0].name
    +                         )
    --- End diff --
    
    Do you think there's a way to verify the event names as well (VOLUME.DELETE & VOLUME.CREATE)? Since you're already querying the usage_event table, I am guessing the event names should be there too. Or am I missing something else?
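    For what it's worth, a check along those lines could look something like this. This is only a sketch: the helper name is made up, and the expected event types and their count are assumptions drawn from this thread, not confirmed against the usage_event schema.
    
    ```python
    # Hypothetical helper illustrating the suggested check. `rows` is shaped
    # like the result of dbclient.execute("select type from usage_event ..."),
    # i.e. a list of 1-tuples. The expected types assume deploy creates the
    # root volume, and migration off the pool in maintenance deletes it and
    # creates a new one.
    def verify_root_disk_usage_events(rows,
                                      expected=("VOLUME.CREATE",
                                                "VOLUME.DELETE",
                                                "VOLUME.CREATE")):
        types = [r[0] for r in rows]
        # Compare as multisets so the check does not depend on row ordering.
        assert sorted(types) == sorted(expected), \
            "unexpected usage events for root disk: %s" % types
        return types
    
    
    # Usage with rows shaped like a DB cursor result:
    events = verify_root_disk_usage_events(
        [("VOLUME.CREATE",), ("VOLUME.DELETE",), ("VOLUME.CREATE",)])
    ```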


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by nitt10prashant <gi...@git.apache.org>.
Github user nitt10prashant commented on the pull request:

    https://github.com/apache/cloudstack/pull/713#issuecomment-213278268
  
    @bhaisaab rebased and merged into a single commit. @swill @koushik-das, can you please look into this?



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by swill <gi...@git.apache.org>.
Github user swill commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r60792255
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +#!/usr/bin/env python
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase
    +from marvin.cloudstackAPI import (enableStorageMaintenance,
    +                                  cancelStorageMaintenance
    +                                  )
    +from marvin.lib.utils import (cleanup_resources,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             VirtualMachine,
    +                             ServiceOffering,
    +                             Cluster,
    +                             StoragePool,
    +                             Volume)
    +from marvin.lib.common import (get_zone,
    +                               get_domain,
    +                               get_template,
    +                               list_hosts
    +                               )
    +from marvin.codes import PASS
    +
    +
    +def maintenance(self, storageid):
    +    """enables maintenance mode of a Storage pool"""
    +
    +    cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.enableStorageMaintenance(cmd)
    +
    +
    +def cancelmaintenance(self, storageid):
    +    """cancel maintenance mode of a Storage pool"""
    +
    +    cmd = cancelStorageMaintenance.cancelStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.cancelStorageMaintenance(cmd)
    +
    +
    +class testHaPoolMaintenance(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        try:
    +            cls._cleanup = []
    +            cls.testClient = super(
    +                testHaPoolMaintenance,
    +                cls).getClsTestClient()
    +            cls.api_client = cls.testClient.getApiClient()
    +            cls.services = cls.testClient.getParsedTestDataConfig()
    +            # Get Domain, Zone, Template
    +            cls.domain = get_domain(cls.api_client)
    +            cls.zone = get_zone(
    +                cls.api_client,
    +                cls.testClient.getZoneForTests())
    +            cls.template = get_template(
    +                cls.api_client,
    +                cls.zone.id,
    +                cls.services["ostype"]
    +            )
    +            cls.hypervisor = cls.testClient.getHypervisorInfo()
    +            cls.services['mode'] = cls.zone.networktype
    +            cls.services["virtual_machine"]["zoneid"] = cls.zone.id
    +            cls.services["virtual_machine"]["template"] = cls.template.id
    +            cls.clusterWithSufficientPool = None
    +            clusters = Cluster.list(cls.api_client, zoneid=cls.zone.id)
    +
    +            if not validateList(clusters)[0]:
    +
    +                cls.debug(
    +                    "check list cluster response for zone id %s" %
    +                    cls.zone.id)
    +
    +            for cluster in clusters:
    +                cls.pool = StoragePool.list(cls.api_client,
    +                                            clusterid=cluster.id,
    +                                            keyword="NetworkFilesystem"
    +                                            )
    +
    +                if not validateList(cls.pool)[0]:
    +
    +                    cls.debug(
    +                        "check list storage pool response for cluster id %s" %
    +                        cluster.id)
    +
    +                if len(cls.pool) >= 2:
    +                    cls.clusterWithSufficientPool = cluster
    +                    break
    +            if not cls.clusterWithSufficientPool:
    +                return
    +
    +            cls.services["service_offerings"][
    +                "tiny"]["offerha"] = "True"
    +
    +            cls.services_off = ServiceOffering.create(
    +                                  cls.api_client,
    +                                  cls.services["service_offerings"]["tiny"])
    +            cls._cleanup.append(cls.services_off)
    +
    +        except Exception as e:
    +            cls.tearDownClass()
    +            raise Exception("Warning: Exception in setup : %s" % e)
    +        return
    +
    +    def setUp(self):
    +
    +        self.apiClient = self.testClient.getApiClient()
    +        self.dbclient = self.testClient.getDbConnection()
    +        self.cleanup = []
    +        if not self.clusterWithSufficientPool:
    +            self.skipTest(
    +                "sufficient storage not available in any cluster for zone %s" %
    +                self.zone.id)
    +        self.account = Account.create(
    --- End diff --
    
    This is the only case I have tested so far.  :)
    ```
    # ./run_marvin_single_tests.sh /data/shared/marvin/mct-zone3-kvm3-kvm4.cfg 
    + marvinCfg=/data/shared/marvin/mct-zone3-kvm3-kvm4.cfg
    + '[' -z /data/shared/marvin/mct-zone3-kvm3-kvm4.cfg ']'
    + cd /data/git/cs2/cloudstack/test/integration
    + nosetests --with-marvin --marvin-config=/data/shared/marvin/mct-zone3-kvm3-kvm4.cfg -s -a tags=advanced component/maint/test_ha_pool_maintenance.py
    
    ==== Marvin Init Started ====
    
    === Marvin Parse Config Successful ===
    
    === Marvin Setting TestData Successful===
    
    ==== Log Folder Path: /tmp//MarvinLogs//Apr_22_2016_21_33_55_KPLSCU. All logs will be available here ====
    
    === Marvin Init Logging Successful===
    
    ==== Marvin Init Successful ====
    ===final results are now copied to: /tmp//MarvinLogs/test_ha_pool_maintenance_74DSU4===
    [root@cs2 cloudstack]# cat /tmp/MarvinLogs/test_ha_pool_maintenance_74DSU4/results.txt 
    put storage in maintenance mode and start ha vm and check usage ... SKIP: sufficient storage not available in any cluster for zone 47059670-62f8-46a4-9874-2a845f9d1b19
    
    ----------------------------------------------------------------------
    Ran 1 test in 0.227s
    
    OK (SKIP=1)
    ```



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by nitt10prashant <gi...@git.apache.org>.
Github user nitt10prashant commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37380740
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +#!/usr/bin/env python
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase
    +from marvin.cloudstackAPI import (enableStorageMaintenance,
    +                                  cancelStorageMaintenance
    +                                  )
    +from marvin.lib.utils import (cleanup_resources,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             VirtualMachine,
    +                             ServiceOffering,
    +                             Cluster,
    +                             StoragePool,
    +                             Volume)
    +from marvin.lib.common import (get_zone,
    +                               get_domain,
    +                               get_template,
    +                               list_hosts
    +                               )
    +from marvin.codes import PASS
    +
    +
    +def maintenance(self, storageid):
    +    """enables maintenance mode of a Storage pool"""
    +
    +    cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.enableStorageMaintenance(cmd)
    +
    +
    +def cancelmaintenance(self, storageid):
    +    """cancel maintenance mode of a Storage pool"""
    +
    +    cmd = cancelStorageMaintenance.cancelStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.cancelStorageMaintenance(cmd)
    +
    +
    +class testHaPoolMaintenance(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        try:
    +            cls._cleanup = []
    +            cls.testClient = super(
    +                testHaPoolMaintenance,
    +                cls).getClsTestClient()
    +            cls.api_client = cls.testClient.getApiClient()
    +            cls.services = cls.testClient.getParsedTestDataConfig()
    +            # Get Domain, Zone, Template
    +            cls.domain = get_domain(cls.api_client)
    +            cls.zone = get_zone(
    +                cls.api_client,
    +                cls.testClient.getZoneForTests())
    +            cls.template = get_template(
    +                cls.api_client,
    +                cls.zone.id,
    +                cls.services["ostype"]
    +            )
    +            cls.hypervisor = cls.testClient.getHypervisorInfo()
    +            cls.services['mode'] = cls.zone.networktype
    +            cls.services["virtual_machine"]["zoneid"] = cls.zone.id
    +            cls.services["virtual_machine"]["template"] = cls.template.id
    +            cls.clusterWithSufficientPool = None
    +            clusters = Cluster.list(cls.api_client, zoneid=cls.zone.id)
    +
    +            if not validateList(clusters)[0]:
    +
    +                cls.debug(
    +                    "check list cluster response for zone id %s" %
    +                    cls.zone.id)
    +
    +            for cluster in clusters:
    +                cls.pool = StoragePool.list(cls.api_client,
    +                                            clusterid=cluster.id,
    +                                            keyword="NetworkFilesystem"
    +                                            )
    +
    +                if not validateList(cls.pool)[0]:
    +
    +                    cls.debug(
    +                        "check list storage pool response for cluster id %s" %
    +                        cluster.id)
    +
    +                if len(cls.pool) >= 2:
    +                    cls.clusterWithSufficientPool = cluster
    +                    break
    +            if not cls.clusterWithSufficientPool:
    +                return
    +
    +            cls.services["service_offerings"][
    +                "tiny"]["offerha"] = "True"
    +
    +            cls.services_off = ServiceOffering.create(
    +                                  cls.api_client,
    +                                  cls.services["service_offerings"]["tiny"])
    +            cls._cleanup.append(cls.services_off)
    +
    +        except Exception as e:
    +            cls.tearDownClass()
    +            raise Exception("Warning: Exception in setup : %s" % e)
    +        return
    +
    +    def setUp(self):
    +
    +        self.apiClient = self.testClient.getApiClient()
    +        self.dbclient = self.testClient.getDbConnection()
    +        self.cleanup = []
    +        if not self.clusterWithSufficientPool:
    +            self.skipTest(
    +                "sufficient storage not available in any cluster for zone %s" %
    +                self.zone.id)
    +        self.account = Account.create(
    +            self.api_client,
    +            self.services["account"],
    +            domainid=self.domain.id
    +        )
    +        self.cleanup.append(self.account)
    +
    +    def tearDown(self):
    +        # Clean up, terminate the created resources
    +        if getattr(self, "storageid", None):
    +            cancelmaintenance(self, storageid=self.storageid[0][0])
    +        cleanup_resources(self.apiClient, self.cleanup)
    +        return
    +
    +    @classmethod
    +    def tearDownClass(cls):
    +        try:
    +            cleanup_resources(cls.api_client, cls._cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +
    +        return
    +
    +    @attr(tags=["advanced", "cl", "advancedns", "sg",
    +                "basic", "eip", "simulator", "multihost"])
    +    def test_ha_with_storage_maintenance(self):
    +        """put storage in maintenance mode and start ha vm and check usage"""
    +        # Steps
    +        # 1. Create a Compute service offering with the 'Offer HA' option
    +        # selected.
    +        # 2. Create a Guest VM with the compute service offering created above.
    +        # 3. put the primary storage pool into maintenance mode
    +        # 4. the VM should go into the Stopped state
    +        # 5. start the VM; it should come up on another storage pool
    +        # 6. check that usage events are generated for the root disk
    +
    +        host = list_hosts(
    +            self.api_client,
    +            clusterid=self.clusterWithSufficientPool.id)
    +        self.assertEqual(validateList(host)[0],
    +                         PASS,
    +                         "check list host response for cluster id %s"
    +                         % self.clusterWithSufficientPool.id)
    +
    +        self.virtual_machine_with_ha = VirtualMachine.create(
    +            self.api_client,
    +            self.services["virtual_machine"],
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.services_off.id,
    +            hostid=host[0].id
    +        )
    +
    +        vms = VirtualMachine.list(
    +            self.api_client,
    +            id=self.virtual_machine_with_ha.id,
    +            listall=True,
    +        )
    +
    +        self.assertEqual(
    +            validateList(vms)[0],
    +            PASS,
    +            "List VMs should return valid response for deployed VM"
    +        )
    +
    +        vm = vms[0]
    +
    +        self.debug("Deployed VM on host: %s" % vm.hostid)
    +
    +        # Put storage in maintenance  mode
    +
    +        self.list_root_volume = Volume.list(self.api_client,
    +                                            virtualmachineid=vm.id,
    +                                            type='ROOT',
    +                                            account=self.account.name,
    +                                            domainid=self.account.domainid)
    +
    +        self.assertEqual(validateList(self.list_root_volume)[0],
    +                         PASS,
    +                         "check list volume response for vm id %s" % vm.id)
    +
    +        self.pool_id = self.dbclient.execute(
    +            "select pool_id from volumes where uuid = '%s';"
    +            % self.list_root_volume[0].id)
    +        self.storageid = self.dbclient.execute(
    +            "select uuid from storage_pool where id = '%s';"
    +            % self.pool_id[0][0])
    +
    +        self.pool1 = maintenance(self, storageid=self.storageid[0][0])
    +
    +        self.virtual_machine_with_ha.start(self.api_client)
    +        self.events = self.dbclient.execute(
    +            "select type from usage_event where resource_name='%s';"
    +            % self.list_root_volume[0].name
    +        )
    +        self.assertEqual(len(self.events),
    +                         3,
    +                         "check the usage event table for root disk %s"
    +                         % self.list_root_volume[0].name
    +                         )
    --- End diff --
    
    I wrote it with the assumption that volume-create and volume-delete events will certainly be generated for the operations the script performs; only the number of times they are generated needs to be verified.
    But now I think it is good to have these checks, so I will add them.



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by ksowmya <gi...@git.apache.org>.
Github user ksowmya commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37379847
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +#!/usr/bin/env python
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase
    +from marvin.cloudstackAPI import (enableStorageMaintenance,
    +                                  cancelStorageMaintenance
    +                                  )
    +from marvin.lib.utils import (cleanup_resources,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             VirtualMachine,
    +                             ServiceOffering,
    +                             Cluster,
    +                             StoragePool,
    +                             Volume)
    +from marvin.lib.common import (get_zone,
    +                               get_domain,
    +                               get_template,
    +                               list_hosts
    +                               )
    +from marvin.codes import PASS
    +
    +
    +def maintenance(self, storageid):
    +    """enables maintenance mode of a Storage pool"""
    +
    +    cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.enableStorageMaintenance(cmd)
    +
    --- End diff --
    
    hmm... I thought something like this should work, since it is a classmethod: StoragePool.enableMaintenance(self.api_client, id=self.storageid[0][0])
    I am just trying to confirm you're using the latest base.py, since a couple of these enhancements went into base.py recently.
    If it still doesn't fit in, it's ok. You can continue the way you've done it.
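    For comparison, here is the calling convention the classmethod gives you. The api client and wrapper are stubbed so the sketch runs stand-alone; the real signature in marvin's base.py may differ.
    
    ```python
    # Illustrative stub of the classmethod pattern: FakeApiClient and this
    # StoragePool are fakes, so only the calling shape matters here, not the
    # real marvin API.
    class FakeApiClient:
        def enableStorageMaintenance(self, cmd):
            # A real client would issue the API call; the stub echoes the
            # request back so the result is easy to inspect.
            return {"id": cmd["id"], "state": "Maintenance"}
    
    
    class StoragePool:
        @classmethod
        def enableMaintenance(cls, apiclient, id):
            # Builds and sends what the hand-rolled
            # enableStorageMaintenanceCmd helper would; callers just pass
            # the pool id instead of constructing the command object.
            return apiclient.enableStorageMaintenance({"id": id})
    
    
    resp = StoragePool.enableMaintenance(FakeApiClient(), id="pool-uuid-1")
    ```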



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by sanju1010 <gi...@git.apache.org>.
Github user sanju1010 commented on the pull request:

    https://github.com/apache/cloudstack/pull/713#issuecomment-139975477
  
    LGTM!!



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by ksowmya <gi...@git.apache.org>.
Github user ksowmya commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37379183
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +#!/usr/bin/env python
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase
    +from marvin.cloudstackAPI import (enableStorageMaintenance,
    +                                  cancelStorageMaintenance
    +                                  )
    +from marvin.lib.utils import (cleanup_resources,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             VirtualMachine,
    +                             ServiceOffering,
    +                             Cluster,
    +                             StoragePool,
    +                             Volume)
    +from marvin.lib.common import (get_zone,
    +                               get_domain,
    +                               get_template,
    +                               list_hosts
    +                               )
    +from marvin.codes import PASS
    +
    +
    +def maintenance(self, storageid):
    +    """enables maintenance mode of a Storage pool"""
    +
    +    cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.enableStorageMaintenance(cmd)
    +
    +
    +def cancelmaintenance(self, storageid):
    +    """cancel maintenance mode of a Storage pool"""
    +
    +    cmd = cancelStorageMaintenance.cancelStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.cancelStorageMaintenance(cmd)
    +
    +
    +class testHaPoolMaintenance(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        try:
    +            cls._cleanup = []
    +            cls.testClient = super(
    +                testHaPoolMaintenance,
    +                cls).getClsTestClient()
    +            cls.api_client = cls.testClient.getApiClient()
    +            cls.services = cls.testClient.getParsedTestDataConfig()
    +            # Get Domain, Zone, Template
    +            cls.domain = get_domain(cls.api_client)
    +            cls.zone = get_zone(
    +                cls.api_client,
    +                cls.testClient.getZoneForTests())
    +            cls.template = get_template(
    +                cls.api_client,
    +                cls.zone.id,
    +                cls.services["ostype"]
    +            )
    +            cls.hypervisor = cls.testClient.getHypervisorInfo()
    +            cls.services['mode'] = cls.zone.networktype
    +            cls.services["virtual_machine"]["zoneid"] = cls.zone.id
    +            cls.services["virtual_machine"]["template"] = cls.template.id
    +            cls.clusterWithSufficientPool = None
    +            clusters = Cluster.list(cls.api_client, zoneid=cls.zone.id)
    +
    +            if not validateList(clusters)[0]:
    +
    +                cls.debug(
    +                    "check list cluster response for zone id %s" %
    +                    cls.zone.id)
    +
    +            for cluster in clusters:
    +                cls.pool = StoragePool.list(cls.api_client,
    +                                            clusterid=cluster.id,
    +                                            keyword="NetworkFilesystem"
    +                                            )
    +
    +                if not validateList(cls.pool)[0]:
    +
    +                    cls.debug(
    +                        "check list cluster response for zone id %s" %
    +                        cls.zone.id)
    +
    +                if len(cls.pool) >= 2:
    +                    cls.clusterWithSufficientPool = cluster
    +                    break
    +            if not cls.clusterWithSufficientPool:
    +                return
    +
    +            cls.services["service_offerings"][
    +                "tiny"]["offerha"] = "True"
    +
    +            cls.services_off = ServiceOffering.create(
    +                                  cls.api_client,
    +                                  cls.services["service_offerings"]["tiny"])
    +            cls._cleanup.append(cls.services_off)
    +
    +        except Exception as e:
    +            cls.tearDownClass()
    +            raise Exception("Warning: Exception in setup : %s" % e)
    +        return
    +
    +    def setUp(self):
    +
    +        self.apiClient = self.testClient.getApiClient()
    +        self.dbclient = self.testClient.getDbConnection()
    +        self.cleanup = []
    +        if not self.clusterWithSufficientPool:
    +            self.skipTest(
    +                "sufficient storage not available in any cluster for zone %s" %
    +                self.zone.id)
    +        self.account = Account.create(
    +            self.api_client,
    +            self.services["account"],
    +            domainid=self.domain.id
    +        )
    +        self.cleanup.append(self.account)
    +
    +    def tearDown(self):
    +        # Clean up, terminate the created resources
    +        cancelmaintenance(self, storageid=self.storageid[0][0])
    +        cleanup_resources(self.apiClient, self.cleanup)
    +        return
    +
    +    @classmethod
    +    def tearDownClass(cls):
    +        try:
    +            cleanup_resources(cls.api_client, cls._cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +
    +        return
    +
    +    @attr(tags=["advanced", "cl", "advancedns", "sg",
    +                "basic", "eip", "simulator", "multihost"])
    +    def test_ha_with_storage_maintenance(self):
    +        """put storage in maintenance mode and start ha vm and check usage"""
    +        # Steps
    +        # 1. Create a Compute service offering with the 'Offer HA' option
    +        # selected.
    +        # 2. Create a Guest VM with the compute service offering created above.
    +        # 3. put PS into maintenance  mode
    +        # 4. vm should go in stop state
    +        # 5. start vm ,vm should come up on another storage
    +        # 6. check usage events are getting generated for root disk
    +
    +        host = list_hosts(
    +            self.api_client,
    +            clusterid=self.clusterWithSufficientPool.id)
    +        self.assertEqual(validateList(host)[0],
    +                         PASS,
    +                         "check list host response for cluster id %s"
    +                         % self.clusterWithSufficientPool.id)
    +
    +        self.virtual_machine_with_ha = VirtualMachine.create(
    +            self.api_client,
    +            self.services["virtual_machine"],
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.services_off.id,
    +            hostid=host[0].id
    +        )
    +
    --- End diff --
    
    As per CLOUDSTACK-8745, both the root and data disks seem to have issues, so I think we should verify the events for both the ROOT and DATA disks.
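    A minimal sketch of the kind of check this suggests, as a plain function over event rows. The event type string ("VOLUME.CREATE") and the dict keys are illustrative assumptions, not the actual cloud.usage_event schema; in the test itself the rows would come from self.dbclient.execute(...).

```python
def missing_volume_usage_events(events, expected_types=("ROOT", "DATADISK")):
    """Return the expected volume types that have no VOLUME.CREATE usage event."""
    seen = {e["resource_type"] for e in events
            if e.get("event_type") == "VOLUME.CREATE"}
    return [t for t in expected_types if t not in seen]

# Example: only the ROOT event is present, so the DATADISK one is reported missing.
events = [{"event_type": "VOLUME.CREATE", "resource_type": "ROOT"}]
assert missing_volume_usage_events(events) == ["DATADISK"]
```

    The test could then assert that the returned list is empty after the VM comes up on the new storage.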



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by ksowmya <gi...@git.apache.org>.
Github user ksowmya commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37379467
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +#!/usr/bin/env python
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase
    +from marvin.cloudstackAPI import (enableStorageMaintenance,
    +                                  cancelStorageMaintenance
    +                                  )
    +from marvin.lib.utils import (cleanup_resources,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             VirtualMachine,
    +                             ServiceOffering,
    +                             Cluster,
    +                             StoragePool,
    +                             Volume)
    +from marvin.lib.common import (get_zone,
    +                               get_domain,
    +                               get_template,
    +                               list_hosts
    +                               )
    +from marvin.codes import PASS
    +
    +
    +def maintenance(self, storageid):
    +    """enables maintenance mode of a Storage pool"""
    +
    +    cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.enableStorageMaintenance(cmd)
    +
    +
    +def cancelmaintenance(self, storageid):
    +    """cancel maintenance mode of a Storage pool"""
    +
    +    cmd = cancelStorageMaintenance.cancelStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.cancelStorageMaintenance(cmd)
    +
    +
    +class testHaPoolMaintenance(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        try:
    +            cls._cleanup = []
    +            cls.testClient = super(
    +                testHaPoolMaintenance,
    +                cls).getClsTestClient()
    +            cls.api_client = cls.testClient.getApiClient()
    +            cls.services = cls.testClient.getParsedTestDataConfig()
    +            # Get Domain, Zone, Template
    +            cls.domain = get_domain(cls.api_client)
    +            cls.zone = get_zone(
    +                cls.api_client,
    +                cls.testClient.getZoneForTests())
    +            cls.template = get_template(
    +                cls.api_client,
    +                cls.zone.id,
    +                cls.services["ostype"]
    +            )
    +            cls.hypervisor = cls.testClient.getHypervisorInfo()
    +            cls.services['mode'] = cls.zone.networktype
    +            cls.hypervisor = cls.testClient.getHypervisorInfo()
    +            cls.services["virtual_machine"]["zoneid"] = cls.zone.id
    +            cls.services["virtual_machine"]["template"] = cls.template.id
    +            cls.clusterWithSufficientPool = None
    +            clusters = Cluster.list(cls.api_client, zoneid=cls.zone.id)
    +
    +            if not validateList(clusters)[0]:
    +
    +                cls.debug(
    +                    "check list cluster response for zone id %s" %
    +                    cls.zone.id)
    +
    +            for cluster in clusters:
    +                cls.pool = StoragePool.list(cls.api_client,
    +                                            clusterid=cluster.id,
    +                                            keyword="NetworkFilesystem"
    +                                            )
    +
    +                if not validateList(cls.pool)[0]:
    +
    +                    cls.debug(
    +                        "check list cluster response for zone id %s" %
    +                        cls.zone.id)
    +
    +                if len(cls.pool) >= 2:
    +                    cls.clusterWithSufficientPool = cluster
    +                    break
    +            if not cls.clusterWithSufficientPool:
    +                return
    +
    +            cls.services["service_offerings"][
    +                "tiny"]["offerha"] = "True"
    +
    +            cls.services_off = ServiceOffering.create(
    +                                  cls.api_client,
    +                                  cls.services["service_offerings"]["tiny"])
    +            cls._cleanup.append(cls.services_off)
    +
    +        except Exception as e:
    +            cls.tearDownClass()
    +            raise Exception("Warning: Exception in setup : %s" % e)
    +        return
    +
    +    def setUp(self):
    +
    +        self.apiClient = self.testClient.getApiClient()
    +        self.dbclient = self.testClient.getDbConnection()
    +        self.cleanup = []
    +        if not self.clusterWithSufficientPool:
    +            self.skipTest(
    +                "sufficient storage not available in any cluster for zone %s" %
    +                self.zone.id)
    +        self.account = Account.create(
    --- End diff --
    
    Can you please add the result of running the test with not enough pools, to ensure the test is getting skipped?



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by ksowmya <gi...@git.apache.org>.
Github user ksowmya commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37379515
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +        host = list_hosts(
    +            self.api_client,
    +            clusterid=self.clusterWithSufficientPool.id)
    +        self.assertEqual(validateList(host)[0],
    +                         PASS,
    +                         "check list host response for cluster id %s"
    +                         % self.clusterWithSufficientPool.id)
    +
    +        self.virtual_machine_with_ha = VirtualMachine.create(
    +            self.api_client,
    +            self.services["virtual_machine"],
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.services_off.id,
    +            hostid=host[0].id
    +        )
    +
    --- End diff --
    
    Is there a test already that verifies the data volume, then? If not, I think we should add one here so that the test is complete.



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by ksowmya <gi...@git.apache.org>.
Github user ksowmya commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37381494
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +        host = list_hosts(
    +            self.api_client,
    +            clusterid=self.clusterWithSufficientPool.id)
    +        self.assertEqual(validateList(host)[0],
    +                         PASS,
    +                         "check list host response for cluster id %s"
    +                         % self.clusterWithSufficientPool.id)
    +
    +        self.virtual_machine_with_ha = VirtualMachine.create(
    +            self.api_client,
    +            self.services["virtual_machine"],
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.services_off.id,
    +            hostid=host[0].id
    +        )
    +
    --- End diff --
    
    Makes sense. Could you please open a task/bug for that so that it can be tracked and worked on separately?



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by nitt10prashant <gi...@git.apache.org>.
Github user nitt10prashant commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37379373
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +def maintenance(self, storageid):
    +    """enables maintenance mode of a Storage pool"""
    +
    +    cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.enableStorageMaintenance(cmd)
    +
    --- End diff --
    
    Those methods can only be used with a StoragePool class object.



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by ksowmya <gi...@git.apache.org>.
Github user ksowmya commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/713#discussion_r37379122
  
    --- Diff: test/integration/component/maint/test_ha_pool_maintenance.py ---
    @@ -0,0 +1,229 @@
    +#!/usr/bin/env python
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase
    +from marvin.cloudstackAPI import (enableStorageMaintenance,
    +                                  cancelStorageMaintenance
    +                                  )
    +from marvin.lib.utils import (cleanup_resources,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             VirtualMachine,
    +                             ServiceOffering,
    +                             Cluster,
    +                             StoragePool,
    +                             Volume)
    +from marvin.lib.common import (get_zone,
    +                               get_domain,
    +                               get_template,
    +                               list_hosts
    +                               )
    +from marvin.codes import PASS
    +
    +
    +def maintenance(self, storageid):
    +    """enables maintenance mode of a Storage pool"""
    +
    +    cmd = enableStorageMaintenance.enableStorageMaintenanceCmd()
    +    cmd.id = storageid
    +    return self.api_client.enableStorageMaintenance(cmd)
    +
    --- End diff --
    
    There are now enableMaintenance and cancelMaintenance methods available directly on StoragePool in base.py. It would be better to use those instead of duplicating the logic here?
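    A stubbed sketch of the suggested refactor: calling the pool object's own method rather than a module-level helper that rebuilds the command. The method names mirror marvin's base.py StoragePool, but the fake API client and its responses here are hypothetical, used only so the pattern runs standalone.

    ```python
    # Illustrative stubs -- not marvin itself. The point is the shape of the
    # call site: pool.enableMaintenance(api) instead of maintenance(self, id).

    class FakeApiClient:
        """Hypothetical stand-in for marvin's api_client."""
        def enableStorageMaintenance(self, cmd):
            return {"id": cmd["id"], "state": "Maintenance"}

        def cancelStorageMaintenance(self, cmd):
            return {"id": cmd["id"], "state": "Up"}

    class StoragePool:
        """Mirrors the base.py pattern: the pool knows its own id."""
        def __init__(self, pool_id):
            self.id = pool_id

        def enableMaintenance(self, apiclient):
            return apiclient.enableStorageMaintenance({"id": self.id})

        def cancelMaintenance(self, apiclient):
            return apiclient.cancelStorageMaintenance({"id": self.id})

    api = FakeApiClient()
    pool = StoragePool("pool-1")
    print(pool.enableMaintenance(api)["state"])  # Maintenance
    print(pool.cancelMaintenance(api)["state"])  # Up
    ```

    Keeping these as instance methods avoids every test re-creating the Cmd object and keeps the pool id in one place.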



[GitHub] cloudstack pull request: CLOUDSTACK-8745 : verify usage after root...

Posted by pavanb018 <gi...@git.apache.org>.
Github user pavanb018 commented on the pull request:

    https://github.com/apache/cloudstack/pull/713#issuecomment-141880475
  
    The test looks good to me.

