Posted to dev@cloudstack.apache.org by pritisarap12 <gi...@git.apache.org> on 2015/03/12 10:33:14 UTC

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

GitHub user pritisarap12 opened a pull request:

    https://github.com/apache/cloudstack/pull/117

    CLOUDSTACK-8380: Adding automation test cases for VM/Volume snapshot testpath

    CLOUDSTACK-8380: Adding automation test cases for VM/Volume snapshot testpath

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/pritisarap12/cloudstack CLOUDSTACK-8308-Adding-automation-test-cases-for-VM/Volume-snapshot-testpath

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/cloudstack/pull/117.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #117
    
----
commit acf9cf042f32f55ca35d19f785da05872a9f18cb
Author: pritisarap12 <pr...@clogeny.com>
Date:   2015-03-12T09:25:09Z

    CLOUDSTACK-8380: Adding automation test cases for VM/Volume snapshot testpath

----


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r27650336
  
    --- Diff: test/integration/testpaths/testpath_storage_migration.py ---
    @@ -479,6 +283,19 @@ def setUpClass(cls):
                 )
                 cls._cleanup.append(cls.disk_offering_cluster1)
     
    +            cls.new_virtual_machine = VirtualMachine.create(
    +                cls.apiclient,
    +                cls.testdata["small"],
    +                templateid=cls.template.id,
    +                accountid=cls.account.name,
    +                domainid=cls.account.domainid,
    +                serviceofferingid=cls.service_offering.id,
    +                zoneid=cls.zone.id,
    +                mode=cls.zone.networktype
    +            )
    +
    +            cls.new_virtual_machine.start(cls.apiclient)
    --- End diff --
    
    The VM is in Running state by default as long as we do not pass startvm=False, so there is no need to start the VM explicitly after deploying it.
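
    For illustration, a minimal sketch of the suggested change (based on the
    snippet above; startvm defaults to true on the deployVirtualMachine API,
    so the deployed VM comes up in Running state):

        cls.new_virtual_machine = VirtualMachine.create(
            cls.apiclient,
            cls.testdata["small"],
            templateid=cls.template.id,
            accountid=cls.account.name,
            domainid=cls.account.domainid,
            serviceofferingid=cls.service_offering.id,
            zoneid=cls.zone.id,
            mode=cls.zone.networktype
            # startvm is not passed, so the VM is deployed in Running state;
            # the explicit start() call is dropped.
        )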


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r27651210
  
    --- Diff: test/integration/testpaths/testpath_volume_recurring_snap.py ---
    @@ -0,0 +1,1015 @@
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +""" Test cases for VM/Volume recurring snapshot Test Path
    +"""
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase, unittest
    +from marvin.lib.utils import (cleanup_resources,
    +                              is_snapshot_on_nfs,
    +                              validateList
    +                              )
    +from marvin.lib.base import (Account,
    +                             ServiceOffering,
    +                             DiskOffering,
    +                             VirtualMachine,
    +                             SnapshotPolicy,
    +                             Snapshot,
    +                             Configurations
    +                             )
    +from marvin.lib.common import (get_domain,
    +                               get_zone,
    +                               get_template,
    +                               list_volumes,
    +                               list_snapshots,
    +                               list_snapshot_policy
    +                               )
    +
    +from marvin.codes import PASS
    +
    +import time
    +
    +
    +class TestVolumeSnapshot(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        testClient = super(TestVolumeSnapshot, cls).getClsTestClient()
    +        cls.apiclient = testClient.getApiClient()
    +        cls.testdata = testClient.getParsedTestDataConfig()
    +        cls.hypervisor = cls.testClient.getHypervisorInfo()
    +
    +        # Get Zone, Domain and templates
    +        cls.domain = get_domain(cls.apiclient)
    +        cls.zone = get_zone(cls.apiclient, testClient.getZoneForTests())
    +
    +        cls.template = get_template(
    +            cls.apiclient,
    +            cls.zone.id,
    +            cls.testdata["ostype"])
    +
    +        cls._cleanup = []
    +
    +        if cls.hypervisor.lower() not in [
    +                "vmware",
    +                "kvm",
    +                "xenserver"]:
    +            raise unittest.SkipTest(
    +                "Storage migration not supported on %s" %
    +                cls.hypervisor)
    +
    +        try:
    +            # Create an account
    +            cls.account = Account.create(
    +                cls.apiclient,
    +                cls.testdata["account"],
    +                domainid=cls.domain.id
    +            )
    +            cls._cleanup.append(cls.account)
    +            # Create user api client of the account
    +            cls.userapiclient = testClient.getUserApiClient(
    +                UserName=cls.account.name,
    +                DomainName=cls.account.domain
    +            )
    +            # Create Service offering
    +            cls.service_offering = ServiceOffering.create(
    +                cls.apiclient,
    +                cls.testdata["service_offering"],
    +            )
    +            cls._cleanup.append(cls.service_offering)
    +            # Create Disk offering
    +            cls.disk_offering = DiskOffering.create(
    +                cls.apiclient,
    +                cls.testdata["disk_offering"],
    +            )
    +            cls._cleanup.append(cls.disk_offering)
    +            # Deploy A VM
    +            cls.vm_1 = VirtualMachine.create(
    +                cls.userapiclient,
    +                cls.testdata["small"],
    +                templateid=cls.template.id,
    +                accountid=cls.account.name,
    +                domainid=cls.account.domainid,
    +                serviceofferingid=cls.service_offering.id,
    +                zoneid=cls.zone.id,
    +                diskofferingid=cls.disk_offering.id,
    +                mode=cls.zone.networktype
    +            )
    +
    +            cls.volume = list_volumes(
    +                cls.apiclient,
    +                virtualmachineid=cls.vm_1.id,
    +                type='ROOT',
    +                listall=True
    +            )
    +
    +            cls.data_volume = list_volumes(
    +                cls.apiclient,
    +                virtualmachineid=cls.vm_1.id,
    +                type='DATADISK',
    +                listall=True
    +            )
    +
    +        except Exception as e:
    +            cls.tearDownClass()
    +            raise e
    +        return
    +
    +    @classmethod
    +    def tearDownClass(cls):
    +        try:
    +            cleanup_resources(cls.apiclient, cls._cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +
    +    def setUp(self):
    +        self.apiclient = self.testClient.getApiClient()
    +        self.dbclient = self.testClient.getDbConnection()
    +        self.cleanup = []
    +
    +    def tearDown(self):
    +        try:
    +            cleanup_resources(self.apiclient, self.cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +        return
    +
    +    @attr(tags=["advanced", "basic"])
    +    def test_01_volume_snapshot(self):
    +        """ Test Volume (root) Snapshot
    +        # 1. Create Hourly, Daily,Weekly recurring snapshot policy for ROOT disk and \
    +                    Verify the presence of the corresponding snapshots on the Secondary Storage
    +        # 2. Delete the snapshot policy and verify the entry as Destroyed in snapshot_schedule
    +        # 3. Verify that maxsnaps should not consider manual snapshots for deletion
    +        # 4. Snapshot policy should reflect the correct timezone
    +        # 5. Verify that listSnapshotPolicies() should return all snapshot policies \
    +                that belong to the account (both manual and recurring snapshots)
    +        # 6. Verify that listSnapshotPolicies() should not return snapshot \
    +                policies that have been deleted
    +        # 7. Verify that snapshot should not be created for VM in Destroyed state
    +        # 8. Verify that snapshot should get created after resuming the VM
    +        # 9. Verify that All the recurring policies associated with the VM should be \
    +                deleted after VM get destroyed.
    +        """
    +        # Step 1
    +        recurring_snapshot = SnapshotPolicy.create(
    +            self.apiclient,
    +            self.volume[0].id,
    +            self.testdata["recurring_snapshot"]
    --- End diff --
    
    What is the snapshot policy interval here? Rather than passing the same data as in the test data, define the data in the test case itself with whichever interval is wanted, and pass that data to the snapshot policy. The test case behavior should not change if the dict in test_data.py is changed in the future.
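
    For illustration, a minimal sketch of defining the interval inside the
    test case (the dict keys follow the createSnapshotPolicy API; the values
    shown here are assumptions, not taken from test_data.py):

        # Hypothetical inline policy data keeping the interval explicit in the test
        hourly_snapshot_policy = {
            "intervaltype": "HOURLY",   # interval under test
            "schedule": 1,              # minute of the hour
            "maxsnaps": 2,
            "timezone": "US/Pacific"
        }

        recurring_snapshot = SnapshotPolicy.create(
            self.apiclient,
            self.volume[0].id,
            hourly_snapshot_policy
        )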


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by bhaisaab <gi...@git.apache.org>.
Github user bhaisaab commented on the pull request:

    https://github.com/apache/cloudstack/pull/117#issuecomment-83426630
  
    @gauravaradhye is it good to merge?


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26833696
  
    --- Diff: tools/marvin/marvin/lib/base.py ---
    @@ -1086,7 +1092,7 @@ def create(cls, apiclient, services, volumeid=None,
         @classmethod
         def register(cls, apiclient, services, zoneid=None,
                      account=None, domainid=None, hypervisor=None,
    -                 projectid=None, details=None):
    --- End diff --
    
    Any reason to remove the existing parameter? Are you sure it won't affect any of the existing test cases?


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26833819
  
    --- Diff: tools/marvin/marvin/lib/common.py ---
    @@ -1395,3 +1396,218 @@ def isNetworkDeleted(apiclient, networkid, timeout=600):
             time.sleep(60)
         #end while
         return networkDeleted
    +
    +
    +def createChecksum(testdata, 
    +                   virtual_machine, 
    +                   disk, 
    +                   disk_type):
    +
    +    """ Calculate the MD5 checksum of the disk by writing \
    +		data on the disk where disk_type is either root disk or data disk 
    +	@return: returns the calculated checksum"""
    +
    +    random_data_0 = random_gen(size=100)
    +    # creating checksum(MD5)
    +    m = hashlib.md5()
    +    m.update(random_data_0)
    +    ckecksum_random_data_0 = m.hexdigest()
    +    try:
    +        ssh_client = SshClient(
    +            virtual_machine.ssh_ip,
    +            virtual_machine.ssh_port,
    +            virtual_machine.username,
    +            virtual_machine.password
    +        )
    +    except Exception: 
    +        raise Exception("SSH access failed for server with IP address: %s" %
    +                    virtual_machine.ssh_ip)
    +
    +    # Format partition using ext3
    +
    +    format_volume_to_ext3(
    +        ssh_client,
    +        testdata["volume_write_path"][
    +            virtual_machine.hypervisor][disk_type]
    +    )
    +    cmds = ["fdisk -l",
    +            "mkdir -p %s" % testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                testdata["volume_write_path"][
    +                    virtual_machine.hypervisor][disk_type],
    +                testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            "mkdir -p %s/%s/%s " % (
    +                testdata["data_write_paths"]["mount_dir"],
    +                testdata["data_write_paths"]["sub_dir"],
    +                testdata["data_write_paths"]["sub_lvl_dir1"],
    +            ),
    +            "echo %s > %s/%s/%s/%s" % (
    +                random_data_0,
    +                testdata["data_write_paths"]["mount_dir"],
    +                testdata["data_write_paths"]["sub_dir"],
    +                testdata["data_write_paths"]["sub_lvl_dir1"],
    +                testdata["data_write_paths"]["random_data"]
    +            ),
    +            "cat %s/%s/%s/%s" % (
    +                testdata["data_write_paths"]["mount_dir"],
    +                testdata["data_write_paths"]["sub_dir"],
    +                testdata["data_write_paths"]["sub_lvl_dir1"],
    +                testdata["data_write_paths"]["random_data"]
    +            )
    +            ]
    +
    +    for c in cmds:
    +        ssh_client.execute(c)
    +
    +    # Unmount the storage
    +    cmds = [
    +        "umount %s" % (testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        ssh_client.execute(c)
    +
    +    return ckecksum_random_data_0
    +
    +
    +def compareChecksum(
    +        apiclient,
    +        testdata,
    +        original_checksum,
    +        disk_type,
    +        template_id,
    +        account_name,
    +        account_domainid,
    +        service_offering_id,
    +        zone_id,
    +        zone_networktype,
    +        virt_machine=None,
    +        disk=None,
    +        new_vm=False,
    +        ):
    +    """
    +    Create md5 checksum of the data present on the disk and compare
    +    it with the given checksum
    +    """
    +
    +    if disk_type == "datadiskdevice_1" and new_vm:
    +        new_virtual_machine = VirtualMachine.create(
    --- End diff --
    
    Can we create the VM outside the function and pass it in as a parameter?
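
    For illustration, a minimal sketch of the suggested refactor (variable
    names such as data_disk are illustrative, not from the PR):

        # Caller deploys the VM once and hands it to the helper
        new_virtual_machine = VirtualMachine.create(
            apiclient,
            testdata["small"],
            templateid=template_id,
            accountid=account_name,
            domainid=account_domainid,
            serviceofferingid=service_offering_id,
            zoneid=zone_id,
            mode=zone_networktype
        )

        compareChecksum(
            apiclient,
            testdata,
            original_checksum,
            "datadiskdevice_1",
            virt_machine=new_virtual_machine,
            disk=data_disk,          # hypothetical volume object
            new_vm=True
        )

        # compareChecksum() itself would then only attach, mount and verify,
        # and would no longer need the template/account/offering/zone arguments.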


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26301214
  
    --- Diff: test/integration/smoke/test_vm_snapshots.py ---
    @@ -16,14 +16,16 @@
     # under the License.
    --- End diff --
    
    This file seems to be included by mistake; it already exists in this repo.
    Please remove the relevant commit from the PR and include only the desired commit.
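
    For illustration, one way to drop the unintended commit from the PR branch
    (the commit count is a placeholder; adjust it to the actual history):

        $ git rebase -i HEAD~2    # mark the unwanted test_vm_snapshots.py commit as "drop"
        $ git push -f origin CLOUDSTACK-8308-Adding-automation-test-cases-for-VM/Volume-snapshot-testpath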


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r27650780
  
    --- Diff: test/integration/testpaths/testpath_volume_recurring_snap.py ---
    @@ -0,0 +1,1015 @@
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +""" Test cases for VM/Volume recurring snapshot Test Path
    +"""
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase, unittest
    +from marvin.lib.utils import (cleanup_resources,
    +                              is_snapshot_on_nfs,
    +                              validateList
    +                              )
    +from marvin.lib.base import (Account,
    +                             ServiceOffering,
    +                             DiskOffering,
    +                             VirtualMachine,
    +                             SnapshotPolicy,
    +                             Snapshot,
    +                             Configurations
    +                             )
    +from marvin.lib.common import (get_domain,
    +                               get_zone,
    +                               get_template,
    +                               list_volumes,
    +                               list_snapshots,
    +                               list_snapshot_policy
    +                               )
    +
    +from marvin.codes import PASS
    +
    +import time
    +
    +
    +class TestVolumeSnapshot(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        testClient = super(TestVolumeSnapshot, cls).getClsTestClient()
    +        cls.apiclient = testClient.getApiClient()
    +        cls.testdata = testClient.getParsedTestDataConfig()
    +        cls.hypervisor = cls.testClient.getHypervisorInfo()
    +
    +        # Get Zone, Domain and templates
    +        cls.domain = get_domain(cls.apiclient)
    +        cls.zone = get_zone(cls.apiclient, testClient.getZoneForTests())
    +
    +        cls.template = get_template(
    +            cls.apiclient,
    +            cls.zone.id,
    +            cls.testdata["ostype"])
    +
    +        cls._cleanup = []
    +
    +        if cls.hypervisor.lower() not in [
    +                "vmware",
    +                "kvm",
    +                "xenserver"]:
    +            raise unittest.SkipTest(
    +                "Storage migration not supported on %s" %
    +                cls.hypervisor)
    +
    +        try:
    +            # Create an account
    +            cls.account = Account.create(
    +                cls.apiclient,
    +                cls.testdata["account"],
    +                domainid=cls.domain.id
    +            )
    +            cls._cleanup.append(cls.account)
    +            # Create user api client of the account
    +            cls.userapiclient = testClient.getUserApiClient(
    +                UserName=cls.account.name,
    +                DomainName=cls.account.domain
    +            )
    +            # Create Service offering
    +            cls.service_offering = ServiceOffering.create(
    +                cls.apiclient,
    +                cls.testdata["service_offering"],
    +            )
    +            cls._cleanup.append(cls.service_offering)
    +            # Create Disk offering
    +            cls.disk_offering = DiskOffering.create(
    +                cls.apiclient,
    +                cls.testdata["disk_offering"],
    +            )
    +            cls._cleanup.append(cls.disk_offering)
    +            # Deploy A VM
    +            cls.vm_1 = VirtualMachine.create(
    +                cls.userapiclient,
    +                cls.testdata["small"],
    +                templateid=cls.template.id,
    +                accountid=cls.account.name,
    +                domainid=cls.account.domainid,
    +                serviceofferingid=cls.service_offering.id,
    +                zoneid=cls.zone.id,
    +                diskofferingid=cls.disk_offering.id,
    +                mode=cls.zone.networktype
    +            )
    +
    +            cls.volume = list_volumes(
    +                cls.apiclient,
    +                virtualmachineid=cls.vm_1.id,
    +                type='ROOT',
    +                listall=True
    +            )
    +
    +            cls.data_volume = list_volumes(
    +                cls.apiclient,
    +                virtualmachineid=cls.vm_1.id,
    +                type='DATADISK',
    +                listall=True
    +            )
    +
    +        except Exception as e:
    +            cls.tearDownClass()
    +            raise e
    +        return
    +
    +    @classmethod
    +    def tearDownClass(cls):
    +        try:
    +            cleanup_resources(cls.apiclient, cls._cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +
    +    def setUp(self):
    +        self.apiclient = self.testClient.getApiClient()
    +        self.dbclient = self.testClient.getDbConnection()
    +        self.cleanup = []
    +
    +    def tearDown(self):
    +        try:
    +            cleanup_resources(self.apiclient, self.cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +        return
    +
    +    @attr(tags=["advanced", "basic"])
    +    def test_01_volume_snapshot(self):
    +        """ Test Volume (root) Snapshot
    +        # 1. Create Hourly, Daily,Weekly recurring snapshot policy for ROOT disk and \
    --- End diff --
    
    The backslash line continuations are not needed in comments.
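
    For illustration, the same step reads fine without the continuation
    backslash:

        # 1. Create Hourly, Daily, Weekly recurring snapshot policies for the ROOT disk
        #    and verify the presence of the corresponding snapshots on the Secondary Storage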


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26833788
  
    --- Diff: tools/marvin/marvin/lib/common.py ---
    @@ -1395,3 +1396,218 @@ def isNetworkDeleted(apiclient, networkid, timeout=600):
             time.sleep(60)
         #end while
         return networkDeleted
    +
    +
    +def createChecksum(testdata, 
    +                   virtual_machine, 
    +                   disk, 
    +                   disk_type):
    +
    +    """ Calculate the MD5 checksum of the disk by writing \
    +		data on the disk where disk_type is either root disk or data disk 
    +	@return: returns the calculated checksum"""
    +
    +    random_data_0 = random_gen(size=100)
    +    # creating checksum(MD5)
    +    m = hashlib.md5()
    +    m.update(random_data_0)
    +    ckecksum_random_data_0 = m.hexdigest()
    +    try:
    +        ssh_client = SshClient(
    +            virtual_machine.ssh_ip,
    +            virtual_machine.ssh_port,
    +            virtual_machine.username,
    +            virtual_machine.password
    +        )
    +    except Exception: 
    +        raise Exception("SSH access failed for server with IP address: %s" %
    +                    virtual_machine.ssh_ip)
    +
    +    # Format partition using ext3
    +
    +    format_volume_to_ext3(
    +        ssh_client,
    +        testdata["volume_write_path"][
    +            virtual_machine.hypervisor][disk_type]
    +    )
    +    cmds = ["fdisk -l",
    +            "mkdir -p %s" % testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                testdata["volume_write_path"][
    +                    virtual_machine.hypervisor][disk_type],
    +                testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            "mkdir -p %s/%s/%s " % (
    +                testdata["data_write_paths"]["mount_dir"],
    +                testdata["data_write_paths"]["sub_dir"],
    +                testdata["data_write_paths"]["sub_lvl_dir1"],
    +            ),
    +            "echo %s > %s/%s/%s/%s" % (
    +                random_data_0,
    +                testdata["data_write_paths"]["mount_dir"],
    +                testdata["data_write_paths"]["sub_dir"],
    +                testdata["data_write_paths"]["sub_lvl_dir1"],
    +                testdata["data_write_paths"]["random_data"]
    +            ),
    +            "cat %s/%s/%s/%s" % (
    +                testdata["data_write_paths"]["mount_dir"],
    +                testdata["data_write_paths"]["sub_dir"],
    +                testdata["data_write_paths"]["sub_lvl_dir1"],
    +                testdata["data_write_paths"]["random_data"]
    +            )
    +            ]
    +
    +    for c in cmds:
    +        ssh_client.execute(c)
    +
    +    # Unmount the storage
    +    cmds = [
    +        "umount %s" % (testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        ssh_client.execute(c)
    +
    +    return ckecksum_random_data_0
    +
    +
    +def compareChecksum(
    +        apiclient,
    +        testdata,
    +        original_checksum,
    +        disk_type,
    +        template_id,
    +        account_name,
    +        account_domainid,
    +        service_offering_id,
    +        zone_id,
    +        zone_networktype,
    +        virt_machine=None,
    +        disk=None,
    +        new_vm=False,
    +        ):
    +    """
    +    Create md5 checksum of the data present on the disk and compare
    +    it with the given checksum
    +    """
    +
    +    if disk_type == "datadiskdevice_1" and new_vm:
    +        new_virtual_machine = VirtualMachine.create(
    +            apiclient,
    +            testdata["small"],
    +            templateid=template_id,
    +            accountid=account_name,
    +            domainid=account_domainid,
    +            serviceofferingid=service_offering_id,
    +            zoneid=zone_id,
    +            mode=zone_networktype
    +        )
    +
    +        new_virtual_machine.start(apiclient)
    +
    +
    +        new_virtual_machine.attach_volume(
    +            apiclient,
    +            disk
    +        )
    +
    +        # Rebooting is required so that newly attached disks are detected
    +        new_virtual_machine.reboot(apiclient)
    +
    +    else:
    +        # If the disk is root disk then no need to create new VM
    +        # Just start the original machine on which root disk is
    +        new_virtual_machine = virt_machine
    +        if new_virtual_machine.state != "Running":
    +            new_virtual_machine.start(apiclient)
    +
    +    try:
    +        # Login to VM to verify test directories and files
    +
    +        ssh = SshClient(
    +            new_virtual_machine.ssh_ip,
    +            new_virtual_machine.ssh_port,
    +            new_virtual_machine.username,
    +            new_virtual_machine.password
    +        )
    +    except Exception:
    +        raise Exception("SSH access failed for server with IP address: %s" %
    +                    new_virtual_machine.ssh_ip)
    +
    +    # Mount datadiskdevice_1 because this is the first data disk of the new
    +    # virtual machine
    +    cmds = ["blkid",
    +            "fdisk -l",
    +            "mkdir -p %s" % testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                testdata["volume_write_path"][
    +                    new_virtual_machine.hypervisor][disk_type],
    +                testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            ]
    +
    +    for c in cmds:
    +        ssh.execute(c)
    +
    +    returned_data_0 = ssh.execute(
    +        "cat %s/%s/%s/%s" % (
    +            testdata["data_write_paths"]["mount_dir"],
    +            testdata["data_write_paths"]["sub_dir"],
    +            testdata["data_write_paths"]["sub_lvl_dir1"],
    +            testdata["data_write_paths"]["random_data"]
    +        ))
    +
    +    n = hashlib.md5()
    +    n.update(returned_data_0[0])
    +    ckecksum_returned_data_0 = n.hexdigest()
    +
    +
    +    # Verify returned data
    +    assert original_checksum == ckecksum_returned_data_0, \
    +        "Cheskum does not match with checksum of original data"
    +
    +    # Unmount the Sec Storage
    +    cmds = [
    +        "umount %s" % (testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        ssh.execute(c)
    +
    +    if new_vm:
    +        new_virtual_machine.detach_volume(
    +            apiclient,
    +            disk
    +        )
    +
    +        new_virtual_machine.delete(apiclient)
    +
    +    return
    +
    +
    +def verifyRouterState(apiclient, routerid, state, listall=True):
    --- End diff --
    
    This is an unwanted function added in this PR; it is not related to this test path.


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by bhaisaab <gi...@git.apache.org>.
Github user bhaisaab commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26293221
  
    --- Diff: test/integration/testpaths/testpath_volume_snapshot.py ---
    @@ -0,0 +1,744 @@
    +# or more contributor license agreements.  See the NOTICE file
    --- End diff --
    
    The license header is incorrect; RAT will complain. Please fix it.
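
    For reference, the standard ASF header (as used in the other test files in
    this PR) starts with:

        # Licensed to the Apache Software Foundation (ASF) under one
        # or more contributor license agreements.  See the NOTICE file
        # distributed with this work for additional information
        # regarding copyright ownership.  The ASF licenses this file
        # ... (remaining lines as in testpath_volume_recurring_snap.py above)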


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26748307
  
    --- Diff: tools/marvin/marvin/lib/common.py ---
    @@ -1395,3 +1399,199 @@ def isNetworkDeleted(apiclient, networkid, timeout=600):
             time.sleep(60)
         #end while
         return networkDeleted
    +
    +
    +def createChecksum(self, virtual_machine, disk, disk_type):
    +    """ Write data on the disk and return the md5 checksum"""
    +
    +    random_data_0 = random_gen(size=100)
    +    # creating checksum(MD5)
    +    m = hashlib.md5()
    +    m.update(random_data_0)
    +    ckecksum_random_data_0 = m.hexdigest()
    +    try:
    +        ssh_client = SshClient(
    +            virtual_machine.ssh_ip,
    +            virtual_machine.ssh_port,
    +            virtual_machine.username,
    +            virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH failed for VM: %s" %
    +                  e)
    +
    +    self.debug("Formatting volume: %s to ext3" % disk.id)
    +    # Format partition using ext3
    +    # Note that this is the second data disk partition of virtual machine
    +    # as it was already containing data disk before attaching the new volume,
    +    # Hence datadiskdevice_2
    +
    +    format_volume_to_ext3(
    +        ssh_client,
    +        self.testdata["volume_write_path"][
    +            virtual_machine.hypervisor][disk_type]
    +    )
    +    cmds = ["fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            "mkdir -p %s/%s/%s " % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            ),
    +            "echo %s > %s/%s/%s/%s" % (
    +                random_data_0,
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            ),
    +            "cat %s/%s/%s/%s" % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            )
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh_client.execute(c)
    +        self.debug(result)
    +
    +    # Unmount the storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh_client.execute(c)
    +
    +    return ckecksum_random_data_0
    +
    +
    +def compareChecksum(
    --- End diff --
    
    Same changes as above for this function too.


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on the pull request:

    https://github.com/apache/cloudstack/pull/117#issuecomment-83533242
  
    Also please add the test case execution results.


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r27650440
  
    --- Diff: test/integration/testpaths/testpath_storage_migration.py ---
    @@ -659,11 +478,11 @@ def test_01_migrate_root_and_data_disk_nonlive(self):
             vm_cluster.start(self.userapiclient)
     
             compareChecksum(
    -            self,
    +            self.apiclient,
    +            self.testdata,
                 checksum_random_root_cluster,
    --- End diff --
    
    Please specify parameter names so it is clear exactly what data we are passing.
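
    For illustration, a minimal sketch of the call with keyword arguments
    (the disk_type value and variable names here are assumptions based on the
    surrounding test, not the actual diff):

        compareChecksum(
            self.apiclient,
            self.testdata,
            original_checksum=checksum_random_root_cluster,
            disk_type="rootdiskdevice",
            template_id=self.template.id,
            account_name=self.account.name,
            account_domainid=self.account.domainid,
            service_offering_id=self.service_offering_cluster1.id,
            zone_id=self.zone.id,
            zone_networktype=self.zone.networktype,
            virt_machine=vm_cluster,
            new_vm=False
        )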


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on the pull request:

    https://github.com/apache/cloudstack/pull/117#issuecomment-88880495
  
    Will review the remaining changes tomorrow. A quick question: running the test suite should not leave anything behind in the setup. Is this taken care of?


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26748285
  
    --- Diff: tools/marvin/marvin/lib/common.py ---
    @@ -1395,3 +1399,199 @@ def isNetworkDeleted(apiclient, networkid, timeout=600):
             time.sleep(60)
         #end while
         return networkDeleted
    +
    +
    +def createChecksum(self, virtual_machine, disk, disk_type):
    +    """ Write data on the disk and return the md5 checksum"""
    --- End diff --
    
    Add a detailed docstring.
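
    For illustration, one possible shape for the docstring (parameter
    descriptions are assumptions based on how the helper is used):

        def createChecksum(self, virtual_machine, disk, disk_type):
            """Write random data to the given disk and return its md5 checksum.

            :param virtual_machine: VM the disk is attached to; must be reachable over SSH
            :param disk:            volume object the data is written to
            :param disk_type:       key into testdata["volume_write_path"],
                                    e.g. "datadiskdevice_1"
            :return:                md5 hexdigest of the random data written to the disk
            """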


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26301476
  
    --- Diff: test/integration/testpaths/testpath_volume_snapshot.py ---
    @@ -0,0 +1,745 @@
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +""" Test cases for VM/Volume snapshot Test Path
    +"""
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase, unittest
    +from marvin.lib.utils import (cleanup_resources,
    +                              random_gen,
    +                              format_volume_to_ext3,
    +                              is_snapshot_on_nfs,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             ServiceOffering,
    +                             DiskOffering,
    +                             Template,
    +                             VirtualMachine,
    +                             Snapshot
    +                             )
    +from marvin.lib.common import (get_domain,
    +                               get_zone,
    +                               get_template,
    +                               list_volumes,
    +                               list_snapshots,
    +                               list_events,
    +                               )
    +
    +
    +import hashlib
    +from marvin.sshClient import SshClient
    +
    +from marvin.codes import PASS
    +
    +
    +def createChecksum(self, virtual_machine, disk, disk_type):
    +    """ Write data on the disk and return the md5 checksum"""
    +
    +    random_data_0 = random_gen(size=100)
    +    # creating checksum(MD5)
    +    m = hashlib.md5()
    +    m.update(random_data_0)
    +    ckecksum_random_data_0 = m.hexdigest()
    +    try:
    +        ssh_client = SshClient(
    +            virtual_machine.ssh_ip,
    +            virtual_machine.ssh_port,
    +            virtual_machine.username,
    +            virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH failed for VM: %s" %
    +                  e)
    +
    +    self.debug("Formatting volume: %s to ext3" % disk.id)
    +    # Format partition using ext3
    +    # Note that this is the second data disk partition of virtual machine
    +    # as it was already containing data disk before attaching the new volume,
    +    # Hence datadiskdevice_2
    +
    +    format_volume_to_ext3(
    +        ssh_client,
    +        self.testdata["volume_write_path"][
    +            virtual_machine.hypervisor][disk_type]
    +    )
    +    cmds = ["fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            "mkdir -p %s/%s/%s " % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            ),
    +            "echo %s > %s/%s/%s/%s" % (
    +                random_data_0,
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            ),
    +            "cat %s/%s/%s/%s" % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            )
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh_client.execute(c)
    +        self.debug(result)
    +
    +    # Unmount the storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh_client.execute(c)
    +
    +    return ckecksum_random_data_0
    +
    +
    +def compareChecksum(
    +        self,
    +        original_checksum,
    +        disk_type,
    +        virt_machine=None,
    +        disk=None,
    +        new_vm=False):
    +    """
    +    Create md5 checksum of the data present on the disk and compare
    +    it with the given checksum
    +    """
    +
    +    if disk_type == "datadiskdevice_1" and new_vm:
    +        new_virtual_machine = VirtualMachine.create(
    +            self.userapiclient,
    +            self.testdata["small"],
    +            templateid=self.template.id,
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.service_offering_cluster1.id,
    +            zoneid=self.zone.id,
    +            mode=self.zone.networktype
    +        )
    +
    +        new_virtual_machine.start(self.userapiclient)
    +
    +        self.debug("Attaching volume: %s to VM: %s" % (
    +            disk.id,
    +            new_virtual_machine.id
    +        ))
    +
    +        new_virtual_machine.attach_volume(
    +            self.apiclient,
    +            disk
    +        )
    +
    +        # Rebooting is required so that newly attached disks are detected
    +        self.debug("Rebooting : %s" % new_virtual_machine.id)
    +        new_virtual_machine.reboot(self.apiclient)
    +
    +    else:
    +        # If the disk is root disk then no need to create new VM
    +        # Just start the original machine on which root disk is
    +        new_virtual_machine = virt_machine
    +        if new_virtual_machine.state != "Running":
    +            new_virtual_machine.start(self.userapiclient)
    +
    +    try:
    +        # Login to VM to verify test directories and files
    +
    +        self.debug(
    +            "SSH into (Public IP: ) %s " % new_virtual_machine.ssh_ip)
    +        ssh = SshClient(
    +            new_virtual_machine.ssh_ip,
    +            new_virtual_machine.ssh_port,
    +            new_virtual_machine.username,
    +            new_virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH access failed for VM: %s, Exception: %s" %
    +                  (new_virtual_machine.ipaddress, e))
    +
    +    # Mount datadiskdevice_1 because this is the first data disk of the new
    +    # virtual machine
    +    cmds = ["blkid",
    +            "fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    new_virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh.execute(c)
    +        self.debug(result)
    +
    +    returned_data_0 = ssh.execute(
    +        "cat %s/%s/%s/%s" % (
    +            self.testdata["data_write_paths"]["mount_dir"],
    +            self.testdata["data_write_paths"]["sub_dir"],
    +            self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            self.testdata["data_write_paths"]["random_data"]
    +        ))
    +
    +    n = hashlib.md5()
    +    n.update(returned_data_0[0])
    +    ckecksum_returned_data_0 = n.hexdigest()
    +
    +    self.debug("returned_data_0: %s" % returned_data_0[0])
    +
    +    # Verify returned data
    +    self.assertEqual(
    +        original_checksum,
    +        ckecksum_returned_data_0,
    +        "Cheskum does not match with checksum of original data"
    +    )
    +
    +    # Unmount the Sec Storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh.execute(c)
    +
    +    if new_vm:
    +        new_virtual_machine.detach_volume(
    +            self.apiclient,
    +            disk
    +        )
    +
    +        new_virtual_machine.delete(self.apiclient)
    +
    +    return
    +
    +
    +class TestVolumeSnapshot(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        testClient = super(TestVolumeSnapshot, cls).getClsTestClient()
    +        cls.apiclient = testClient.getApiClient()
    +        cls.testdata = testClient.getParsedTestDataConfig()
    +        cls.hypervisor = cls.testClient.getHypervisorInfo()
    +
    +        # Get Zone, Domain and templates
    +        cls.domain = get_domain(cls.apiclient)
    +        cls.zone = get_zone(cls.apiclient, testClient.getZoneForTests())
    +
    +        cls.template = get_template(
    +            cls.apiclient,
    +            cls.zone.id,
    +            cls.testdata["ostype"])
    +
    +        cls._cleanup = []
    +
    +        if cls.hypervisor.lower() not in [
    +                "vmware",
    +                "kvm",
    +                "xenserver"]:
    +            raise unittest.SkipTest(
    +                "Storage migration not supported on %s" %
    +                cls.hypervisor)
    +
    +        try:
    +
    +            # Create an account
    +            cls.account = Account.create(
    +                cls.apiclient,
    +                cls.testdata["account"],
    +                domainid=cls.domain.id
    +            )
    +            cls._cleanup.append(cls.account)
    +
    +            # Create user api client of the account
    +            cls.userapiclient = testClient.getUserApiClient(
    +                UserName=cls.account.name,
    +                DomainName=cls.account.domain
    +            )
    +
    +            # Create Service offering
    +
    +            cls.service_offering_cluster1 = ServiceOffering.create(
    +                cls.apiclient,
    +                cls.testdata["service_offering"],
    +            )
    +            cls._cleanup.append(cls.service_offering_cluster1)
    +
    +            # Create Disk offering
    +            cls.disk_offering_cluster1 = DiskOffering.create(
    +                cls.apiclient,
    +                cls.testdata["disk_offering"],
    +            )
    +            cls._cleanup.append(cls.disk_offering_cluster1)
    +
    +        except Exception as e:
    +            cls.tearDownClass()
    +            raise e
    +        return
    +
    +    @classmethod
    +    def tearDownClass(cls):
    +        try:
    +            cleanup_resources(cls.apiclient, cls._cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +
    +    def setUp(self):
    +        self.apiclient = self.testClient.getApiClient()
    +        self.dbclient = self.testClient.getDbConnection()
    +        self.cleanup = []
    +
    +    def tearDown(self):
    +        try:
    +            cleanup_resources(self.apiclient, self.cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +        return
    +
    +    @attr(tags=["advanced", "basic"])
    +    def test_01_volume_snapshot(self):
    +        """ Test Volume (root) Snapshot
    +
    +        # 1. Deploy a VM on cluster wide primary storage.
    --- End diff --
    
    I don't think we need to create the VM on CWPS (cluster-wide primary storage). It can be created on any available storage. Please modify the comment.


---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26301600
  
    --- Diff: test/integration/testpaths/testpath_volume_snapshot.py ---
    @@ -0,0 +1,745 @@
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +""" Test cases for VM/Volume snapshot Test Path
    +"""
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase, unittest
    +from marvin.lib.utils import (cleanup_resources,
    +                              random_gen,
    +                              format_volume_to_ext3,
    +                              is_snapshot_on_nfs,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             ServiceOffering,
    +                             DiskOffering,
    +                             Template,
    +                             VirtualMachine,
    +                             Snapshot
    +                             )
    +from marvin.lib.common import (get_domain,
    +                               get_zone,
    +                               get_template,
    +                               list_volumes,
    +                               list_snapshots,
    +                               list_events,
    +                               )
    +
    +
    +import hashlib
    +from marvin.sshClient import SshClient
    +
    +from marvin.codes import PASS
    +
    +
    +def createChecksum(self, virtual_machine, disk, disk_type):
    +    """ Write data on the disk and return the md5 checksum"""
    +
    +    random_data_0 = random_gen(size=100)
    +    # creating checksum(MD5)
    +    m = hashlib.md5()
    +    m.update(random_data_0)
    +    checksum_random_data_0 = m.hexdigest()
    +    try:
    +        ssh_client = SshClient(
    +            virtual_machine.ssh_ip,
    +            virtual_machine.ssh_port,
    +            virtual_machine.username,
    +            virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH failed for VM: %s" %
    +                  e)
    +
    +    self.debug("Formatting volume: %s to ext3" % disk.id)
    +    # Format partition using ext3
    +    # Note that this is the second data disk partition of virtual machine
    +    # as it was already containing data disk before attaching the new volume,
    +    # Hence datadiskdevice_2
    +
    +    format_volume_to_ext3(
    +        ssh_client,
    +        self.testdata["volume_write_path"][
    +            virtual_machine.hypervisor][disk_type]
    +    )
    +    cmds = ["fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            "mkdir -p %s/%s/%s " % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            ),
    +            "echo %s > %s/%s/%s/%s" % (
    +                random_data_0,
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            ),
    +            "cat %s/%s/%s/%s" % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            )
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh_client.execute(c)
    +        self.debug(result)
    +
    +    # Unmount the storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh_client.execute(c)
    +
    +    return checksum_random_data_0
    +
    +
    +def compareChecksum(
    +        self,
    +        original_checksum,
    +        disk_type,
    +        virt_machine=None,
    +        disk=None,
    +        new_vm=False):
    +    """
    +    Create md5 checksum of the data present on the disk and compare
    +    it with the given checksum
    +    """
    +
    +    if disk_type == "datadiskdevice_1" and new_vm:
    +        new_virtual_machine = VirtualMachine.create(
    +            self.userapiclient,
    +            self.testdata["small"],
    +            templateid=self.template.id,
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.service_offering_cluster1.id,
    +            zoneid=self.zone.id,
    +            mode=self.zone.networktype
    +        )
    +
    +        new_virtual_machine.start(self.userapiclient)
    +
    +        self.debug("Attaching volume: %s to VM: %s" % (
    +            disk.id,
    +            new_virtual_machine.id
    +        ))
    +
    +        new_virtual_machine.attach_volume(
    +            self.apiclient,
    +            disk
    +        )
    +
    +        # Rebooting is required so that newly attached disks are detected
    +        self.debug("Rebooting : %s" % new_virtual_machine.id)
    +        new_virtual_machine.reboot(self.apiclient)
    +
    +    else:
    +        # If the disk is root disk then no need to create new VM
    +        # Just start the original machine on which root disk is
    +        new_virtual_machine = virt_machine
    +        if new_virtual_machine.state != "Running":
    +            new_virtual_machine.start(self.userapiclient)
    +
    +    try:
    +        # Login to VM to verify test directories and files
    +
    +        self.debug(
    +            "SSH into (Public IP: ) %s " % new_virtual_machine.ssh_ip)
    +        ssh = SshClient(
    +            new_virtual_machine.ssh_ip,
    +            new_virtual_machine.ssh_port,
    +            new_virtual_machine.username,
    +            new_virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH access failed for VM: %s, Exception: %s" %
    +                  (new_virtual_machine.ipaddress, e))
    +
    +    # Mount datadiskdevice_1 because this is the first data disk of the new
    +    # virtual machine
    +    cmds = ["blkid",
    +            "fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    new_virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh.execute(c)
    +        self.debug(result)
    +
    +    returned_data_0 = ssh.execute(
    +        "cat %s/%s/%s/%s" % (
    +            self.testdata["data_write_paths"]["mount_dir"],
    +            self.testdata["data_write_paths"]["sub_dir"],
    +            self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            self.testdata["data_write_paths"]["random_data"]
    +        ))
    +
    +    n = hashlib.md5()
    +    n.update(returned_data_0[0])
    +    checksum_returned_data_0 = n.hexdigest()
    +
    +    self.debug("returned_data_0: %s" % returned_data_0[0])
    +
    +    # Verify returned data
    +    self.assertEqual(
    +        original_checksum,
    +        checksum_returned_data_0,
    +        "Checksum does not match the checksum of the original data"
    +    )
    +
    +    # Unmount the Sec Storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh.execute(c)
    +
    +    if new_vm:
    +        new_virtual_machine.detach_volume(
    +            self.apiclient,
    +            disk
    +        )
    +
    +        new_virtual_machine.delete(self.apiclient)
    +
    +    return
    +
    +
    +class TestVolumeSnapshot(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        testClient = super(TestVolumeSnapshot, cls).getClsTestClient()
    +        cls.apiclient = testClient.getApiClient()
    +        cls.testdata = testClient.getParsedTestDataConfig()
    +        cls.hypervisor = cls.testClient.getHypervisorInfo()
    +
    +        # Get Zone, Domain and templates
    +        cls.domain = get_domain(cls.apiclient)
    +        cls.zone = get_zone(cls.apiclient, testClient.getZoneForTests())
    +
    +        cls.template = get_template(
    +            cls.apiclient,
    +            cls.zone.id,
    +            cls.testdata["ostype"])
    +
    +        cls._cleanup = []
    +
    +        if cls.hypervisor.lower() not in [
    +                "vmware",
    +                "kvm",
    +                "xenserver"]:
    +            raise unittest.SkipTest(
    +                "Storage migration not supported on %s" %
    +                cls.hypervisor)
    +
    +        try:
    +
    +            # Create an account
    +            cls.account = Account.create(
    +                cls.apiclient,
    +                cls.testdata["account"],
    +                domainid=cls.domain.id
    +            )
    +            cls._cleanup.append(cls.account)
    +
    +            # Create user api client of the account
    +            cls.userapiclient = testClient.getUserApiClient(
    +                UserName=cls.account.name,
    +                DomainName=cls.account.domain
    +            )
    +
    +            # Create Service offering
    +
    +            cls.service_offering_cluster1 = ServiceOffering.create(
    +                cls.apiclient,
    +                cls.testdata["service_offering"],
    +            )
    +            cls._cleanup.append(cls.service_offering_cluster1)
    +
    +            # Create Disk offering
    +            cls.disk_offering_cluster1 = DiskOffering.create(
    --- End diff --
    
    Change the disk offering name. Also it should not be specific to CWPS.
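
    For example, a minimal sketch assuming the stock "disk_offering" entry from the parsed test data (names are illustrative):

        cls.disk_offering = DiskOffering.create(
            cls.apiclient,
            cls.testdata["disk_offering"],
        )
        cls._cleanup.append(cls.disk_offering)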



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26833707
  
    --- Diff: tools/marvin/marvin/lib/base.py ---
    @@ -1143,9 +1149,6 @@ def register(cls, apiclient, services, zoneid=None,
             elif "projectid" in services:
                 cmd.projectid = services["projectid"]
     
    -        if details:
    --- End diff --
    
    Same as above



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by bhaisaab <gi...@git.apache.org>.
Github user bhaisaab commented on the pull request:

    https://github.com/apache/cloudstack/pull/117#issuecomment-78451273
  
    @pritisarap12 looks good, but I'm not enough of a test guru to review/merge it



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26301374
  
    --- Diff: test/integration/testpaths/testpath_volume_snapshot.py ---
    @@ -0,0 +1,745 @@
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +""" Test cases for VM/Volume snapshot Test Path
    +"""
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase, unittest
    +from marvin.lib.utils import (cleanup_resources,
    +                              random_gen,
    +                              format_volume_to_ext3,
    +                              is_snapshot_on_nfs,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             ServiceOffering,
    +                             DiskOffering,
    +                             Template,
    +                             VirtualMachine,
    +                             Snapshot
    +                             )
    +from marvin.lib.common import (get_domain,
    +                               get_zone,
    +                               get_template,
    +                               list_volumes,
    +                               list_snapshots,
    +                               list_events,
    +                               )
    +
    +
    +import hashlib
    +from marvin.sshClient import SshClient
    +
    +from marvin.codes import PASS
    +
    +
    +def createChecksum(self, virtual_machine, disk, disk_type):
    +    """ Write data on the disk and return the md5 checksum"""
    +
    +    random_data_0 = random_gen(size=100)
    +    # creating checksum(MD5)
    +    m = hashlib.md5()
    +    m.update(random_data_0)
    +    checksum_random_data_0 = m.hexdigest()
    +    try:
    +        ssh_client = SshClient(
    +            virtual_machine.ssh_ip,
    +            virtual_machine.ssh_port,
    +            virtual_machine.username,
    +            virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH failed for VM: %s" %
    +                  e)
    +
    +    self.debug("Formatting volume: %s to ext3" % disk.id)
    +    # Format partition using ext3
    +    # Note that this is the second data disk partition of virtual machine
    +    # as it was already containing data disk before attaching the new volume,
    +    # Hence datadiskdevice_2
    +
    +    format_volume_to_ext3(
    +        ssh_client,
    +        self.testdata["volume_write_path"][
    +            virtual_machine.hypervisor][disk_type]
    +    )
    +    cmds = ["fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            "mkdir -p %s/%s/%s " % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            ),
    +            "echo %s > %s/%s/%s/%s" % (
    +                random_data_0,
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            ),
    +            "cat %s/%s/%s/%s" % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            )
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh_client.execute(c)
    +        self.debug(result)
    +
    +    # Unmount the storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh_client.execute(c)
    +
    +    return checksum_random_data_0
    +
    +
    +def compareChecksum(
    +        self,
    +        original_checksum,
    +        disk_type,
    +        virt_machine=None,
    +        disk=None,
    +        new_vm=False):
    +    """
    +    Create md5 checksum of the data present on the disk and compare
    +    it with the given checksum
    +    """
    +
    +    if disk_type == "datadiskdevice_1" and new_vm:
    +        new_virtual_machine = VirtualMachine.create(
    +            self.userapiclient,
    +            self.testdata["small"],
    +            templateid=self.template.id,
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.service_offering_cluster1.id,
    +            zoneid=self.zone.id,
    +            mode=self.zone.networktype
    +        )
    +
    +        new_virtual_machine.start(self.userapiclient)
    +
    +        self.debug("Attaching volume: %s to VM: %s" % (
    +            disk.id,
    +            new_virtual_machine.id
    +        ))
    +
    +        new_virtual_machine.attach_volume(
    +            self.apiclient,
    +            disk
    +        )
    +
    +        # Rebooting is required so that newly attached disks are detected
    +        self.debug("Rebooting : %s" % new_virtual_machine.id)
    +        new_virtual_machine.reboot(self.apiclient)
    +
    +    else:
    +        # If the disk is root disk then no need to create new VM
    +        # Just start the original machine on which root disk is
    +        new_virtual_machine = virt_machine
    +        if new_virtual_machine.state != "Running":
    +            new_virtual_machine.start(self.userapiclient)
    +
    +    try:
    +        # Login to VM to verify test directories and files
    +
    +        self.debug(
    +            "SSH into (Public IP: ) %s " % new_virtual_machine.ssh_ip)
    +        ssh = SshClient(
    +            new_virtual_machine.ssh_ip,
    +            new_virtual_machine.ssh_port,
    +            new_virtual_machine.username,
    +            new_virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH access failed for VM: %s, Exception: %s" %
    +                  (new_virtual_machine.ipaddress, e))
    +
    +    # Mount datadiskdevice_1 because this is the first data disk of the new
    +    # virtual machine
    +    cmds = ["blkid",
    +            "fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    new_virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh.execute(c)
    +        self.debug(result)
    +
    +    returned_data_0 = ssh.execute(
    +        "cat %s/%s/%s/%s" % (
    +            self.testdata["data_write_paths"]["mount_dir"],
    +            self.testdata["data_write_paths"]["sub_dir"],
    +            self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            self.testdata["data_write_paths"]["random_data"]
    +        ))
    +
    +    n = hashlib.md5()
    +    n.update(returned_data_0[0])
    +    checksum_returned_data_0 = n.hexdigest()
    +
    +    self.debug("returned_data_0: %s" % returned_data_0[0])
    +
    +    # Verify returned data
    +    self.assertEqual(
    +        original_checksum,
    +        checksum_returned_data_0,
    +        "Checksum does not match the checksum of the original data"
    +    )
    +
    +    # Unmount the Sec Storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh.execute(c)
    +
    +    if new_vm:
    +        new_virtual_machine.detach_volume(
    +            self.apiclient,
    +            disk
    +        )
    +
    +        new_virtual_machine.delete(self.apiclient)
    +
    +    return
    +
    +
    +class TestVolumeSnapshot(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        testClient = super(TestVolumeSnapshot, cls).getClsTestClient()
    +        cls.apiclient = testClient.getApiClient()
    +        cls.testdata = testClient.getParsedTestDataConfig()
    +        cls.hypervisor = cls.testClient.getHypervisorInfo()
    +
    +        # Get Zone, Domain and templates
    +        cls.domain = get_domain(cls.apiclient)
    +        cls.zone = get_zone(cls.apiclient, testClient.getZoneForTests())
    +
    +        cls.template = get_template(
    +            cls.apiclient,
    +            cls.zone.id,
    +            cls.testdata["ostype"])
    +
    +        cls._cleanup = []
    +
    +        if cls.hypervisor.lower() not in [
    +                "vmware",
    +                "kvm",
    +                "xenserver"]:
    +            raise unittest.SkipTest(
    +                "Storage migration not supported on %s" %
    +                cls.hypervisor)
    +
    +        try:
    +
    +            # Create an account
    +            cls.account = Account.create(
    +                cls.apiclient,
    +                cls.testdata["account"],
    +                domainid=cls.domain.id
    +            )
    +            cls._cleanup.append(cls.account)
    +
    +            # Create user api client of the account
    +            cls.userapiclient = testClient.getUserApiClient(
    +                UserName=cls.account.name,
    +                DomainName=cls.account.domain
    +            )
    +
    +            # Create Service offering
    +
    --- End diff --
    
    Remove unnecessary empty lines



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26748249
  
    --- Diff: tools/marvin/marvin/lib/common.py ---
    @@ -1395,3 +1399,199 @@ def isNetworkDeleted(apiclient, networkid, timeout=600):
             time.sleep(60)
         #end while
         return networkDeleted
    +
    +
    +def createChecksum(self, virtual_machine, disk, disk_type):
    --- End diff --
    
    Please pass explicit parameters to the function rather than passing the self object. It's always better to pass explicit parameters than to wrap them all in one big object; it also makes the function definition clearer and improves the readability of the code.
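
    A stripped-down sketch of the idea (the helper and parameter names below are illustrative, not the final signature):

        import hashlib

        def create_checksum(random_data):
            """Return the md5 checksum of the given data (explicit parameter, no self object)."""
            m = hashlib.md5()
            m.update(random_data)
            return m.hexdigest()

        # The test class then passes only what the helper needs, e.g.:
        # checksum = create_checksum(random_data=random_gen(size=100))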



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by pritisarap12 <gi...@git.apache.org>.
Github user pritisarap12 closed the pull request at:

    https://github.com/apache/cloudstack/pull/117



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26301332
  
    --- Diff: test/integration/testpaths/testpath_volume_snapshot.py ---
    @@ -0,0 +1,745 @@
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +""" Test cases for VM/Volume snapshot Test Path
    +"""
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase, unittest
    +from marvin.lib.utils import (cleanup_resources,
    +                              random_gen,
    +                              format_volume_to_ext3,
    +                              is_snapshot_on_nfs,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             ServiceOffering,
    +                             DiskOffering,
    +                             Template,
    +                             VirtualMachine,
    +                             Snapshot
    +                             )
    +from marvin.lib.common import (get_domain,
    +                               get_zone,
    +                               get_template,
    +                               list_volumes,
    +                               list_snapshots,
    +                               list_events,
    +                               )
    +
    +
    +import hashlib
    +from marvin.sshClient import SshClient
    +
    +from marvin.codes import PASS
    +
    +
    +def createChecksum(self, virtual_machine, disk, disk_type):
    +    """ Write data on the disk and return the md5 checksum"""
    +
    +    random_data_0 = random_gen(size=100)
    +    # creating checksum(MD5)
    +    m = hashlib.md5()
    +    m.update(random_data_0)
    +    ckecksum_random_data_0 = m.hexdigest()
    +    try:
    +        ssh_client = SshClient(
    +            virtual_machine.ssh_ip,
    +            virtual_machine.ssh_port,
    +            virtual_machine.username,
    +            virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH failed for VM: %s" %
    +                  e)
    +
    +    self.debug("Formatting volume: %s to ext3" % disk.id)
    +    # Format partition using ext3
    +    # Note that this is the second data disk partition of virtual machine
    +    # as it was already containing data disk before attaching the new volume,
    +    # Hence datadiskdevice_2
    +
    +    format_volume_to_ext3(
    +        ssh_client,
    +        self.testdata["volume_write_path"][
    +            virtual_machine.hypervisor][disk_type]
    +    )
    +    cmds = ["fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            "mkdir -p %s/%s/%s " % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            ),
    +            "echo %s > %s/%s/%s/%s" % (
    +                random_data_0,
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            ),
    +            "cat %s/%s/%s/%s" % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            )
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh_client.execute(c)
    +        self.debug(result)
    +
    +    # Unmount the storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh_client.execute(c)
    +
    +    return ckecksum_random_data_0
    +
    +
    +def compareChecksum(
    --- End diff --
    
    Same for this function


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r27650755
  
    --- Diff: test/integration/testpaths/testpath_volume_recurring_snap.py ---
    @@ -0,0 +1,1015 @@
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +""" Test cases for VM/Volume recurring snapshot Test Path
    +"""
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase, unittest
    +from marvin.lib.utils import (cleanup_resources,
    +                              is_snapshot_on_nfs,
    +                              validateList
    +                              )
    +from marvin.lib.base import (Account,
    +                             ServiceOffering,
    +                             DiskOffering,
    +                             VirtualMachine,
    +                             SnapshotPolicy,
    +                             Snapshot,
    +                             Configurations
    +                             )
    +from marvin.lib.common import (get_domain,
    +                               get_zone,
    +                               get_template,
    +                               list_volumes,
    +                               list_snapshots,
    +                               list_snapshot_policy
    +                               )
    +
    +from marvin.codes import PASS
    +
    +import time
    +
    +
    +class TestVolumeSnapshot(cloudstackTestCase):
    --- End diff --
    
    Change the class name according to the functional group of tests.
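
    For example (the name is only a suggestion):

        class TestVolumeRecurringSnapshot(cloudstackTestCase):
            """Recurring VM/volume snapshot test path."""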



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on the pull request:

    https://github.com/apache/cloudstack/pull/117#issuecomment-83436486
  
    Hi Rohit,
    
    I am yet to review the updated pull request.
    
    Regards,
    Gaurav
    
    On Thu, Mar 19, 2015 at 2:22 PM, Rohit Yadav <no...@github.com>
    wrote:
    
    > @gauravaradhye <https://github.com/gauravaradhye> is it good to merge?
    >
    > —
    > Reply to this email directly or view it on GitHub
    > <https://github.com/apache/cloudstack/pull/117#issuecomment-83426630>.
    >




[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r27650534
  
    --- Diff: test/integration/testpaths/testpath_storage_migration.py ---
    @@ -1771,17 +1666,17 @@ def test_03_migrate_root_and_data_disk_nonlive_cwps_vmware(self):
             vm_cluster.start(self.userapiclient)
     
             compareChecksum(
    -            self,
    --- End diff --
    
    Good to see that "self" is not passed to the external module function. Good work!



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by pritisarap12 <gi...@git.apache.org>.
Github user pritisarap12 commented on the pull request:

    https://github.com/apache/cloudstack/pull/117#issuecomment-99002486
  
    Done rebasing the branch on upstream master.



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by pritisarap12 <gi...@git.apache.org>.
Github user pritisarap12 commented on the pull request:

    https://github.com/apache/cloudstack/pull/117#issuecomment-83990945
  
    Integrated the review changes.
    Test case result:
    
    Test Volume (root) Snapshot ... === TestName: test_01_volume_snapshot | Status : SUCCESS ===
    ok
    
    ----------------------------------------------------------------------
    Ran 1 test in 3265.552s
    
    OK



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r27650484
  
    --- Diff: test/integration/testpaths/testpath_storage_migration.py ---
    @@ -765,7 +597,7 @@ def test_01_migrate_root_and_data_disk_nonlive(self):
     
             # Ensure we can add data to newly added disks
             createChecksum(
    -            self,
    +            self.testdata,
    --- End diff --
    
    Mention parameterName=value; make the same change for all function calls.
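
    For instance, a call could look like this (argument names are illustrative, since the new helper signature is not shown in this hunk):

        createChecksum(
            testdata=self.testdata,
            virtual_machine=vm_cluster,
            disk=data_disk_1,
            disk_type="datadiskdevice_1"
        )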



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26301554
  
    --- Diff: test/integration/testpaths/testpath_volume_snapshot.py ---
    @@ -0,0 +1,745 @@
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +""" Test cases for VM/Volume snapshot Test Path
    +"""
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase, unittest
    +from marvin.lib.utils import (cleanup_resources,
    +                              random_gen,
    +                              format_volume_to_ext3,
    +                              is_snapshot_on_nfs,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             ServiceOffering,
    +                             DiskOffering,
    +                             Template,
    +                             VirtualMachine,
    +                             Snapshot
    +                             )
    +from marvin.lib.common import (get_domain,
    +                               get_zone,
    +                               get_template,
    +                               list_volumes,
    +                               list_snapshots,
    +                               list_events,
    +                               )
    +
    +
    +import hashlib
    +from marvin.sshClient import SshClient
    +
    +from marvin.codes import PASS
    +
    +
    +def createChecksum(self, virtual_machine, disk, disk_type):
    +    """ Write data on the disk and return the md5 checksum"""
    +
    +    random_data_0 = random_gen(size=100)
    +    # creating checksum(MD5)
    +    m = hashlib.md5()
    +    m.update(random_data_0)
    +    checksum_random_data_0 = m.hexdigest()
    +    try:
    +        ssh_client = SshClient(
    +            virtual_machine.ssh_ip,
    +            virtual_machine.ssh_port,
    +            virtual_machine.username,
    +            virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH failed for VM: %s" %
    +                  e)
    +
    +    self.debug("Formatting volume: %s to ext3" % disk.id)
    +    # Format partition using ext3
    +    # Note that this is the second data disk partition of virtual machine
    +    # as it was already containing data disk before attaching the new volume,
    +    # Hence datadiskdevice_2
    +
    +    format_volume_to_ext3(
    +        ssh_client,
    +        self.testdata["volume_write_path"][
    +            virtual_machine.hypervisor][disk_type]
    +    )
    +    cmds = ["fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            "mkdir -p %s/%s/%s " % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            ),
    +            "echo %s > %s/%s/%s/%s" % (
    +                random_data_0,
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            ),
    +            "cat %s/%s/%s/%s" % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            )
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh_client.execute(c)
    +        self.debug(result)
    +
    +    # Unmount the storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh_client.execute(c)
    +
    +    return checksum_random_data_0
    +
    +
    +def compareChecksum(
    +        self,
    +        original_checksum,
    +        disk_type,
    +        virt_machine=None,
    +        disk=None,
    +        new_vm=False):
    +    """
    +    Create md5 checksum of the data present on the disk and compare
    +    it with the given checksum
    +    """
    +
    +    if disk_type == "datadiskdevice_1" and new_vm:
    +        new_virtual_machine = VirtualMachine.create(
    +            self.userapiclient,
    +            self.testdata["small"],
    +            templateid=self.template.id,
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.service_offering_cluster1.id,
    +            zoneid=self.zone.id,
    +            mode=self.zone.networktype
    +        )
    +
    +        new_virtual_machine.start(self.userapiclient)
    +
    +        self.debug("Attaching volume: %s to VM: %s" % (
    +            disk.id,
    +            new_virtual_machine.id
    +        ))
    +
    +        new_virtual_machine.attach_volume(
    +            self.apiclient,
    +            disk
    +        )
    +
    +        # Rebooting is required so that newly attached disks are detected
    +        self.debug("Rebooting : %s" % new_virtual_machine.id)
    +        new_virtual_machine.reboot(self.apiclient)
    +
    +    else:
    +        # If the disk is root disk then no need to create new VM
    +        # Just start the original machine on which root disk is
    +        new_virtual_machine = virt_machine
    +        if new_virtual_machine.state != "Running":
    +            new_virtual_machine.start(self.userapiclient)
    +
    +    try:
    +        # Login to VM to verify test directories and files
    +
    +        self.debug(
    +            "SSH into (Public IP: ) %s " % new_virtual_machine.ssh_ip)
    +        ssh = SshClient(
    +            new_virtual_machine.ssh_ip,
    +            new_virtual_machine.ssh_port,
    +            new_virtual_machine.username,
    +            new_virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH access failed for VM: %s, Exception: %s" %
    +                  (new_virtual_machine.ipaddress, e))
    +
    +    # Mount datadiskdevice_1 because this is the first data disk of the new
    +    # virtual machine
    +    cmds = ["blkid",
    +            "fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    new_virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh.execute(c)
    +        self.debug(result)
    +
    +    returned_data_0 = ssh.execute(
    +        "cat %s/%s/%s/%s" % (
    +            self.testdata["data_write_paths"]["mount_dir"],
    +            self.testdata["data_write_paths"]["sub_dir"],
    +            self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            self.testdata["data_write_paths"]["random_data"]
    +        ))
    +
    +    n = hashlib.md5()
    +    n.update(returned_data_0[0])
    +    checksum_returned_data_0 = n.hexdigest()
    +
    +    self.debug("returned_data_0: %s" % returned_data_0[0])
    +
    +    # Verify returned data
    +    self.assertEqual(
    +        original_checksum,
    +        checksum_returned_data_0,
    +        "Checksum does not match the checksum of the original data"
    +    )
    +
    +    # Unmount the Sec Storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh.execute(c)
    +
    +    if new_vm:
    +        new_virtual_machine.detach_volume(
    +            self.apiclient,
    +            disk
    +        )
    +
    +        new_virtual_machine.delete(self.apiclient)
    +
    +    return
    +
    +
    +class TestVolumeSnapshot(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        testClient = super(TestVolumeSnapshot, cls).getClsTestClient()
    +        cls.apiclient = testClient.getApiClient()
    +        cls.testdata = testClient.getParsedTestDataConfig()
    +        cls.hypervisor = cls.testClient.getHypervisorInfo()
    +
    +        # Get Zone, Domain and templates
    +        cls.domain = get_domain(cls.apiclient)
    +        cls.zone = get_zone(cls.apiclient, testClient.getZoneForTests())
    +
    +        cls.template = get_template(
    +            cls.apiclient,
    +            cls.zone.id,
    +            cls.testdata["ostype"])
    +
    +        cls._cleanup = []
    +
    +        if cls.hypervisor.lower() not in [
    +                "vmware",
    +                "kvm",
    +                "xenserver"]:
    +            raise unittest.SkipTest(
    +                "Storage migration not supported on %s" %
    +                cls.hypervisor)
    +
    +        try:
    +
    +            # Create an account
    +            cls.account = Account.create(
    +                cls.apiclient,
    +                cls.testdata["account"],
    +                domainid=cls.domain.id
    +            )
    +            cls._cleanup.append(cls.account)
    +
    +            # Create user api client of the account
    +            cls.userapiclient = testClient.getUserApiClient(
    +                UserName=cls.account.name,
    +                DomainName=cls.account.domain
    +            )
    +
    +            # Create Service offering
    +
    +            cls.service_offering_cluster1 = ServiceOffering.create(
    --- End diff --
    
    Modify the service offering name. Is this specific to CWPS? If not, change it to the default name.
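
    For example, a neutral sketch using the default "service_offering" entry from the test data:

        cls.service_offering = ServiceOffering.create(
            cls.apiclient,
            cls.testdata["service_offering"],
        )
        cls._cleanup.append(cls.service_offering)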


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r27650373
  
    --- Diff: test/integration/testpaths/testpath_storage_migration.py ---
    @@ -536,8 +353,10 @@ def test_01_migrate_root_and_data_disk_nonlive(self):
     
             In addition to this,
             Create snapshot of root and data disk after migration.
    -        For root disk, create template from snapshot, deploy Vm and compare checksum
    -        For data disk, Create volume from snapshot, attach to VM and compare checksum
    +        For root disk, create template from snapshot, \
    --- End diff --
    
    No need for the backslash as it is already inside a comment block.
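
    For example, the docstring lines can simply wrap without a continuation character (illustrative):

        For root disk, create template from snapshot,
        deploy Vm and compare checksum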



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26301742
  
    --- Diff: test/integration/testpaths/testpath_volume_snapshot.py ---
    @@ -0,0 +1,745 @@
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +""" Test cases for VM/Volume snapshot Test Path
    +"""
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase, unittest
    +from marvin.lib.utils import (cleanup_resources,
    +                              random_gen,
    +                              format_volume_to_ext3,
    +                              is_snapshot_on_nfs,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             ServiceOffering,
    +                             DiskOffering,
    +                             Template,
    +                             VirtualMachine,
    +                             Snapshot
    +                             )
    +from marvin.lib.common import (get_domain,
    +                               get_zone,
    +                               get_template,
    +                               list_volumes,
    +                               list_snapshots,
    +                               list_events,
    +                               )
    +
    +
    +import hashlib
    +from marvin.sshClient import SshClient
    +
    +from marvin.codes import PASS
    +
    +
    +def createChecksum(self, virtual_machine, disk, disk_type):
    +    """ Write data on the disk and return the md5 checksum"""
    +
    +    random_data_0 = random_gen(size=100)
    +    # creating checksum(MD5)
    +    m = hashlib.md5()
    +    m.update(random_data_0)
    +    checksum_random_data_0 = m.hexdigest()
    +    try:
    +        ssh_client = SshClient(
    +            virtual_machine.ssh_ip,
    +            virtual_machine.ssh_port,
    +            virtual_machine.username,
    +            virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH failed for VM: %s" %
    +                  e)
    +
    +    self.debug("Formatting volume: %s to ext3" % disk.id)
    +    # Format partition using ext3
    +    # Note that this is the second data disk partition of virtual machine
    +    # as it was already containing data disk before attaching the new volume,
    +    # Hence datadiskdevice_2
    +
    +    format_volume_to_ext3(
    +        ssh_client,
    +        self.testdata["volume_write_path"][
    +            virtual_machine.hypervisor][disk_type]
    +    )
    +    cmds = ["fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            "mkdir -p %s/%s/%s " % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            ),
    +            "echo %s > %s/%s/%s/%s" % (
    +                random_data_0,
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            ),
    +            "cat %s/%s/%s/%s" % (
    +                self.testdata["data_write_paths"]["mount_dir"],
    +                self.testdata["data_write_paths"]["sub_dir"],
    +                self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +                self.testdata["data_write_paths"]["random_data"]
    +            )
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh_client.execute(c)
    +        self.debug(result)
    +
    +    # Unmount the storage
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh_client.execute(c)
    +
    +    return checksum_random_data_0
    +
    +
    +def compareChecksum(
    +        self,
    +        original_checksum,
    +        disk_type,
    +        virt_machine=None,
    +        disk=None,
    +        new_vm=False):
    +    """
    +    Create md5 checksum of the data present on the disk and compare
    +    it with the given checksum
    +    """
    +
    +    if disk_type == "datadiskdevice_1" and new_vm:
    +        new_virtual_machine = VirtualMachine.create(
    +            self.userapiclient,
    +            self.testdata["small"],
    +            templateid=self.template.id,
    +            accountid=self.account.name,
    +            domainid=self.account.domainid,
    +            serviceofferingid=self.service_offering_cluster1.id,
    +            zoneid=self.zone.id,
    +            mode=self.zone.networktype
    +        )
    +
    +        new_virtual_machine.start(self.userapiclient)
    +
    +        self.debug("Attaching volume: %s to VM: %s" % (
    +            disk.id,
    +            new_virtual_machine.id
    +        ))
    +
    +        new_virtual_machine.attach_volume(
    +            self.apiclient,
    +            disk
    +        )
    +
    +        # Rebooting is required so that newly attached disks are detected
    +        self.debug("Rebooting : %s" % new_virtual_machine.id)
    +        new_virtual_machine.reboot(self.apiclient)
    +
    +    else:
    +        # If the disk is a root disk there is no need to create a new VM;
    +        # just start the original machine that the root disk belongs to
    +        new_virtual_machine = virt_machine
    +        if new_virtual_machine.state != "Running":
    +            new_virtual_machine.start(self.userapiclient)
    +
    +    try:
    +        # Login to VM to verify test directories and files
    +
    +        self.debug(
    +            "SSH into VM (Public IP: %s)" % new_virtual_machine.ssh_ip)
    +        ssh = SshClient(
    +            new_virtual_machine.ssh_ip,
    +            new_virtual_machine.ssh_port,
    +            new_virtual_machine.username,
    +            new_virtual_machine.password
    +        )
    +    except Exception as e:
    +        self.fail("SSH access failed for VM: %s, Exception: %s" %
    +                  (new_virtual_machine.ipaddress, e))
    +
    +    # Mount datadiskdevice_1 because this is the first data disk of the new
    +    # virtual machine
    +    cmds = ["blkid",
    +            "fdisk -l",
    +            "mkdir -p %s" % self.testdata["data_write_paths"]["mount_dir"],
    +            "mount -t ext3 %s1 %s" % (
    +                self.testdata["volume_write_path"][
    +                    new_virtual_machine.hypervisor][disk_type],
    +                self.testdata["data_write_paths"]["mount_dir"]
    +            ),
    +            ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        result = ssh.execute(c)
    +        self.debug(result)
    +
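    +    # Read back the random data file written earlier by createChecksum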
    +    returned_data_0 = ssh.execute(
    +        "cat %s/%s/%s/%s" % (
    +            self.testdata["data_write_paths"]["mount_dir"],
    +            self.testdata["data_write_paths"]["sub_dir"],
    +            self.testdata["data_write_paths"]["sub_lvl_dir1"],
    +            self.testdata["data_write_paths"]["random_data"]
    +        ))
    +
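    +    # Compute the md5 checksum of the data read back from the disk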
    +    n = hashlib.md5()
    +    n.update(returned_data_0[0])
    +    checksum_returned_data_0 = n.hexdigest()
    +
    +    self.debug("returned_data_0: %s" % returned_data_0[0])
    +
    +    # Verify returned data
    +    self.assertEqual(
    +        original_checksum,
    +        checksum_returned_data_0,
    +        "Checksum does not match the checksum of the original data"
    +    )
    +
    +    # Unmount the volume
    +    cmds = [
    +        "umount %s" % (self.testdata["data_write_paths"]["mount_dir"]),
    +    ]
    +
    +    for c in cmds:
    +        self.debug("Command: %s" % c)
    +        ssh.execute(c)
    +
    +    if new_vm:
    +        new_virtual_machine.detach_volume(
    +            self.apiclient,
    +            disk
    +        )
    +
    +        new_virtual_machine.delete(self.apiclient)
    +
    +    return
    +
    +
    +class TestVolumeSnapshot(cloudstackTestCase):
    +
    +    @classmethod
    +    def setUpClass(cls):
    +        testClient = super(TestVolumeSnapshot, cls).getClsTestClient()
    +        cls.apiclient = testClient.getApiClient()
    +        cls.testdata = testClient.getParsedTestDataConfig()
    +        cls.hypervisor = cls.testClient.getHypervisorInfo()
    +
    +        # Get Zone, Domain and templates
    +        cls.domain = get_domain(cls.apiclient)
    +        cls.zone = get_zone(cls.apiclient, testClient.getZoneForTests())
    +
    +        cls.template = get_template(
    +            cls.apiclient,
    +            cls.zone.id,
    +            cls.testdata["ostype"])
    +
    +        cls._cleanup = []
    +
    +        if cls.hypervisor.lower() not in [
    +                "vmware",
    +                "kvm",
    +                "xenserver"]:
    +            raise unittest.SkipTest(
    +                "Volume snapshot test path not supported on %s" %
    +                cls.hypervisor)
    +
    +        try:
    +
    +            # Create an account
    +            cls.account = Account.create(
    +                cls.apiclient,
    +                cls.testdata["account"],
    +                domainid=cls.domain.id
    +            )
    +            cls._cleanup.append(cls.account)
    +
    +            # Create user api client of the account
    +            cls.userapiclient = testClient.getUserApiClient(
    +                UserName=cls.account.name,
    +                DomainName=cls.account.domain
    +            )
    +
    +            # Create Service offering
    +
    +            cls.service_offering_cluster1 = ServiceOffering.create(
    +                cls.apiclient,
    +                cls.testdata["service_offering"],
    +            )
    +            cls._cleanup.append(cls.service_offering_cluster1)
    +
    +            # Create Disk offering
    +            cls.disk_offering_cluster1 = DiskOffering.create(
    +                cls.apiclient,
    +                cls.testdata["disk_offering"],
    +            )
    +            cls._cleanup.append(cls.disk_offering_cluster1)
    +
    +        except Exception as e:
    +            cls.tearDownClass()
    +            raise e
    +        return
    +
    +    @classmethod
    +    def tearDownClass(cls):
    +        try:
    +            cleanup_resources(cls.apiclient, cls._cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +
    +    def setUp(self):
    +        self.apiclient = self.testClient.getApiClient()
    +        self.dbclient = self.testClient.getDbConnection()
    +        self.cleanup = []
    +
    +    def tearDown(self):
    +        try:
    +            cleanup_resources(self.apiclient, self.cleanup)
    +        except Exception as e:
    +            raise Exception("Warning: Exception during cleanup : %s" % e)
    +        return
    +
    +    @attr(tags=["advanced", "basic"])
    +    def test_01_volume_snapshot(self):
    +        """ Test Volume (root) Snapshot
    +
    +        # 1. Deploy a VM on cluster wide primary storage.
    +        # 2. Take snapshot on root disk
    +        # 3. Create Template from a Snapshot
    +        # 4. Deploy a VM using the Template -T1
    +        # 5. Delete Snapshot and Deploy a Linux VM from the \
    --- End diff --
    
    Please add verification steps stating what exactly we are trying to verify.
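
    For illustration only, a minimal sketch of what such a docstring could look
    like (the verification points below are assumptions about the intent of each
    step, not the author's wording):

        def test_01_volume_snapshot(self):
            """ Test Volume (root) Snapshot

            # 1. Deploy a VM on cluster wide primary storage.
            #    Verify: the VM reaches the "Running" state.
            # 2. Take snapshot on root disk.
            #    Verify: the snapshot is listed in "BackedUp" state and is
            #    present on secondary storage (is_snapshot_on_nfs returns True).
            # 3. Create Template from the Snapshot.
            #    Verify: the template is listed and a VM deployed from it
            #    reaches the "Running" state.
            """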



[GitHub] cloudstack pull request: CLOUDSTACK-8380: Adding automation test c...

Posted by gauravaradhye <gi...@git.apache.org>.
Github user gauravaradhye commented on a diff in the pull request:

    https://github.com/apache/cloudstack/pull/117#discussion_r26301316
  
    --- Diff: test/integration/testpaths/testpath_volume_snapshot.py ---
    @@ -0,0 +1,745 @@
    +# Licensed to the Apache Software Foundation (ASF) under one
    +# or more contributor license agreements.  See the NOTICE file
    +# distributed with this work for additional information
    +# regarding copyright ownership.  The ASF licenses this file
    +# to you under the Apache License, Version 2.0 (the
    +# "License"); you may not use this file except in compliance
    +# with the License.  You may obtain a copy of the License at
    +#
    +#   http://www.apache.org/licenses/LICENSE-2.0
    +#
    +# Unless required by applicable law or agreed to in writing,
    +# software distributed under the License is distributed on an
    +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    +# KIND, either express or implied.  See the License for the
    +# specific language governing permissions and limitations
    +# under the License.
    +""" Test cases for VM/Volume snapshot Test Path
    +"""
    +from nose.plugins.attrib import attr
    +from marvin.cloudstackTestCase import cloudstackTestCase, unittest
    +from marvin.lib.utils import (cleanup_resources,
    +                              random_gen,
    +                              format_volume_to_ext3,
    +                              is_snapshot_on_nfs,
    +                              validateList)
    +from marvin.lib.base import (Account,
    +                             ServiceOffering,
    +                             DiskOffering,
    +                             Template,
    +                             VirtualMachine,
    +                             Snapshot
    +                             )
    +from marvin.lib.common import (get_domain,
    +                               get_zone,
    +                               get_template,
    +                               list_volumes,
    +                               list_snapshots,
    +                               list_events,
    +                               )
    +
    +
    +import hashlib
    +from marvin.sshClient import SshClient
    +
    +from marvin.codes import PASS
    +
    +
    +def createChecksum(self, virtual_machine, disk, disk_type):
    --- End diff --
    
    Can we move this function to the common.py file so that it can be used by multiple test cases? It is also used in the storage migration test path.
    Please move it to common.py and update all test paths accordingly.
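
    For example (only a sketch, assuming the helpers are moved into
    marvin/lib/common.py as suggested here -- this import does not exist yet),
    each test path could then drop its local copy and simply do:

        # Hypothetical import, valid only after createChecksum/compareChecksum
        # are relocated to marvin/lib/common.py.
        from marvin.lib.common import createChecksum, compareChecksum

        # Illustrative usage inside a test case; virtual_machine and data_disk
        # are placeholders for objects the test already creates.
        checksum = createChecksum(self, virtual_machine, data_disk,
                                  "datadiskdevice_1")
        compareChecksum(self, checksum, "datadiskdevice_1",
                        disk=data_disk, new_vm=True)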

