Posted to notifications@libcloud.apache.org by GitBox <gi...@apache.org> on 2020/08/04 08:19:12 UTC

[GitHub] [libcloud] Eis-D-Z opened a new pull request #1481: V sphere drv

Eis-D-Z opened a new pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481


   ## Driver for VMware's vSphere cloud
   
   ### Description
   This contains two drivers: one that works with VMware's SOAP API through a Python client called pyVmomi, and one that works with VMware's newly introduced REST API. The former is the more complete of the two, since the REST API still lacks functionality; even so, all common driver methods except list_images work through it. For the scope of this work, images are either OVF files or VM templates. Some of these templates are not returned by the REST endpoint, so the SOAP driver is used for list_images as well, rather than shipping an incomplete method. Snapshot and console methods also require the SOAP driver. If the pyVmomi dependency is missing, using these methods results in an ImportError.
   
   The driver also contains async code. During the driver's design and testing, an environment with over 3,000 VMs was used, among others. This led to the realization that the usual way of writing list_nodes becomes slow very quickly for a large number of nodes, especially when a _to_node helper method issues additional requests for each node. The list_nodes method has a "flag" argument that controls whether to proceed asynchronously, but the async code requires Python 3.5+. The included tests check only the REST driver methods. Despite these drawbacks, the authors decided to publish this driver, since vSphere is popular and the driver works very well.
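   
   As a quick illustration, here is a minimal usage sketch of the SOAP driver (the hostname and credentials are placeholders):
   
   ```python
   from libcloud.compute.drivers.vsphere import VSphereNodeDriver
   
   # Placeholder host and credentials; pyvmomi must be installed or the
   # constructor raises ImportError.
   driver = VSphereNodeDriver(host='vcenter.example.com',
                              username='administrator@vsphere.local',
                              password='secret')
   
   # The VM-to-node conversion runs concurrently; enhance=False skips the
   # extra lookups for creation dates and source images.
   for node in driver.list_nodes(enhance=True):
       print(node.name, node.state)
   ```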
    
   
   ### Status
   - done, ready for review
   
   ### Checklist (tick everything that applies)
   
   - [x] [Code linting](http://libcloud.readthedocs.org/en/latest/development.html#code-style-guide) (required, can be done after the PR checks)
   - [ ] Documentation
   - [x] [Tests](http://libcloud.readthedocs.org/en/latest/testing.html)
   - [x] [ICLA](http://libcloud.readthedocs.org/en/latest/development.html#contributing-bigger-changes) (required for bigger changes)
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [libcloud] Eis-D-Z commented on pull request #1481: V sphere drv

Posted by GitBox <gi...@apache.org>.
Eis-D-Z commented on pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481#issuecomment-705488736


   I tried to have a clean upstream branch before adding the driver; I had a look at the changes and only driver-related files were changed, so I'm sorry to hear you had conflicts. Thank you very much!


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [libcloud] Eis-D-Z commented on pull request #1481: V sphere drv

Posted by GitBox <gi...@apache.org>.
Eis-D-Z commented on pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481#issuecomment-680119726


   I personally expect that ideally, at some point, even the request logic will become async for libcloud. Until then, yes, I agree that `list_nodes()` should have an `async_list_nodes`, and that the event loop is better set at the driver level, since the driver might not be on the main thread, or someone might want two drivers on separate threads, etc.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [libcloud] Kami edited a comment on pull request #1481: V sphere drv

Posted by GitBox <gi...@apache.org>.
Kami edited a comment on pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481#issuecomment-674256104


   Thanks for the contribution.
   
   For now, this in-line async code should probably work, but we should come up with a standardized convention for handling async across different drivers in a consistent manner.
   
   Perhaps another set of methods prefixed with ``async_`` (e.g. ``async_list_nodes``, ``async_reboot_node``, etc.). We also need a way for the user to specify the event loop to use.
   
   This could be done at the global level (e.g. ``libcloud.async.set_event_loop(loop)``), and perhaps also at the driver level, in case the user wishes to use a different loop for different drivers.
   
   What do you think?
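   
   For illustration, a rough sketch of what that convention could look like (hypothetical names - none of this exists in libcloud yet):
   
   ```python
   import asyncio
   
   class ExampleDriver:
       # Hypothetical sketch of the convention discussed above; the names
       # are placeholders, not libcloud API.
       def __init__(self, event_loop=None):
           # A driver-level loop, so drivers running on non-main threads
           # (or two drivers on separate threads) don't share one loop.
           self._loop = event_loop or asyncio.new_event_loop()
   
       async def async_list_nodes(self):
           # A real driver would gather per-node work here.
           return []
   
       def list_nodes(self):
           # Sync wrapper that drives the async implementation.
           return self._loop.run_until_complete(self.async_list_nodes())
   ```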


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [libcloud] Kami commented on a change in pull request #1481: V sphere drv

Posted by GitBox <gi...@apache.org>.
Kami commented on a change in pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481#discussion_r479781978



##########
File path: libcloud/compute/drivers/vsphere.py
##########
@@ -0,0 +1,1992 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+VMware vSphere driver. Uses pyvmomi - https://github.com/vmware/pyvmomi
+Code inspired by https://github.com/vmware/pyvmomi-community-samples
+
+Authors: Dimitris Moraitis, Alex Tsiliris, Markos Gogoulos
+"""
+
+import time
+import logging
+import json
+import base64
+import warnings
+import asyncio
+import ssl
+import functools
+import itertools
+import hashlib
+
+try:
+    from pyVim import connect
+    from pyVmomi import vim, vmodl, VmomiSupport
+    from pyVim.task import WaitForTask
+    # mark the optional dependency as importable; checked in __init__
+    pyvmomi = True
+except ImportError:
+    pyvmomi = None
+
+import atexit
+
+
+from libcloud.common.types import InvalidCredsError, LibcloudError
+from libcloud.compute.base import NodeDriver
+from libcloud.compute.base import Node, NodeSize
+from libcloud.compute.base import NodeImage, NodeLocation
+from libcloud.compute.types import NodeState, Provider
+from libcloud.utils.networking import is_public_subnet
+from libcloud.utils.py3 import httplib
+from libcloud.common.types import ProviderError
+from libcloud.common.exceptions import BaseHTTPError
+from libcloud.common.base import JsonResponse, ConnectionKey
+
+logger = logging.getLogger('libcloud.compute.drivers.vsphere')
+
+
+def recurse_snapshots(snapshot_list):
+    ret = []
+    for s in snapshot_list:
+        ret.append(s)
+        ret += recurse_snapshots(getattr(s, 'childSnapshotList', []))
+    return ret
+
+
+def format_snapshots(snapshot_list):
+    ret = []
+    for s in snapshot_list:
+        ret.append({
+            'id': s.id,
+            'name': s.name,
+            'description': s.description,
+            'created': s.createTime.strftime('%Y-%m-%d %H:%M'),
+            'state': s.state})
+    return ret
+
+
+# 6.5 and older, probably won't work on anything earlier than 4.x
+class VSphereNodeDriver(NodeDriver):
+    name = 'VMware vSphere'
+    website = 'http://www.vmware.com/products/vsphere/'
+    type = Provider.VSPHERE
+
+    NODE_STATE_MAP = {
+        'poweredOn': NodeState.RUNNING,
+        'poweredOff': NodeState.STOPPED,
+        'suspended': NodeState.SUSPENDED,
+    }
+
+    def __init__(self, host, username, password, port=443, ca_cert=None):
+        """Initialize a connection by providing a hostname,
+        username and password
+        """
+        if pyvmomi is None:
+            raise ImportError('Missing "pyvmomi" dependency. '
+                              'You can install it '
+                              'using pip - pip install pyvmomi')
+        self.host = host
+        try:
+            if ca_cert is None:
+                self.connection = connect.SmartConnect(
+                    host=host, port=port, user=username, pwd=password,
+                )
+            else:
+                context = ssl.create_default_context(cafile=ca_cert)
+                self.connection = connect.SmartConnect(
+                    host=host, port=port, user=username, pwd=password,
+                    sslContext=context
+                )
+            atexit.register(connect.Disconnect, self.connection)
+        except Exception as exc:
+            error_message = str(exc).lower()
+            if 'incorrect user name' in error_message:
+                raise InvalidCredsError('Check your username and '
+                                        'password are valid')
+            if 'connection refused' in error_message or 'is not a vim server' \
+                                                        in error_message:
+                raise LibcloudError('Check that the host provided is a '
+                                    'vSphere installation')
+            if 'name or service not known' in error_message:
+                raise LibcloudError(
+                    'Check that the vSphere host is accessible')
+            if 'certificate verify failed' in error_message:
+                # bypass self signed certificates
+                try:
+                    context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
+                    context.verify_mode = ssl.CERT_NONE
+                except ImportError:
+                    raise ImportError('To use self signed certificates, '
+                                      'please upgrade to python 2.7.11 and '
+                                      'pyvmomi 6.0.0+')
+
+                self.connection = connect.SmartConnect(
+                    host=host, port=port, user=username, pwd=password,
+                    sslContext=context
+                )
+                atexit.register(connect.Disconnect, self.connection)
+            else:
+                raise LibcloudError('Cannot connect to vSphere')
+
+    def list_locations(self, ex_show_hosts_in_drs=True):
+        """
+        Lists locations
+        """
+        content = self.connection.RetrieveContent()
+
+        potential_locations = [dc for dc in
+                               content.viewManager.CreateContainerView(
+                                   content.rootFolder, [
+                                       vim.ClusterComputeResource,
+                                       vim.HostSystem],
+                                   recursive=True).view]
+
+        # Add hosts and clusters with DRS enabled
+        locations = []
+        hosts_all = []
+        clusters = []
+        for location in potential_locations:
+            if isinstance(location, vim.HostSystem):
+                hosts_all.append(location)
+            elif isinstance(location, vim.ClusterComputeResource):
+                if location.configuration.drsConfig.enabled:
+                    clusters.append(location)
+        if ex_show_hosts_in_drs:
+            hosts = hosts_all
+        else:
+            hosts_filter = [host for cluster in clusters
+                            for host in cluster.host]
+            hosts = [host for host in hosts_all if host not in hosts_filter]
+
+        for cluster in clusters:
+            locations.append(self._to_location(cluster))
+        for host in hosts:
+            locations.append(self._to_location(host))
+        return locations
+
+    def _to_location(self, data):
+        try:
+            if isinstance(data, vim.HostSystem):
+                extra = {
+                    "type": "host",
+                    "state": data.runtime.connectionState,
+                    "hypervisor": data.config.product.fullName,
+                    "vendor": data.hardware.systemInfo.vendor,
+                    "model": data.hardware.systemInfo.model,
+                    "ram": data.hardware.memorySize,
+                    "cpu": {
+                        "packages": data.hardware.cpuInfo.numCpuPackages,
+                        "cores": data.hardware.cpuInfo.numCpuCores,
+                        "threads": data.hardware.cpuInfo.numCpuThreads,
+                    },
+                    "uptime": data.summary.quickStats.uptime,
+                    "parent": str(data.parent)
+                }
+            elif isinstance(data, vim.ClusterComputeResource):
+                extra = {
+                    "type": "cluster",
+                    "overallStatus": data.overallStatus,
+                    "drs": data.configuration.drsConfig.enabled,
+                    'hosts': [host.name for host in data.host],
+                    'parent': str(data.parent)
+                }
+        except AttributeError as exc:
+            logger.error('Cannot convert location %s: %r' % (data.name, exc))
+            extra = {}
+        return NodeLocation(id=data.name, name=data.name, country=None,
+                            extra=extra, driver=self)
+
+    def ex_list_networks(self):
+        """
+        List networks
+        """
+        content = self.connection.RetrieveContent()
+        networks = content.viewManager.CreateContainerView(
+            content.rootFolder,
+            [vim.Network],
+            recursive=True
+        ).view
+
+        return [self._to_network(network) for network in networks]
+
+    def _to_network(self, data):
+        summary = data.summary
+        extra = {
+            'hosts': [h.name for h in data.host],
+            'ip_pool_name': summary.ipPoolName,
+            'ip_pool_id': summary.ipPoolId,
+            'accessible': summary.accessible
+        }
+        return VSphereNetwork(id=data.name, name=data.name, extra=extra)
+
+    def list_sizes(self):
+        """
+        Returns sizes
+        """
+        return []
+
+    def list_images(self, location=None, folder_ids=[]):
+        """
+        Lists VM templates as images.
+        If folder_ids is given, only images contained
+        in those folders are listed.
+        """
+
+        images = []
+        if folder_ids:
+            vms = []
+            for folder_id in folder_ids:
+                folder_object = self._get_item_by_moid('Folder', folder_id)
+                vms.extend(folder_object.childEntity)
+        else:
+            content = self.connection.RetrieveContent()
+            vms = content.viewManager.CreateContainerView(
+                content.rootFolder,
+                [vim.VirtualMachine],
+                recursive=True
+            ).view
+
+        for vm in vms:
+            if vm.config and vm.config.template:
+                images.append(self._to_image(vm))
+
+        return images
+
+    def _to_image(self, data):
+        summary = data.summary
+        name = summary.config.name
+        uuid = summary.config.instanceUuid
+        memory = summary.config.memorySizeMB
+        cpus = summary.config.numCpu
+        operating_system = summary.config.guestFullName
+        os_type = 'unix'
+        if 'Microsoft' in str(operating_system):
+            os_type = 'windows'
+        annotation = summary.config.annotation
+        extra = {
+            "path": summary.config.vmPathName,
+            "operating_system": operating_system,
+            "os_type": os_type,
+            "memory_MB": memory,
+            "cpus": cpus,
+            "overallStatus": str(summary.overallStatus),
+            "metadata": {},
+            "type": "template_6_5",
+            "disk_size": int(summary.storage.committed) // (1024**3),
+            'datastore': data.datastore[0].info.name
+        }
+
+        boot_time = summary.runtime.bootTime
+        if boot_time:
+            extra['boot_time'] = boot_time.isoformat()
+        if annotation:
+            extra['annotation'] = annotation
+
+        for custom_field in data.customValue:
+            key_id = custom_field.key
+            key = self.find_custom_field_key(key_id)
+            extra["metadata"][key] = custom_field.value
+
+        return NodeImage(id=uuid, name=name, driver=self,
+                         extra=extra)
+
+    def _collect_properties(self, content, view_ref, obj_type, path_set=None,
+                            include_mors=False):
+        """
+        Collect properties for managed objects from a view ref
+        Check the vSphere API documentation for example on retrieving
+        object properties:
+            - http://goo.gl/erbFDz
+        Args:
+            content     (ServiceInstance): ServiceInstance content
+            view_ref (pyVmomi.vim.view.*): Starting point of inventory
+                                           navigation
+            obj_type      (pyVmomi.vim.*): Type of managed object
+            path_set               (list): List of properties to retrieve
+            include_mors           (bool): If True include the managed objects
+                                        refs in the result
+        Returns:
+            A list of properties for the managed objects
+        """
+        collector = content.propertyCollector
+
+        # Create object specification to define the starting point of
+        # inventory navigation
+        obj_spec = vmodl.query.PropertyCollector.ObjectSpec()
+        obj_spec.obj = view_ref
+        obj_spec.skip = True
+
+        # Create a traversal specification to identify the path for collection
+        traversal_spec = vmodl.query.PropertyCollector.TraversalSpec()
+        traversal_spec.name = 'traverseEntities'
+        traversal_spec.path = 'view'
+        traversal_spec.skip = False
+        traversal_spec.type = view_ref.__class__
+        obj_spec.selectSet = [traversal_spec]
+
+        # Identify the properties to be retrieved
+        property_spec = vmodl.query.PropertyCollector.PropertySpec()
+        property_spec.type = obj_type
+
+        if not path_set:
+            property_spec.all = True
+
+        property_spec.pathSet = path_set
+
+        # Add the object and property specification to the
+        # property filter specification
+        filter_spec = vmodl.query.PropertyCollector.FilterSpec()
+        filter_spec.objectSet = [obj_spec]
+        filter_spec.propSet = [property_spec]
+
+        # Retrieve properties
+        props = collector.RetrieveContents([filter_spec])
+
+        data = []
+        for obj in props:
+            properties = {}
+            for prop in obj.propSet:
+                properties[prop.name] = prop.val
+
+            if include_mors:
+                properties['obj'] = obj.obj
+
+            data.append(properties)
+        return data
+
+    def list_nodes(self, enhance=True, max_properties=20):
+        """
+        List nodes, excluding templates
+        """
+
+        vm_properties = [
+            'config.template',
+            'summary.config.name', 'summary.config.vmPathName',
+            'summary.config.memorySizeMB', 'summary.config.numCpu',
+            'summary.storage.committed', 'summary.config.guestFullName',
+            'summary.runtime.host', 'summary.config.instanceUuid',
+            'summary.config.annotation', 'summary.runtime.powerState',
+            'summary.runtime.bootTime', 'summary.guest.ipAddress',
+            'summary.overallStatus', 'customValue', 'snapshot'
+        ]
+        content = self.connection.RetrieveContent()
+        view = content.viewManager.CreateContainerView(
+            content.rootFolder, [vim.VirtualMachine], True)
+        i = 0
+        vm_dict = {}
+        while i < len(vm_properties):
+            vm_list = self._collect_properties(content, view,
+                                               vim.VirtualMachine,
+                                               path_set=vm_properties[
+                                                   i:i + max_properties],
+                                               include_mors=True)
+            i += max_properties
+            for vm in vm_list:
+                if not vm_dict.get(vm['obj']):
+                    vm_dict[vm['obj']] = vm
+                else:
+                    vm_dict[vm['obj']].update(vm)
+
+        vm_list = [vm_dict[k] for k in vm_dict]
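+        # Convert the collected VMs to Node objects concurrently; a fresh
+        # per-call event loop avoids clashing with any loop that may
+        # already be running on the calling thread.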
+        loop = asyncio.new_event_loop()
+        asyncio.set_event_loop(loop)
+        nodes = loop.run_until_complete(self._to_nodes(vm_list))
+        if enhance:
+            nodes = self._enhance_metadata(nodes, content)
+
+        return nodes
+
+    def list_nodes_recursive(self, enhance=True):
+        """
+        Lists nodes, excluding templates
+        """
+        nodes = []
+        content = self.connection.RetrieveContent()
+        children = content.rootFolder.childEntity
+        # this will be needed for custom VM metadata
+        if content.customFieldsManager:
+            self.custom_fields = content.customFieldsManager.field
+        else:
+            self.custom_fields = []
+        for child in children:
+            if hasattr(child, 'vmFolder'):
+                datacenter = child
+                vm_folder = datacenter.vmFolder
+                vm_list = vm_folder.childEntity
+                nodes.extend(self._to_nodes_recursive(vm_list))
+
+        if enhance:
+            nodes = self._enhance_metadata(nodes, content)
+
+        return nodes
+
+    def _enhance_metadata(self, nodes, content):
+        nodemap = {}
+        for node in nodes:
+            node.extra['vSphere version'] = content.about.version
+            nodemap[node.id] = node
+
+        # Get VM deployment events to extract creation dates & images
+        filter_spec = vim.event.EventFilterSpec(
+            eventTypeId=['VmBeingDeployedEvent']
+        )
+        deploy_events = content.eventManager.QueryEvent(filter_spec)
+        for event in deploy_events:
+            try:
+                uuid = event.vm.vm.config.instanceUuid
+            except Exception:
+                continue
+            if uuid in nodemap:
+                node = nodemap[uuid]
+                try:  # Get source template as image
+                    source_template_vm = event.srcTemplate.vm
+                    image_id = source_template_vm.config.instanceUuid
+                    node.extra['image_id'] = image_id
+                except Exception:
+                    logger.error('Cannot get instanceUuid '
+                                 'from source template')
+                try:  # Get creation date
+                    node.created_at = event.createdTime
+                except AttributeError:
+                    logger.error('Cannot get creation date from VM '
+                                 'deploy event')
+
+        return nodes
+
+    async def _to_nodes(self, vm_list):
+        vms = []
+        for vm in vm_list:
+            if vm.get('config.template'):
+                continue  # Do not include templates in node list
+            vms.append(vm)
+        loop = asyncio.get_event_loop()
+        vms = [
+            loop.run_in_executor(None, self._to_node, vms[i])
+            for i in range(len(vms))
+        ]
+        return await asyncio.gather(*vms)
+
+    def _to_nodes_recursive(self, vm_list):
+        nodes = []
+        for virtual_machine in vm_list:
+            if hasattr(virtual_machine, 'childEntity'):
+                # If this is a group it will have children.
+                # If it does, recurse into them and then return
+                nodes.extend(self._to_nodes_recursive(
+                    virtual_machine.childEntity))
+            elif isinstance(virtual_machine, vim.VirtualApp):
+                # If this is a vApp, it likely contains child VMs
+                # (vApps can nest vApps, but it is hardly
+                # a common usecase, so ignore that)
+                nodes.extend(self._to_nodes_recursive(virtual_machine.vm))
+            else:
+                if not hasattr(virtual_machine, 'config') or \
+                    (virtual_machine.config and
+                     virtual_machine.config.template):
+                    continue  # Do not include templates in node list
+                nodes.append(self._to_node_recursive(virtual_machine))
+        return nodes
+
+    def _to_node(self, vm):
+        name = vm.get('summary.config.name')
+        path = vm.get('summary.config.vmPathName')
+        memory = vm.get('summary.config.memorySizeMB')
+        cpus = vm.get('summary.config.numCpu')
+        disk = vm.get('summary.storage.committed', 0) // (1024 ** 3)
+        id_to_hash = str(memory) + str(cpus) + str(disk)
+        size_id = hashlib.md5(id_to_hash.encode("utf-8")).hexdigest()
+        size_name = name + "-size"
+        size_extra = {'cpus': cpus}
+        driver = self
+        size = NodeSize(id=size_id, name=size_name, ram=memory, disk=disk,
+                        bandwidth=0, price=0, driver=driver, extra=size_extra)
+        operating_system = vm.get('summary.config.guestFullName')
+        host = vm.get('summary.runtime.host')
+
+        os_type = 'unix'
+        if 'Microsoft' in str(operating_system):
+            os_type = 'windows'
+        uuid = vm.get('summary.config.instanceUuid') or \
+            (vm.get('obj').config and vm.get('obj').config.instanceUuid)
+        if not uuid:
+            logger.error('No uuid for vm: {}'.format(vm))
+        annotation = vm.get('summary.config.annotation')
+        state = vm.get('summary.runtime.powerState')
+        status = self.NODE_STATE_MAP.get(state, NodeState.UNKNOWN)
+        boot_time = vm.get('summary.runtime.bootTime')
+
+        ip_addresses = []
+        if vm.get('summary.guest.ipAddress'):
+            ip_addresses.append(vm.get('summary.guest.ipAddress'))
+
+        overall_status = str(vm.get('summary.overallStatus'))
+        public_ips = []
+        private_ips = []
+
+        extra = {
+            'path': path,
+            'operating_system': operating_system,
+            'os_type': os_type,
+            'memory_MB': memory,
+            'cpus': cpus,
+            'overall_status': overall_status,
+            'metadata': {},
+            'snapshots': []
+        }
+
+        if disk:
+            extra['disk'] = disk
+
+        if host:
+            extra['host'] = host.name
+            parent = host.parent
+            while parent:
+                if isinstance(parent, vim.ClusterComputeResource):
+                    extra['cluster'] = parent.name
+                    break
+                parent = parent.parent
+
+        if boot_time:
+            extra['boot_time'] = boot_time.isoformat()
+        if annotation:
+            extra['annotation'] = annotation
+
+        for ip_address in ip_addresses:
+            try:
+                if is_public_subnet(ip_address):
+                    public_ips.append(ip_address)
+                else:
+                    private_ips.append(ip_address)
+            except Exception:
+                # IPV6 not supported
+                pass
+        if vm.get('snapshot'):
+            extra['snapshots'] = format_snapshots(
+                recurse_snapshots(vm.get('snapshot').rootSnapshotList))
+
+        for custom_field in vm.get('customValue', []):
+            key_id = custom_field.key
+            key = self.find_custom_field_key(key_id)
+            extra['metadata'][key] = custom_field.value
+
+        node = Node(id=uuid, name=name, state=status, size=size,
+                    public_ips=public_ips, private_ips=private_ips,
+                    driver=self, extra=extra)
+        node._uuid = uuid
+        return node
+
+    def _to_node_recursive(self, virtual_machine):
+        summary = virtual_machine.summary
+        name = summary.config.name
+        path = summary.config.vmPathName
+        memory = summary.config.memorySizeMB
+        cpus = summary.config.numCpu
+        disk = ''
+        if summary.storage.committed:
+            disk = summary.storage.committed / (1024 ** 3)
+        id_to_hash = str(memory) + str(cpus) + str(disk)
+        size_id = hashlib.md5(id_to_hash.encode("utf-8")).hexdigest()
+        size_name = name + "-size"
+        size_extra = {'cpus': cpus}
+        driver = self
+        size = NodeSize(id=size_id, name=size_name, ram=memory, disk=disk,
+                        bandwidth=0, price=0, driver=driver, extra=size_extra)
+        operating_system = summary.config.guestFullName
+        host = summary.runtime.host
+
+        # mist.io needs this metadata
+        os_type = 'unix'
+        if 'Microsoft' in str(operating_system):
+            os_type = 'windows'
+        uuid = summary.config.instanceUuid
+        annotation = summary.config.annotation
+        state = summary.runtime.powerState
+        status = self.NODE_STATE_MAP.get(state, NodeState.UNKNOWN)
+        boot_time = summary.runtime.bootTime
+        ip_addresses = []
+        if summary.guest is not None:
+            ip_addresses.append(summary.guest.ipAddress)
+
+        overall_status = str(summary.overallStatus)
+        public_ips = []
+        private_ips = []
+
+        extra = {
+            "path": path,
+            "operating_system": operating_system,
+            "os_type": os_type,
+            "memory_MB": memory,
+            "cpus": cpus,
+            "overallStatus": overall_status,
+            "metadata": {},
+            "snapshots": []
+        }
+
+        if disk:
+            extra['disk'] = disk
+
+        if host:
+            extra['host'] = host.name
+            parent = host.parent
+            while parent:
+                if isinstance(parent, vim.ClusterComputeResource):
+                    extra['cluster'] = parent.name
+                    break
+                parent = parent.parent
+        if boot_time:
+            extra['boot_time'] = boot_time.isoformat()
+        if annotation:
+            extra['annotation'] = annotation
+
+        for ip_address in ip_addresses:
+            try:
+                if is_public_subnet(ip_address):
+                    public_ips.append(ip_address)
+                else:
+                    private_ips.append(ip_address)
+            except Exception:
+                # IPV6 not supported
+                pass
+        if virtual_machine.snapshot:
+            snapshots = [{
+                'id': s.id,
+                'name': s.name,
+                'description': s.description,
+                'created': s.createTime.strftime('%Y-%m-%d %H:%M'),
+                'state': s.state}
+                for s in virtual_machine.snapshot.rootSnapshotList]
+            extra['snapshots'] = snapshots
+
+        for custom_field in virtual_machine.customValue:
+            key_id = custom_field.key
+            key = self.find_custom_field_key(key_id)
+            extra["metadata"][key] = custom_field.value
+
+        node = Node(id=uuid, name=name, state=status, size=size,
+                    public_ips=public_ips, private_ips=private_ips,
+                    driver=self, extra=extra)
+        node._uuid = uuid
+        return node
+
+    def reboot_node(self, node):
+        """
+        """
+        vm = self.find_by_uuid(node.id)
+        return self.wait_for_task(vm.RebootGuest())
+
+    def destroy_node(self, node):
+        """
+        """
+        vm = self.find_by_uuid(node.id)
+        if node.state != NodeState.STOPPED:
+            self.stop_node(node)
+        return self.wait_for_task(vm.Destroy())
+
+    def stop_node(self, node):
+        """
+        """
+        vm = self.find_by_uuid(node.id)
+        return self.wait_for_task(vm.PowerOff())
+
+    def start_node(self, node):
+        """
+        """
+        vm = self.find_by_uuid(node.id)
+        return self.wait_for_task(vm.PowerOn())
+
+    def ex_list_snapshots(self, node):
+        """
+        List node snapshots
+        """
+        vm = self.find_by_uuid(node.id)
+        if not vm.snapshot:
+            return []
+        return format_snapshots(
+            recurse_snapshots(vm.snapshot.rootSnapshotList))
+
+    def ex_create_snapshot(self, node, snapshot_name, description='',
+                           dump_memory=False, quiesce=False):
+        """
+        Create node snapshot
+        """
+        vm = self.find_by_uuid(node.id)
+        return WaitForTask(
+            vm.CreateSnapshot(snapshot_name, description, dump_memory, quiesce)
+        )
+
+    def ex_remove_snapshot(self, node, snapshot_name=None,
+                           remove_children=True):
+        """
+        Remove a snapshot from node.
+        If snapshot_name is not defined remove the last one.
+        """
+        vm = self.find_by_uuid(node.id)
+        if not vm.snapshot:
+            raise LibcloudError(
+                "Remove snapshot failed. No snapshots for node %s" % node.name)
+        snapshots = recurse_snapshots(vm.snapshot.rootSnapshotList)
+        if not snapshot_name:
+            snapshot = snapshots[-1].snapshot
+        else:
+            for s in snapshots:
+                if snapshot_name == s.name:
+                    snapshot = s.snapshot
+                    break
+            else:
+                raise LibcloudError("Snapshot `%s` not found" % snapshot_name)
+        return self.wait_for_task(snapshot.RemoveSnapshot_Task(
+            removeChildren=remove_children))
+
+    def ex_revert_to_snapshot(self, node, snapshot_name=None):
+        """
+        Revert node to a specific snapshot.
+        If snapshot_name is not defined revert to the last one.
+        """
+        vm = self.find_by_uuid(node.id)
+        if not vm.snapshot:
+            raise LibcloudError("Revert failed. No snapshots "
+                                "for node %s" % node.name)
+        snapshots = recurse_snapshots(vm.snapshot.rootSnapshotList)
+        if not snapshot_name:
+            snapshot = snapshots[-1].snapshot
+        else:
+            for s in snapshots:
+                if snapshot_name == s.name:
+                    snapshot = s.snapshot
+                    break
+            else:
+                raise LibcloudError("Snapshot `%s` not found" % snapshot_name)
+        return self.wait_for_task(snapshot.RevertToSnapshot_Task())
+
+    def _find_template_by_uuid(self, template_uuid):
+        # on version 5.5 and earlier search index won't return a VM
+        template = None
+        try:
+            template = self.find_by_uuid(template_uuid)
+        except LibcloudError:
+            content = self.connection.RetrieveContent()
+            vms = content.viewManager.CreateContainerView(
+                content.rootFolder,
+                [vim.VirtualMachine],
+                recursive=True
+            ).view
+
+            for vm in vms:
+                if vm.config.instanceUuid == template_uuid:
+                    template = vm
+        except Exception as exc:
+            raise LibcloudError("Error while searching for template, ", exc)
+        if not template:
+            raise LibcloudError("Unable to locate VirtualMachine.")
+
+        return template
+
+    def find_by_uuid(self, node_uuid):
+        """Searches VMs for a given uuid
+        returns pyVmomi.VmomiSupport.vim.VirtualMachine
+        """
+        vm = self.connection.content.searchIndex.FindByUuid(None, node_uuid,
+                                                            True, True)
+        if not vm:
+            # perhaps it is a moid
+            vm = self._get_item_by_moid('VirtualMachine', node_uuid)
+            if not vm:
+                raise LibcloudError("Unable to locate VirtualMachine.")
+        return vm
+
+    def find_custom_field_key(self, key_id):
+        """Return custom field key name, provided it's id
+        """
+        if not hasattr(self, "custom_fields"):
+            content = self.connection.RetrieveContent()
+            if content.customFieldsManager:
+                self.custom_fields = content.customFieldsManager.field
+            else:
+                self.custom_fields = []
+        for k in self.custom_fields:
+            if k.key == key_id:
+                return k.name
+        return None
+
+    def get_obj(self, vimtype, name):
+        """
+        Return an object by name, if name is None the
+        first found object is returned
+        """
+        obj = None
+        content = self.connection.RetrieveContent()
+        container = content.viewManager.CreateContainerView(
+            content.rootFolder, vimtype, True)
+        for c in container.view:
+            if name:
+                if c.name == name:
+                    obj = c
+                    break
+            else:
+                obj = c
+                break
+        return obj
+
+    def wait_for_task(self, task, timeout=1800, interval=10):
+        """ wait for a vCenter task to finish """
+        start_time = time.time()
+        task_done = False
+        while not task_done:
+            if (time.time() - start_time >= timeout):
+                raise LibcloudError('Timeout while waiting '
+                                    'for import task Id %s'
+                                    % task.info.id)
+            if task.info.state == 'success':
+                if task.info.result and str(task.info.result) != 'success':
+                    return task.info.result
+                return True
+
+            if task.info.state == 'error':
+                raise LibcloudError(task.info.error.msg)
+            time.sleep(interval)
+
+    def create_node(self, **kwargs):
+        """
+        Creates and returns node.
+
+        :keyword    ex_network: Name of a "Network" to connect the VM to
+        :type       ex_network: ``str``
+
+        """
+        name = kwargs['name']

Review comment:
       Please directly specify all the required and optional arguments in the method signature instead of using ``**kwargs``.
   
   This makes the method much more user-friendly for the end user.
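   
   For example, an illustrative signature (the parameters shown are placeholders; the real list should mirror what ``create_node`` actually consumes from ``**kwargs``):
   
   ```python
   # Illustrative only - not the final API.
   def create_node(self, name, image, size=None, location=None,
                   ex_network=None):
       ...
   ```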




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [libcloud] asfgit merged pull request #1481: V sphere drv

Posted by GitBox <gi...@apache.org>.
asfgit merged pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481


   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [libcloud] Kami commented on a change in pull request #1481: V sphere drv

Posted by GitBox <gi...@apache.org>.
Kami commented on a change in pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481#discussion_r479781870



##########
File path: libcloud/compute/drivers/vsphere.py
##########
@@ -0,0 +1,1992 @@
+    def list_sizes(self):
+        """
+        Returns sizes
+        """
+        return []
+
+    def list_images(self, location=None, folder_ids=[]):

Review comment:
       Please don't default to a mutable value; use ``folder_ids=None`` and then, inside the method, ``folder_ids = folder_ids or []``.
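   
   A sketch of the suggested fix:
   
   ```python
   def list_images(self, location=None, folder_ids=None):
       # Normalize inside the method to avoid a shared mutable default.
       folder_ids = folder_ids or []
       ...
   ```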




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [libcloud] Kami commented on pull request #1481: V sphere drv

Posted by GitBox <gi...@apache.org>.
Kami commented on pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481#issuecomment-703291744


   @Eis-D-Z Thanks and sorry for the delay - I will have a look today.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [libcloud] Eis-D-Z commented on pull request #1481: V sphere drv

Posted by GitBox <gi...@apache.org>.
Eis-D-Z commented on pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481#issuecomment-684633410


   All the issues you point out have been resolved as indicated.
   Thank you!


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [libcloud] Kami commented on pull request #1481: V sphere drv

Posted by GitBox <gi...@apache.org>.
Kami commented on pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481#issuecomment-703298426


   I finally merged this into trunk - thanks for the contribution!
   
   Merging was quite painful since there were a bunch of conflicts (it appears that your branch is not based directly on top of upstream/trunk).
   
   In addition to that, I added the missing providers.py entry (bc7b6fe2c1b01d088232ad52d4bc478c9e88cc3c) and the missing driver argument to the ``LibcloudError`` class (c690a0efb333062e6ec3e79cfc86f4e131648868).


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [libcloud] Kami commented on pull request #1481: V sphere drv

Posted by GitBox <gi...@apache.org>.
Kami commented on pull request #1481:
URL: https://github.com/apache/libcloud/pull/1481#issuecomment-683432887


   @Eis-D-Z
   
   > I personally expect that ideally, at some point, even the request logic will become async for libcloud. Until then, yes, I agree that list_nodes() should have an async_list_nodes, and that the event loop is better set at the driver level, since the driver might not be on the main thread, or someone might want two drivers on separate threads, etc.
   
   Would you mind making that change, i.e. adding an ``async_list_nodes`` method instead of using an ``async_`` kwarg?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


