Posted to dev@cloudstack.apache.org by Sanjay Tripathi <sa...@citrix.com> on 2013/09/30 14:32:27 UTC

RE: PCI-Passthrough with CloudStack

Hi Pawit,

You have done some nice work here: https://reviews.apache.org/r/12098/ .

I am planning to enable GPU/vGPU support in VMware/XenServer and have started a discussion with the community: http://apache.markmail.org/message/peusnk6n6iz3tvaz .
In your patch, you have already written a framework for PCI passthrough, and I can leverage the same design for XenServer and VMware.

Can you rebase your patch to work with master? Then we can collaborate to take it forward.
Let me know if you need any help in rebasing.

--Sanjay

> -----Original Message-----
> From: Pawit Pornkitprasan [mailto:p.pawit@gmail.com]
> Sent: Wednesday, June 12, 2013 7:35 AM
> To: dev@cloudstack.apache.org
> Cc: Ryousei Takano; Edison Su; Kelven Yang
> Subject: Re: PCI-Passthrough with CloudStack
> 
> On 6/11/13 09:35 PM, "Edison Su" <Ed...@citrix.com> wrote:
> 
> > If changing the VM's XML is enough, then how about using libvirt's hook system:
> > http://www.libvirt.org/hooks.html
> > I think the issue is how to let CloudStack create only one VM per KVM
> > host, or a few VMs per host (based on the available PCI devices on the
> > host).
> > If we think PCI devices are a resource CloudStack should take care of
> > during resource allocation, then we need a framework:
> > 1. During host discovery, the host can report whatever resources it can
> > detect to the mgt server. RAM/CPU frequency/local storage are the
> > resources currently supported by the KVM agent; here we may need to add
> > PCI devices as another resource. For example, the KVM agent host returns
> > a StartupAuxiliaryDevicesReportCmd along with the other
> > StartupRoutingCmd/StartupStorage*Cmd etc. during startup.
> > 2. There will be a listener on the mgt server which listens for
> > StartupAuxiliaryDevicesReportCmd and then records the available PCI
> > devices into the DB, e.g. a host_pci_device_ref table.
> > 3. Need to extend FirstFitAllocator to take PCI devices into account as
> > another resource during allocation. We also need to find a place to mark
> > the PCI device as used in the host_pci_device_ref table, so the PCI
> > device won't be allocated to more than one VM.
> > 4. Have an API to create a customized compute offering; the offering can
> > contain info about PCI devices, such as how many PCI devices to plug
> > into a VM.
> > 5. If the user chooses the above customized compute offering during VM
> > deployment, then the allocator in step 3 will be triggered, which will
> > choose a KVM host that has enough PCI devices to fulfill the compute
> > offering.
> > 6. The start command the mgt server sends to the KVM host should contain
> > the PCI devices allocated to this VM.
> > 7. In the KVM agent code, change the VM's XML file appropriately based on
> > that command.
> > What do you think?
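A minimal sketch of the report in step 1, assuming a plain value class sent by
the KVM agent at startup (the class and field names below are illustrative
guesses, not existing CloudStack API):

    import java.util.List;

    // Hypothetical startup report: the KVM agent lists the passthrough-capable
    // PCI devices it detected on the host (in a real patch this would likely
    // extend the agent's StartupCommand hierarchy).
    public class StartupAuxiliaryDevicesReportCmd {
        public static class PciDevice {
            private final String pciAddress;   // e.g. "0000:21:00.0"
            private final String vendorId;     // from /sys/bus/pci/devices/<addr>/vendor
            private final String deviceId;     // from /sys/bus/pci/devices/<addr>/device
            private final String description;  // human-readable name, usable as a tag

            public PciDevice(String pciAddress, String vendorId,
                             String deviceId, String description) {
                this.pciAddress = pciAddress;
                this.vendorId = vendorId;
                this.deviceId = deviceId;
                this.description = description;
            }

            public String getPciAddress()  { return pciAddress; }
            public String getVendorId()    { return vendorId; }
            public String getDeviceId()    { return deviceId; }
            public String getDescription() { return description; }
        }

        private final List<PciDevice> devices;

        public StartupAuxiliaryDevicesReportCmd(List<PciDevice> devices) {
            this.devices = devices;
        }

        public List<PciDevice> getDevices() { return devices; }
    }

The listener in step 2 would then persist each reported PciDevice into the
host_pci_device_ref table keyed by host id.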
> 
> I think this is a very good idea. Maybe we can generalize further (to
> support Paul's case) by tagging each PCI device with a name, and we can
> store the name inside the compute offering instead of the ID. Then the
> management server will look up host_pci_device_ref and find the ID of a
> suitable PCI device. This would allow multiple VMs to be allocated to one
> host if the host has multiple PCI cards providing the same function.
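One way the tag lookup and the "mark as used" step could fit together, assuming
a host_pci_device_ref table with host_id, device_tag and vm_id columns (the
table layout, class and method names are assumptions for illustration, not
actual CloudStack code):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class PciDeviceAllocator {
        /**
         * Picks an unused PCI device with the given tag on the given host and
         * marks it as owned by vmId in the same transaction, so the device can
         * never be handed to two VMs. Returns the device row id, or -1 if none
         * is free.
         */
        public long allocateByTag(Connection conn, long hostId, long vmId,
                                  String deviceTag) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement select = conn.prepareStatement(
                    "SELECT id FROM host_pci_device_ref"
                  + " WHERE host_id = ? AND device_tag = ? AND vm_id IS NULL"
                  + " LIMIT 1 FOR UPDATE")) {
                select.setLong(1, hostId);
                select.setString(2, deviceTag);
                try (ResultSet rs = select.executeQuery()) {
                    if (!rs.next()) {
                        conn.rollback();
                        return -1;
                    }
                    long deviceId = rs.getLong(1);
                    try (PreparedStatement update = conn.prepareStatement(
                            "UPDATE host_pci_device_ref SET vm_id = ? WHERE id = ?")) {
                        update.setLong(1, vmId);
                        update.setLong(2, deviceId);
                        update.executeUpdate();
                    }
                    conn.commit();
                    return deviceId;
                }
            }
        }
    }

Storing only a tag in the compute offering and resolving it to a concrete
device id at deployment time is what lets several VMs land on the same host
when it has several identical cards.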
> 
> > > >
> > > > -----Original Message-----
> > > > From: Kelven Yang [mailto:kelven.yang@citrix.com]
> > > > Sent: 11 June 2013 18:10
> > > > To: dev@cloudstack.apache.org
> > > > Cc: Ryousei Takano
> > > > Subject: Re: PCI-Passthrough with CloudStack
> > > >
> > > > VirtualMachineTO.params is designed to carry generic VM-specific
> > > > configurations; these configuration parameters can either be
> > > > statically linked with the VM or dynamically populated based on
> > > > other factors like this one.
> > > > Are you passing the PCI ID using VirtualMachineTO.params?
> 
> I've created PciTO and pass an array, similar to VolumeTO and NicTO.
> Is there anything wrong with this approach?
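A rough idea of what such a PciTO might carry alongside the NicTO/VolumeTO
arrays in the VM spec (the field names are guesses based on this thread, not
the actual patch):

    // Hypothetical transfer object handed to the KVM agent, which would turn
    // it into a <hostdev> element in the domain XML.
    public class PciTO {
        private final String hostPciAddress;  // address on the chosen host, e.g. "0000:21:00.0"
        private final String deviceTag;       // the name/tag stored in the compute offering

        public PciTO(String hostPciAddress, String deviceTag) {
            this.hostPciAddress = hostPciAddress;
            this.deviceTag = deviceTag;
        }

        public String getHostPciAddress() { return hostPciAddress; }
        public String getDeviceTag()      { return deviceTag; }
    }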
> 
> > > >
> > > > Anything that affects VM placement could have an impact on
> > > > HA/migration; we probably need some graceful error handling in
> > > > these code paths. Hopefully these have been taken care of.
> > > >
> 
> Migration is prevented by libvirt, and CloudStack displays "Failed to migrate
> vm" if the user attempts to migrate such a VM. I have not investigated HA yet.

Re: PCI-Passthrough with CloudStack

Posted by Pawit Pornkitprasan <p....@gmail.com>.
Dear Sanjay,

Thank you for your interest.

I did the work as a part of an internship which has already ended, so
unfortunately, I no longer have time to continue it. Please feel free to
take it and base your work on it. :)

Best regards,
Pawit

