[Xen-devel] Re: [PATCH][HVM] pass-through PCI device hotplug support

On Fri, Feb 15, 2008 at 02:36:49PM +0000, Keir Fraser wrote:
> On 15/2/08 13:32, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx> wrote:
> > This patch is the new version against 17051 to enable HVM guest VT-d device
> > hotplug.
> > 
> > ** Currently only 2 virtual pci slots(6~7) are made as being capable of
> > hotplug, 
> > so more than 2 vtd dev can't be hotplugged, but we can easily extend it in
> > future.
> Now applied, but perhaps too hastily. I found it broke the
> !CONFIG_PASSTHROUGH build and in fixing that I noticed that you dumped code

I assumed CONFIG_PASSTHROUGH was always on by default. :(
So it seems more #ifdefs are needed.

> in a bunch of random places in qemu. Perhaps all passthrough stuff should be
> gathered in one place? Alternatively at least the device model changes (in
> piix4acpi.c) should be decoupled a bit from the backend logic in

Agreed. passthrough.c manages all the pass-through device info, while
piix4acpi.c manages the GPE and hotplug controller and calls into passthrough.c
under certain conditions (say, when an I/O write indicating hot removal arrives).

So at least one pair of passthrough functions has to live in piix4acpi.c and be
ifdef'ed, but we can move all the xenstore logic there as you said.

> passthrough.c, so the former can cleanly build without the latter. I also
> killed pt_uninit() because I couldn't even find where pci_cleanup() was

pci_cleanup() is in libpci and should, in theory, be called at cleanup time.
But things go fine without it.

> defined. No passthrough function should be in vl.c: I #ifdef'ed in the for

Our plan is to make this hotplug support generic beyond pass-through devices,
even covering virtual PCI devices on native QEMU. So leaving do_pci_add/del in
vl.c should be okay, like do_usb_xxx.

> now but the functions should probably be moved. And you did a big dump of
> random crap into vl.h.
>  -- Keir

best rgds,

Xen-devel mailing list