[Xen-devel] Paravirtualised drivers for fully virtualised domains
(The list appears to have eaten my previous attempt to send this. Apologies if you receive multiple copies.)

The attached patches allow you to use paravirtualised network and block interfaces from fully virtualised domains, based on Intel's patches from a few months ago. These are significantly faster than the equivalent ioemu devices, sometimes by more than an order of magnitude.

These drivers are explicitly not considered by XenSource to be an alternative to improving the performance of the ioemu devices; rather, work on both will continue in parallel.

To build, apply the three patches to a clean checkout of xen-unstable and then build Xen, dom0, and the tools in the usual way (a sketch of the patch application is given below). To build the drivers themselves, you first need to build a native kernel for the guest, and then go

    cd xen-unstable.hg/unmodified-drivers/linux-2.6
    ./mkbuildtree
    make -C /usr/src/linux-2.6.16 M=$PWD modules

where /usr/src/linux-2.6.16 is the path to the area where you built the guest kernel. This should be a native kernel, not a xenolinux one.

You should end up with four modules. xen-evtchn.ko should be loaded first, followed by xenbus.ko, and then whichever of xen-vnif.ko and xen-vbd.ko you need (a minimal loading sequence is sketched below). None of the modules need any arguments.

The xm configuration syntax is exactly the same as it would be for paravirtualised devices in a paravirtualised domain. For a network interface, you take your line

    vif = [ 'type=ioemu,mac=00:16:3E:C1:CA:78' ]

(or whatever) and replace it with

    vif = [ 'type=ioemu,mac=00:16:3E:C1:CA:78', 'bridge=xenbr0' ]

where bridge=xenbr0 should be some suitable netif configuration string, as it would be in the PV-on-PV case.

Disk is likewise fairly simple:

    disk = [ 'file:/path/to/image,ioemu:hda,w' ]

becomes

    disk = [ 'file:/path/to/image,ioemu:hda,w', 'file:/path/to/some/other/image,hde,w' ]

There is one complication: the paravirtualised block device can't share an IDE controller with an ioemu device, so if you have an ioemu hda, the paravirtualised device must be hde or later. This avoids confusing the Linux IDE driver. Note that having a PV device doesn't imply having a corresponding ioemu device, and vice versa. Configuring a single backing store to appear as both an IDE device and a paravirtualised block device is likely to cause problems; don't do it. (A combined configuration fragment is sketched below.)

The patches consist of several big parts:

-- A version of netback and netfront which can copy packets into domains rather than doing page flipping. It's much easier to make this work well with qemu, since the P2M table doesn't need to change, and it can be faster for some workloads. The copying interface has been confirmed to work in paravirtualised domains, but is currently disabled there.

-- Reworking the device model and hypervisor support so that iorequest completion notifications no longer go to the HVM guest's event channel mask. This avoids a whole slew of really quite nasty race conditions.

-- Adding a new device to the qemu PCI bus which is used for bootstrapping the devices and getting an IRQ.

-- Support for hypercalls from HVM domains.

-- Various shims and fixes to the frontends so that they work without the rest of the xenolinux infrastructure.

The patches still have a few rough edges, and they're not as easy to understand as I'd like, but I think they should be mostly comprehensible and reasonably stable. The plan is to add them to xen-unstable over the next few weeks, probably before 3.0.3, so any testing which anyone can do would be helpful.
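For concreteness, applying the patches might look something like this. This is a sketch only: the -p1 strip level, the location of the diffs, and the exact build invocation are assumptions, so adjust to taste.

    # Apply the three attached patches to a clean xen-unstable checkout.
    # Assumes the diffs sit one directory above the checkout.
    cd xen-unstable.hg
    patch -p1 < ../copy_netif.diff
    patch -p1 < ../frontend_changes.diff
    patch -p1 < ../hvm_xen_unstable.diff
    # Then build Xen, dom0 and the tools however you usually do.
    make world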
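A minimal loading sequence for the guest might look like the following. The install path is an assumption (insmod'ing the modules straight out of the build tree works just as well):

    # Load in dependency order; none of the modules take arguments.
    insmod /lib/modules/$(uname -r)/updates/xen-evtchn.ko   # event channels: must come first
    insmod /lib/modules/$(uname -r)/updates/xenbus.ko       # xenbus/xenstore transport
    insmod /lib/modules/$(uname -r)/updates/xen-vnif.ko     # PV network frontend, if you want it
    insmod /lib/modules/$(uname -r)/updates/xen-vbd.ko      # PV block frontend, if you want it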
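Putting the vif and disk changes together, the relevant fragment of an HVM guest's xm configuration might look like this (the MAC address, bridge name and image paths are purely illustrative):

    # HVM guest config fragment: ioemu devices plus PV frontends.
    vif  = [ 'type=ioemu,mac=00:16:3E:C1:CA:78', 'bridge=xenbr0' ]
    # The ioemu disk stays on hda; the PV disk must be hde or later
    # so it doesn't share an IDE controller with an ioemu device.
    disk = [ 'file:/path/to/image,ioemu:hda,w',
             'file:/path/to/some/other/image,hde,w' ]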
The Xen and tools changes are also available as a series of smaller patches at http://www.cl.cam.ac.uk/~sos22/pv-on-hvm/hvm_xen . The composition of these gives hvm_xen_unstable.diff.

Steven.

Attachments:
    copy_netif.diff
    frontend_changes.diff
    hvm_xen_unstable.diff
    signature.asc