
Re: [Xen-devel] Introduction to VirtIO on Xen project


  • To: Wei Liu <liuw@xxxxxxxxx>
  • From: Takeshi HASEGAWA <hasegaw@xxxxxxxxx>
  • Date: Wed, 27 Apr 2011 23:05:28 +0900
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 27 Apr 2011 07:06:30 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

That's why I am trying to run Fedora 14 on upstream-qemu + xen-unstable.

In an HVM domain, since SPICE worked once some libxl patches were
applied, I guess virtio-pci should work as long as the xl command
launches qemu with the appropriate command-line arguments; virtio-pci
is just a virtual PCI device.
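
For example, something along these lines might be enough (an
illustrative invocation only; the exact device and backend options
depend on the qemu build, and the domid and tap name here are made
up):

  qemu -M xenfv -xen-domid 1 \
       -device virtio-net-pci,netdev=net0 \
       -netdev tap,id=net0,ifname=tap0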

Takeshi

2011/4/27 Wei Liu <liuw@xxxxxxxxx>:
> Hi, all.
>
> I'm Wei Liu, a graduate student from Wuhan University, Hubei, China.
> I've been accepted to GSoC 2011 for Xen and will be responsible for
> the VirtIO on Xen project. It's my honor to be accepted into this
> wonderful community. I've been doing Xen development for my lab
> since late 2009.
>
> As you all know, VirtIO is a generic paravirtualized I/O framework,
> currently used mainly by KVM, but it should not be too hard to port
> it to Xen. Once that is done, Xen will have access to the Linux
> kernel's VirtIO interfaces, and developers will have an alternative
> way to deliver PV drivers besides the original ring-buffer flavor.
> The project requires three pieces of work:
>
> 1. Modify upstream QEMU, replacing the KVM-specific interfaces with
>    generic QEMU functions;
> 2. Modify Xen / the Xen tools to support VirtIO;
> 3. Modify the Linux kernel's VirtIO interfaces.
>
> We must take two usage scenarios into consideration:
>
> 1. PV-on-HVM;
> 2. Normal PV.
>
> These two scenarios require working on different sets of functions:
>
> 1. XenBus vs. virtual PCI: how the channel is created;
> 2. PV vs. HVM: how events are delivered and handled.
>
> Most of the VirtIO code will be left as-is, but the notification
> mechanism should be replaced with Xen's event channels. The same
> applies to the QEMU port.
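>
> To make the notification part concrete, the userspace side of a kick
> could look roughly like this (a sketch against libxc's evtchn
> interface; error handling is elided, and kick_and_wait is just an
> illustrative name):
>
>   #include <xenctrl.h>
>
>   /* Kick the guest once and wait for it to kick us back.  The
>    * remote port would come from setup not shown here. */
>   static void kick_and_wait(uint32_t domid, evtchn_port_t remote_port)
>   {
>       xc_evtchn *xce = xc_evtchn_open(NULL, 0);
>       evtchn_port_t local = xc_evtchn_bind_interdomain(xce, domid,
>                                                        remote_port);
>
>       /* Notify the guest: this replaces the KVM-specific kick. */
>       xc_evtchn_notify(xce, local);
>
>       /* Block until the guest notifies us, then unmask the port. */
>       evtchn_port_t pending = xc_evtchn_pending(xce);
>       xc_evtchn_unmask(xce, pending);
>
>       xc_evtchn_close(xce);
>   }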
>
> In the PV-on-HVM case, QEMU needs to use event channels to send and
> receive notifications, and the foreign-mapping / grant-table
> functions in libxc / libxl to map memory pages. A virtual PCI bus
> will be used to establish the channel between Dom0 and DomU. In that
> sense, it makes no difference on the Linux kernel side.
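>
> The mapping side could then look something like this (a rough
> sketch; map_vring is an illustrative name, and whether to map by
> frame number as below or by grant reference via the xc_gnttab_*
> calls is an open design question):
>
>   #include <sys/mman.h>
>   #include <xenctrl.h>
>
>   /* Map one page of the guest's vring into qemu's address space,
>    * given the guest frame number found during device setup. */
>   static void *map_vring(xc_interface *xch, uint32_t domid,
>                          unsigned long pfn)
>   {
>       return xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
>                                   PROT_READ | PROT_WRITE, pfn);
>   }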
>
> In the normal PV case, QEMU likewise needs to use event channels to
> send and receive notifications, and the foreign-mapping functions in
> libxc / libxl to map memory pages. XenBus / Xenstore will be used to
> establish the channel between Dom0 and DomU, and the Linux VirtIO
> driver should use Xen's event channel as its kick / notify mechanism.
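>
> On the kernel side, the kick could be wired up roughly as follows (a
> sketch only; stashing the event channel port in vq->priv is an
> assumption of this example, not settled design):
>
>   #include <linux/virtio.h>
>   #include <xen/events.h>
>
>   /* Replacement for virtio-pci's I/O-port based notify: kick the
>    * backend through the event channel bound to this virtqueue. */
>   static void xen_virtio_notify(struct virtqueue *vq)
>   {
>           int port = (int)(unsigned long)vq->priv;
>
>           notify_remote_via_evtchn(port);
>   }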
>
> When the port is finished, I will carry out performance tests with
> standardized tools such as ioperf, netperf and kernbench. The test
> suites will be run on five different configurations:
>
> 1. Native Linux
> 2. Xen with PV-on-HVM VirtIO support
> 3. Xen with normal PV VirtIO support
> 4. Xen with original PV driver support
> 5. KVM with VirtIO support
>
> A short report will be written based on the results.
>
> This is a brief introduction to the project. Any comments are welcome.
>
>
> --
> Best regards
> Wei Liu
> Twitter: @iliuw
> Site: http://liuw.name
>



-- 
Takeshi HASEGAWA <hasegaw@xxxxxxxxx>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

