
[Xen-devel] mini-guest io emulation


  • To: "xen-devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
  • Date: Sun, 12 Mar 2006 21:26:26 -0000
  • Delivery-date: Sun, 12 Mar 2006 21:27:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcZGDAxq9Jt2k5rVSlWxxMIWrrSajA==
  • Thread-topic: mini-guest io emulation

Folks,

At the last summit I presented a proposal for rearchitecting the way we
do io emulation for fully-virtualized (hvm) guests. I'd really like to
try and get the work to implement this underway, as it cleans up a bunch
of mess, is a prerequisite for save/restore/relocation of hvm guests,
and is a precursor to some significant performance improvements. It
involves a fair chunk of work, so we really want to try and get multiple
folk working on it.

The plan is to move the io emulation code (qemu-dm) from running as a
user-space app in domain 0 into a 'mini guest' that is effectively a
small paravirtualized guest in the root hardware context associated with
each hvm domain. 

I guess a very high-level work plan would look something like this:

* get minios running well on x86_64; add a few simple infrastructure
functions e.g. simple memory allocator. No need for any 'user space' mmu
support
* port (simplified) xenbus/netfront/blkfront to minios; test simple
net/disk IO
* implement enough infrastructure to allow qemu-dm to be compiled into
minios, calling into net/blkfront for IO.
* plumb the vmexit entry points from MMIO and in/out into minios and
hence qemu-dm

Once the above is working we'll be in good shape. We can remove all the
skanky qemu-dm support from the tools, since from their point of view
paravirt and hvm guests will look identical. It should also be easy to
implement save/restore of hvm guests -- just save the mini guest as part
of the hvm guest's memory image. The next stage would then be to improve
performance by enhancing the device models, e.g. adding a network card
that supports jumbo frames and checksum offload, and requires fewer
vmexits in operation.

How best to move forward on this? Any volunteers?

Thanks,
Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

