
Re: [Xen-devel] Re: Improving hvm IO performance by using self IO emulator (YA io-emu?)



> > The big problem with disk emulation isn't IO latency, but the fact that
> > the IDE emulation can only have one outstanding request at a time.  The
> > SCSI emulation helps this a lot.
>
> IIRC, a real IDE can only have one outstanding request too (this may have
> changed with AHCI).  This is really IIRC :-(

Can SATA drives queue multiple outstanding requests?  I thought some newer 
revisions could, but I may well be misremembering - in any case we'd want 
something that was well supported.
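
To make the contrast concrete, here's a rough sketch (plain C; the structures 
and names are made up for illustration rather than taken from qemu) of a 
single-slot, IDE-style submission path next to a queued one - the queue is 
what lets the backend overlap and reorder work:

/* Hypothetical sketch: single-slot vs. queued request submission.
 * Structures and names are illustrative, not Xen or qemu code. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define QUEUE_DEPTH 32                  /* e.g. NCQ allows up to 32 tags */

struct blk_request { unsigned long sector; size_t len; };

/* IDE-like model: a single request "slot"; the guest has to wait for
 * completion before it can issue the next command. */
struct ide_model {
    bool busy;
    struct blk_request slot;
};

static bool ide_submit(struct ide_model *m, struct blk_request r)
{
    if (m->busy)
        return false;                   /* guest must wait for completion */
    m->slot = r;
    m->busy = true;
    return true;
}

/* Queued model (SCSI/AHCI-like): several requests in flight at once, so
 * the backend can merge, reorder and overlap them with guest execution. */
struct queued_model {
    struct blk_request ring[QUEUE_DEPTH];
    unsigned head, tail;
};

static bool queued_submit(struct queued_model *m, struct blk_request r)
{
    if (m->tail - m->head == QUEUE_DEPTH)
        return false;                   /* only fails once the ring is full */
    m->ring[m->tail++ % QUEUE_DEPTH] = r;
    return true;
}

int main(void)
{
    struct ide_model ide = { 0 };
    struct queued_model q = { 0 };
    struct blk_request r = { .sector = 0, .len = 4096 };
    bool a, b;

    /* The second submission fails on the IDE-style model but succeeds on
     * the queued one - that's where the extra parallelism comes from. */
    a = ide_submit(&ide, r);
    b = ide_submit(&ide, r);
    printf("ide:    1st=%d 2nd=%d\n", a, b);

    a = queued_submit(&q, r);
    b = queued_submit(&q, r);
    printf("queued: 1st=%d 2nd=%d\n", a, b);
    return 0;
}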

> > I don't know what the bottle neck is in network emulation, but I suspect
> > the number of copies we have in the path has a great deal to do with it.
>
> This reason seems obvious.

Latency may matter more to network performance than it did for the block 
device, actually (especially given our current setup is fairly pessimal wrt 
latency!).  It would be interesting to see how much difference this makes.

In any case, copies are bad too :-)  Presumably, hooking directly into the 
paravirt network channel would improve this situation too.

Perhaps the network device ought to be the first to move?
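
To put the copy problem in concrete terms, here's a purely illustrative 
sketch (not the real netfront/netback ABI) of a copy-based emulated transmit 
path next to a paravirt-style ring that only carries descriptors:

/* Hypothetical sketch of copy-based emulation vs. a shared descriptor
 * ring; not the actual Xen netfront/netback interface. */
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096
#define RING_SIZE 256

/* Emulated path: every transmit copies the frame out of guest memory
 * into the device model's buffer (and typically at least once more on
 * its way into the host network stack). */
static void emulated_tx(uint8_t *dm_buf, const uint8_t *guest_frame, size_t len)
{
    memcpy(dm_buf, guest_frame, len);   /* copy #1: guest -> device model */
}

/* Paravirt-style path: the ring carries only descriptors; the backend
 * maps (or grant-copies) the referenced page directly, so the device
 * model never touches the payload on the way through. */
struct tx_desc {
    uint32_t gref;      /* grant reference of the page holding the frame */
    uint16_t offset;    /* offset of the frame within that page */
    uint16_t len;       /* frame length in bytes */
};

struct tx_ring {
    struct tx_desc desc[RING_SIZE];
    unsigned prod, cons;
};

static int pv_tx(struct tx_ring *r, uint32_t gref, uint16_t off, uint16_t len)
{
    if (r->prod - r->cons == RING_SIZE)
        return -1;                      /* ring full */
    r->desc[r->prod++ % RING_SIZE] = (struct tx_desc){ gref, off, len };
    return 0;                           /* no payload bytes copied here */
}

int main(void)
{
    static uint8_t guest_frame[PAGE_SIZE], dm_buf[PAGE_SIZE];
    struct tx_ring ring = { 0 };

    emulated_tx(dm_buf, guest_frame, 1500);     /* data copied */
    pv_tx(&ring, /* gref */ 42, 0, 1500);       /* only a descriptor queued */
    return 0;
}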

> > There's a lot to like about this sort of approach.  It's not a silver
> > bullet wrt performance but I think the model is elegant in many ways.
> > An interesting place to start would be lapic/pit emulation.  Removing
> > this code from the hypervisor would be pretty useful and there is no
> > need to address PV-on-HVM issues.
>
> Indeed, this is the simplest code to move.  But why would it be useful?

It might be a good proof of concept, and it simplifies the hypervisor (and the 
migration / suspend process) at the same time.
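
For what it's worth, the PIT half of that is small enough to sketch.  Channel 
0 is essentially a free-running down-counter clocked at ~1.193182 MHz, so a 
user-space model mostly needs the programmed reload value and the time it was 
written.  Simplified and hypothetical - the real emulation also has to cope 
with access modes, latching, BCD and so on:

/* Simplified, hypothetical sketch of reading back PIT channel 0 from a
 * user-space device model.  Ignores access modes, latching, BCD, etc. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define PIT_HZ 1193182ULL       /* i8254 input clock, ~1.193182 MHz */

struct pit_channel {
    uint64_t load_time_ns;      /* host time when the count was written */
    uint32_t reload;            /* programmed initial count (0 means 65536) */
};

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Current counter = reload - (elapsed ticks mod reload).  (A real model
 * would avoid the potential multiply overflow for very long uptimes.) */
static uint16_t pit_read_count(const struct pit_channel *c)
{
    uint64_t reload = c->reload ? c->reload : 65536;
    uint64_t ticks  = (now_ns() - c->load_time_ns) * PIT_HZ / 1000000000ULL;

    return (uint16_t)(reload - (ticks % reload));
}

int main(void)
{
    struct pit_channel ch0 = { .load_time_ns = now_ns(), .reload = 0 };

    printf("channel 0 count: %u\n", (unsigned)pit_read_count(&ch0));
    return 0;
}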

> > Does the firmware get loaded as an option ROM or is it a special portion
> > of guest memory that isn't normally reachable?
>
> IMHO it should come with hvmloader.  No need to make it unreachable.

Mmmm.  It's not like the guest can break security if it tampers with the 
device models in its own memory space.
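
If it does come in via hvmloader, the loading step itself ought to look much 
like the existing ROM copies.  Something like the following sketch, where the 
address, names and blob are all made up and guest RAM is faked with an 
allocated buffer so the example stands alone:

/* Hypothetical sketch of hvmloader-style loading of a device-model blob
 * into guest memory; address, names and blob contents are invented. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define DM_FIRMWARE_PADDR 0x000E0000UL  /* example: legacy option-ROM area */

/* hvmloader runs inside the guest, so a guest-physical address is just an
 * ordinary pointer from its point of view. */
static void load_dm_firmware(uint8_t *guest_ram, const void *blob, size_t len)
{
    memcpy(guest_ram + DM_FIRMWARE_PADDR, blob, len);
}

int main(void)
{
    uint8_t *fake_guest_ram = calloc(1, 1 << 20);   /* pretend 1MB of RAM */
    const uint8_t blob[] = { 0x55, 0xAA, 0x04 };    /* ROM-style header */

    if (!fake_guest_ram)
        return 1;
    load_dm_firmware(fake_guest_ram, blob, sizeof(blob));
    free(fake_guest_ram);
    return 0;
}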

Question: how does this compare with using a "stub domain" to run the device 
models?  The previously proposed approach was to switch automatically to the 
stub domain on trapping an IO operation from the HVM guest, and have that stub 
domain run the device models, etc.

You seem to be actually proposing running the code within the HVM guest 
itself.  The two approaches aren't actually that different, IMO, since the 
guest still effectively has two different execution contexts.  It does seem 
to me that running within the HVM guest itself might be more flexible.
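
Either way, the shape of the dispatch side presumably stays much the same: 
pick up the trapped IO request, run the relevant device model, post the 
result back.  Roughly this, with a simplified request structure rather than 
the real ioreq_t layout:

/* Hypothetical, simplified sketch of the device-model dispatch path;
 * the request layout is illustrative, not the real Xen ioreq_t. */
#include <stdint.h>
#include <stdio.h>

enum io_dir { IO_READ, IO_WRITE };

struct io_request {
    uint64_t addr;      /* port or MMIO address that trapped */
    uint64_t data;      /* value written, or place for the value read */
    uint32_t size;      /* access width in bytes */
    enum io_dir dir;
};

#define PIO_LIMIT 0x10000   /* treat addresses below 64K as port IO here */

/* Toy device models standing in for the PIT/RTC/IDE/lapic/... emulation. */
static void handle_pio(struct io_request *req)
{
    printf("pio  %s port 0x%llx size %u\n",
           req->dir == IO_READ ? "read " : "write",
           (unsigned long long)req->addr, (unsigned)req->size);
}

static void handle_mmio(struct io_request *req)
{
    printf("mmio %s addr 0x%llx size %u\n",
           req->dir == IO_READ ? "read " : "write",
           (unsigned long long)req->addr, (unsigned)req->size);
}

/* One iteration of the dispatch loop; the real thing would block on an
 * event channel, loop forever and signal completion back to the guest. */
static void dispatch(struct io_request *req)
{
    if (req->addr < PIO_LIMIT)
        handle_pio(req);
    else
        handle_mmio(req);
}

int main(void)
{
    struct io_request pit  = { .addr = 0x40,       .size = 1, .dir = IO_WRITE };
    struct io_request apic = { .addr = 0xfee00080, .size = 4, .dir = IO_READ };

    dispatch(&pit);     /* would end up in the PIT model */
    dispatch(&apic);    /* would end up in the local APIC model */
    return 0;
}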

A cool little trick that this strategy could enable is to run a full Qemu 
instruction emulator within the device model - I'd imagine this could be 
useful on IA64, for instance, in order to provide support for running legacy 
OSes (e.g. for x86, or *cough* PPC ;-))

Cheers,
Mark

-- 
Dave: Just a question. What use is a unicycle with no seat?  And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!
