
Re: [Xen-devel] Re: Improving hvm IO performance by using self IO emulator (YA io-emu?)

> > Can SATA drives queue multiple outstanding requests?  Thought some newer
> > rev could, but I may well be misremembering - in any case we'd want
> > something that was well supported.
> SATA can, yes.  However, as you mention, SATA is very poorly supported.
> The LSI scsi adapter seems to work quite nicely with Windows and Linux.
> And it supports TCQ.  And it's already implemented :-)  Can't really
> beat that :-)

LSI wins :-)  Supporting TCQ is cool too (but can we actually leverage that 
through the PV interface?)

> > Perhaps the network device ought to be the first to move?
> Can't say.  I haven't done much research on network performance.

Networking was the hardest device to virtualise anyway, so I suspect efficiency may 
matter more here...  although we'd have to test whether it's significant 
compared to other factors (is the device we're emulating at least well suited 
to efficient batching behaviour, or should we be looking at that too?)

> Reflecting is a bit more expensive than doing a stub domain.  There is
> no way to wire up the VMEXITs to go directly into the guest so you're
> always going to have to pay the cost of going from guest => host =>
> guest => host => guest for every PIO.  The guest is incapable of
> reenabling PG on its own hence the extra host => guest transition.

VMEXITs still go to ring 0 though, right?  So you still need the ring 
transition into the guest and back?

What you wouldn't need if leveraging HVM is the pagetable switch - although I 
don't know whether this holds for VT-i, which is somewhat different in design 
from VT-x.

> I know that guest => host => guest typically costs *at least* 1000 nsecs
> on SVM.  A null sysenter syscall (that's host/3 => host/0 => host/3) is
> roughly 75 nsecs.
> So my expectation is that stub domain can actually be made to be faster
> than reflecting.

Interesting.  The code should be fairly common to both though, so maybe we can 
do a bakeoff!


> Regards,
> Anthony Liguori
> > You seem to be actually proposing running the code within the HVM guest
> > itself.  The two approaches aren't actually that different, IMO, since
> > the guest still effectively has two different execution contexts.  It
> > does seem to me that running within the HVM guest itself might be more
> > flexible.
> >
> > A cool little trick that this strategy could enable is to run a full Qemu
> > instruction emulator within the device model - I'd imagine this could be
> > useful on IA64, for instance, in order to provide support for running
> > legacy OSes (e.g. for x86, or *cough* PPC ;-))
> >
> > Cheers,
> > Mark

Dave: Just a question. What use is a unicyle with no seat?  And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!

Xen-devel mailing list
