
Re: [Xen-devel] Questioning the Xen Design of the VMM



On Thu, 2006-08-10 at 17:57 +0300, Al Boldi wrote:

> > > So HVM solves the problem, but why can't this layer be implemented in
> > > software?
> >
> > the short answer at the cpu level is "because of the arcane nature of
> > the x86 architecture" :/
> 
> Which AMDV/IntelVT supposedly solves?

regarding the virtualization issue, yes.

> > once the cpu problem has been solved, you'd need to emulate hardware
> > resources an unmodified guest system attempts to drive. that again takes
> > additional cycles. elimination of the peripheral hardware interfaces by
> > putting the I/O layers on top of an abstract low-level path into the VMM
> > is one of the reasons why xen is faster than others. many systems do
> > this quite successfully, even for 'non-modified' guests like e.g.
> > windows, by installing dedicated, virtualization aware drivers once the
> > base installation went ok.
> 
> You mean "virtualization aware" drivers in the guest-OS?  Wouldn't this 
> amount to a form of patching?

yes, strictly speaking it is a modification, but one based upon usually
well-defined interfaces, and it does not require parsing opcodes and
patching code segments.

otoh, it is one which obviously needs to be repeated for every
additional guest os family.
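
to make the idea concrete, here's a minimal sketch (in C) of the kind of
split-driver interface meant above: the guest-side frontend puts block
requests on a shared-memory ring and notifies the backend, instead of
banging on emulated hardware registers. the structure names and layout
are purely illustrative and do not match xen's actual blkif ABI:

    /* illustrative only -- a simplified shared ring between a guest
     * frontend and a driver backend.  names/layout are made up for this
     * sketch and are not xen's real blkif interface. */

    #include <stdint.h>

    #define RING_SIZE 32                /* must be a power of two */

    struct blk_request {
        uint64_t id;                    /* echoed back in the response  */
        uint64_t sector;                /* first sector of the transfer */
        uint32_t nr_sectors;            /* transfer length              */
        uint32_t data_ref;              /* reference to the data page   */
        uint8_t  write;                 /* 0 = read, 1 = write          */
    };

    struct blk_ring {
        volatile uint32_t req_prod;     /* written by the frontend */
        volatile uint32_t rsp_prod;     /* written by the backend  */
        struct blk_request reqs[RING_SIZE];
    };

    /* frontend side: queue one request on the shared page.  a real
     * implementation issues a memory barrier before bumping req_prod and
     * then signals the backend through an event channel. */
    static int frontend_submit(struct blk_ring *r,
                               const struct blk_request *req)
    {
        uint32_t prod = r->req_prod;

        if (prod - r->rsp_prod >= RING_SIZE)
            return -1;                  /* ring is full */

        r->reqs[prod % RING_SIZE] = *req;
        r->req_prod = prod + 1;
        return 0;
    }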

> > > I'm sure there can't be a performance issue, as this virtualization
> > > doesn't occur on the physical resource level, but is (should be) rather
> > > implemented as some sort of a multiplexed routing algorithm, I think :)
> >
> > few device classes support resource sharing in that manner efficiently.
> > peripheral devices in commodity platforms are inherently single-hosted
> > and won't support unfiltered access by multiple driver instances in
> > several guests.
> 
> Would this be due to the inability of the peripheral to switch contexts fast 
> enough?

maybe. more important: commodity peripherals typically wouldn't
sufficiently implement security and isolation. you certainly won't
'route' arbitrary block I/O from a guest system to your disk controller
without further inspection and translation; it may gladly overwrite
your host partition or whatever resource you granted elsewhere.
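
as a rough illustration of the translation part: the backend typically
exposes only a slice of a host device or image file to each guest, and
every request gets clamped to that slice before real I/O is issued. the
field names below are invented for the sketch:

    /* illustrative backend-side check: a guest "disk" is really a slice
     * (offset + length, in sectors) of a host device or image file.  the
     * request is validated and rebased before it ever reaches the real
     * controller, so it cannot touch the host partition. */

    #include <stdint.h>

    struct guest_extent {
        uint64_t start_sector;          /* where the slice begins on the host */
        uint64_t nr_sectors;            /* size of the slice                   */
    };

    static int translate_request(const struct guest_extent *ext,
                                 uint64_t guest_sector, uint32_t count,
                                 uint64_t *host_sector)
    {
        if (count == 0 || guest_sector + count < guest_sector)
            return -1;                  /* zero-length or wrap-around */
        if (guest_sector + count > ext->nr_sectors)
            return -1;                  /* outside the granted extent */

        *host_sector = ext->start_sector + guest_sector;
        return 0;
    }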

> If so, how about a "AMDV/IntelVT" for peripherals?

good idea, and actually practical. unfortunately, this is where it's
getting expensive.

> > from the vmm perspective, it always boils down to emulating the device.
> > however, with varying degrees of complexity regarding the translation
> > of guest requests to physical access. it depends. ide, afaik is known to
> > work comparatively well.
> 
> Probably because IDE follows a well defined API?

yes. however, i'm not an ide guy. 
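
for what it's worth, the reason ide emulates comparatively well is that
the programming model is small and register-based: guest port I/O to the
legacy command block (ports 0x1F0-0x1F7) traps into the vmm, which just
dispatches on the port number. a very rough sketch, with an invented
state structure and only the register dispatch shown:

    /* rough sketch of trap-and-emulate for the legacy primary IDE channel.
     * a real device model also handles the data port, DMA, interrupts and
     * per-drive state; only the dispatch idea is shown here. */

    #include <stdint.h>

    struct ide_state {
        uint8_t sector_count;
        uint8_t lba_low, lba_mid, lba_high;
        uint8_t drive_head;
        uint8_t status;
    };

    /* called by the vmm when a guest OUT to ports 0x1F0-0x1F7 traps */
    static void ide_port_write(struct ide_state *s, uint16_t port, uint8_t val)
    {
        switch (port) {
        case 0x1F2: s->sector_count = val; break;
        case 0x1F3: s->lba_low      = val; break;
        case 0x1F4: s->lba_mid      = val; break;
        case 0x1F5: s->lba_high     = val; break;
        case 0x1F6: s->drive_head   = val; break;
        case 0x1F7:
            /* command register, e.g. 0x20 = READ SECTORS, 0x30 = WRITE
             * SECTORS: decode it and kick off the emulated operation */
            break;
        default:
            break;
        }
    }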
 
> > an example of an area where it's getting more
> > sportive would be network adapters.
> >
> > this is basically the whole problem when building virtualization layers
> > for cots platforms: the device/driver landscape spreads to infinity :)
> > since you'll have a hard time driving any possible combination by
> > yourself, you need something else to do it. one solution are hosted
> > vmms, running on top of an existing operating system. a second solution
> > is what xen does: offload drivers to a modified guest system which can
> > then carry the I/O load from the additional, nonprivileged guests as
> > well.
> 
> Agreed; so let me rephrase the dilemma like this:
> The PC platform was never intended to be used in a virtualizing scenario, and 
> therefore does not contain the infrastructure to support this kind of a 
> scenario efficiently, but this could easily be rectified by introducing 
> simple extensions, akin to AMDV/IntelVT, on all levels of the PC hardware.
> 
> Is this a correct reading?

yes, with restrictions. at this point in time, it's correct, but not
from an economic standpoint. the whole "virtualization renaissance"
we've been experiencing for the last 3 years or so builds upon the fact
that PC hardware has

        1. become terribly powerful, compared to the workloads the
           software systems running on it actually require.

        2. remained comparatively cheap, as it always has been.

if you start to redesign the I/O system, you're likely to raise the cost
of the overall system.

I/O virtualization down to the device level may come, but as with
processor prices, it's all an "economy of scale".

hardware-assisted virtualization at various places in the architecture,
including I/O, is a well-understood topic, however.

may i again point you to some reading matter in that area:

nair/smith: virtual machines.
http://www.amazon.de/gp/product/1558609105/028-2651277-1478934?v=glance&n=52044011

excellent textbook on many aspects of system virtualization, including
those covered by this conversation so far.

> If so, has this been considered in the Xen design, so as to accommodate any 
> future hwV/VT/VMX extensions easily and quickly?

vmx is all about processor virtualization. additional topics would
include memory virtualization (required, and available in the form of
regular virtual memory, though it might see additional improvements) and
I/O virtualization. i see no reason why those could not be supported by
xen, as they are subsystems which have been handled in a portable and
scalable fashion in the operating system landscape for many years now.
so the topic of how to accommodate changes in that area is not
particularly new.
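
just to tie this back to the processor side: the extensions advertise
themselves via cpuid, so detecting them is trivial. a minimal sketch
using gcc-style inline assembly on x86 (intel vt-x is CPUID.1:ECX bit 5,
amd-v is CPUID.80000001h:ECX bit 2):

    #include <stdint.h>
    #include <stdio.h>

    /* minimal probe for the hardware virtualization extensions discussed
     * above; gcc inline asm, x86 only. */
    static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                      uint32_t *c, uint32_t *d)
    {
        __asm__ __volatile__("cpuid"
                             : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                             : "a"(leaf), "c"(0));
    }

    int main(void)
    {
        uint32_t a, b, c, d;

        cpuid(1, &a, &b, &c, &d);
        printf("intel vt-x (vmx): %s\n", (c & (1u << 5)) ? "yes" : "no");

        cpuid(0x80000001u, &a, &b, &c, &d);
        printf("amd-v (svm):      %s\n", (c & (1u << 2)) ? "yes" : "no");

        return 0;
    }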

regards,
daniel

 
-- 
Daniel Stodden
LRR     -      Lehrstuhl für Rechnertechnik und Rechnerorganisation
Institut für Informatik der TU München             D-85748 Garching
http://www.lrr.in.tum.de/~stodden         mailto:stodden@xxxxxxxxxx
PGP Fingerprint: F5A4 1575 4C56 E26A 0B33  3D80 457E 82AE B0D8 735B



 

