
Re: [Xen-users] Device Drivers in xen, looking also for a white book about how xen works exactly



On Sunday 01 March 2009 17:13:16 Venefax wrote:
> Your explanation misses the difference between fully-virtualized VMs and
> paravirtualized ones, and why the performance of the latter is much, much
> better.

This is true, and the original question does touch on it, since it asks about 
virtual devices.

> Please kindly continue. You are doing a great job summarizing the ongoing
> discussion in the science and art of virtualization.

OK, thanks ;-)

Well...

Paravirtualised VMs on Xen are traditionally VMs that run a fully Xen-aware 
OS.  The kernel (usually Linux but also NetBSD, sometimes Netware, sometimes 
others) has been modified deeply to understand that it is running on a 
hypervisor.  As a result it knows it shouldn't be looking for real physical 
devices like PCI network cards, graphics cards, etc.  Instead, when a 
paravirtualised OS boots, it establishes shared memory communications with an 
already-running VM which is able to provide it with block and network IO.  
Usually this involves connecting to virtual devices provided by dom0.
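To make the idea concrete, here is a toy Python sketch of that split-driver arrangement.  None of these names are Xen's real API (the real protocol uses shared memory pages and event channels); this is just an illustration of the request/response flow between a frontend and a backend:

```python
from collections import deque

# Toy model of the split-driver idea behind paravirtualised IO.  The
# frontend (in the domU) and the backend (in dom0) share a pair of
# queues standing in for the shared-memory ring.
class SharedRing:
    def __init__(self):
        self.requests = deque()   # frontend -> backend
        self.responses = deque()  # backend -> frontend

def frontend_read(ring, sector):
    """PV guest driver: no hardware probing, just queue a request."""
    ring.requests.append(("read", sector))

def backend_service(ring, disk):
    """dom0 backend: pick up pending requests and do the real IO."""
    while ring.requests:
        op, sector = ring.requests.popleft()
        if op == "read":
            ring.responses.append(disk[sector])

# Demo: the guest asks for a sector, dom0 services it.
ring = SharedRing()
disk = {0: b"boot", 7: b"data"}   # stand-in for a real block device
frontend_read(ring, 7)
backend_service(ring, disk)
result = ring.responses.popleft()
print(result)                     # b'data'
```

The point of the sketch is that the guest never touches hardware at all; it only ever posts requests to the ring and waits for responses.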

Dom0 runs a kernel module called a "backend driver".  There's one of these for 
block devices (i.e. virtual disks) and one for virtual network devices.  The 
backend driver receives requests from paravirtualised domUs and multiplexes 
them onto the real hardware.  The backend driver allows the domUs to *safely* 
share the real hardware in the machine so that they can correctly access 
storage and network when they need to.  To make sure that the guests don't try 
to do bad things with the system, the backend driver must check (and sometimes 
alter) their requests.  The backend driver is a kind of low level "proxy" for 
devices.
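A toy illustration of that checking and multiplexing role (the extents and sizes here are made up; this is not how Xen actually partitions storage, just the shape of the safety check):

```python
# Each domU is confined to its own extent of the "real" disk; the
# backend must validate every request before touching real hardware.
DISK = bytearray(b"A" * 512 + b"B" * 512)   # pretend physical disk

EXTENTS = {1: (0, 512), 2: (512, 512)}      # domid -> (start, length)

def backend_read(domid, offset, length):
    start, size = EXTENTS[domid]
    # Safety check: a guest may only address its own virtual disk.
    if offset < 0 or offset + length > size:
        raise PermissionError("request outside guest's virtual disk")
    # Translate the guest-relative offset to the real on-disk offset.
    return bytes(DISK[start + offset : start + offset + length])
```

Both guests believe they have a disk starting at sector zero; the backend's translation and bounds check are what keep them from reading each other's data.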

On the other hand, fully virtualised VMs are ones that believe they are 
running on a real PC.  The VMM stack (Xen+dom0 in this case) must somehow 
provide the illusion of this by emulating real-world devices for disk, network 
card, graphics card, etc.  Fully virtualised VMs don't know how to establish 
shared memory connections to dom0 because that does not exist in a real PC.  
Instead they issue MMIO accesses and port IO to try and discover what 
"hardware" they have available to them.  These accesses can be caught by Xen 
and are passed to a "device model".  The device model is a program running in 
another VM which can emulate the devices the guest OS is expecting.  It 
simulates these devices, then generates the responses the guest OS would 
expect from a real device.
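A toy sketch of that trap-and-dispatch path (port 0x3F8, the classic PC serial port, is used only as a familiar example; this is not Xen's actual dispatch code):

```python
# When a fully virtualised guest's port IO traps, the hypervisor
# forwards it to the device model, which emulates the device.
class EmulatedSerial:
    def __init__(self):
        self.output = []

    def io_write(self, value):
        # Behave as the real device would: record the character the
        # guest believes it sent over the wire.
        self.output.append(chr(value))

DEVICE_MODEL = {0x3F8: EmulatedSerial()}

def trap_port_write(port, value):
    """Called when the guest's OUT instruction is caught."""
    dev = DEVICE_MODEL.get(port)
    if dev is not None:
        dev.io_write(value)
    # Unknown ports are ignored, roughly as real PC hardware would.

# The guest "writes to its serial port"; the device model catches it.
for ch in "hi":
    trap_port_write(0x3F8, ord(ch))
```

The guest executes ordinary IO instructions; only the trap and the lookup table behind it reveal that there is no real serial chip.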

Amongst the devices that are emulated by the Device Model are the disk and 
network devices that the guest will use to do IO.  When the Device Model gets 
requests to read or write these emulated devices, it issues requests "behind 
the scenes" to do *real* IO.  The guest itself does not know how this happens, 
as it believes it is talking directly to the hardware.

There are two ways that the Device Model usually runs.  Originally the Device 
Models were simply a userspace process running in domain 0.  This is separate 
from the "backend" drivers that paravirtualised guests talked to.  The device 
models did IO by accessing files and devices using normal system calls - the 
same as any other process would do.  Some special calls were made available to 
the device model so that it could communicate back to the fully virtualised 
guests.  More recently the "stub domain" or "stubdom" option has been created.  
In this model the device model is effectively a separate virtual machine 
that is solely responsible for providing emulated devices to the fully 
virtualised VM it is paired with.  The stubdom services the IO by using normal 
paravirtualised shared memory to talk to dom0.  Think of the stubdom as a 
"proxy" that turns emulated device requests into paravirtual device requests.
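As a sketch, that translation step looks something like this (all names are illustrative, not Xen's real interfaces):

```python
# The stubdom receives the HVM guest's emulated IDE request and
# re-issues it as an ordinary paravirtual block request on a ring
# shared with dom0's backend driver.
def stubdom_ide_read(sector, pv_ring):
    # The guest programmed what it believes is an IDE controller;
    # dom0 just sees one more PV request.
    pv_ring.append(("read", sector))

pv_ring = []
stubdom_ide_read(42, pv_ring)
```

So from dom0's point of view, a stubdom-backed HVM guest is serviced through exactly the same paravirtual path as any PV guest.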

In either case, the device model is based very heavily on the PC emulation 
code from the Qemu (http://www.qemu.org) project.

Things can be *slightly* more complicated than this, however, since it is 
possible to run paravirtualised drivers inside a full virtualised VM.  This 
blurs the lines between the two classes of VM and includes some of the 
strengths (and weaknesses) of both.  If paravirtualised drivers are made to 
run within a fully virtualised OS then you can get better IO performance since 
the overheads of emulating real devices are removed.  The OS itself does not 
need to be fully modified for Xen in order to take advantage of this, so 
it is an ideal technique to apply to Windows guests.  The GPLPV Windows 
drivers do this, for instance.

I hope this helps to summarise the differences in IO between the different 
kinds of VMs Xen might run.  This description is mostly Xen-specific - there 
are some 
common concepts with other VMM stacks but they tend to vary greatly in exactly 
how they accomplish these goals.

Cheers,
Mark

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Mark Williamson
> Sent: Sunday, March 01, 2009 11:39 AM
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Cc: Patrick Archibal
> Subject: Re: [Xen-users] Device Drivers in xen, looking also for a white
> book about how xen works exactly
>
> Hi there,
>
> > I want to know how it works on xen.
> > I think that :
> >
> >  All drivers are in the hypervisor; when a guest needs to use a device,
> > the guest uses drivers which have been rewritten for Xen in order to
> > contact the hypervisor.  Then the hypervisor, which has all the real
> > (unmodified) drivers, calls the real device?
>
> That's how Xen 1.x worked.  Porting drivers to Xen was ugly and time
> consuming, so we moved it to the other model you describe.  Let's call this
> approach #1.
>
> > or that :
> > The real device driver is contained in the dom0 system; dom0 manages the
> > drivers for all domUs.
>
> This is how Xen worked from Xen 2.x onwards.  Xen 3.x uses a very similar
> architecture but moves *even more* driver stuff into dom0.  Xen knows very
> little about devices.  Let's call this approach #2.
>
> > I have three questions :
> >
> > 1 -  Which is the good scenario ? (if one is good ) :o)
>
> As usual in Operating Systems (and everything else!) the answer is probably
> "It depends" ;-)
>
> If you were making an embedded device, where you have a small selection of
> hardware to expect then it might make sense to take approach #1 and put the
> drivers in the hypervisor.
>
> In Xen's case, the goal is to have a fairly small hypervisor but be able to
> run on a wide variety of PCs.  Running on a wide variety of PCs means that
> lots of drivers need to be available, and the only sensible way to do this
> was to run the drivers in a Linux guest, instead of porting them all to Xen.
>
> Another potential benefit of approach #2 is that your device drivers are
> contained in VMs.  If you go a step further than what you described and make
> it possible to run drivers in domains *other* than dom0 then if a driver
> crashes, it can be killed and restarted.  You can't do this when the drivers
> are in the hypervisor itself (or dom0).
>
> For Xen's purposes, approach #2 has been much better.  Microsoft's Hyper-V
> is taking this approach too, I believe.
>
> Note, however, that VMware ESX - a *very* strong and successful product -
> uses approach #1 with great success.  Arguably Linux-based approaches like
> KVM use approach #1 also, but the difference there is that *Linux is the
> hypervisor*, so drivers do not need to be ported to it.  For this reason,
> approach #1 is not a disadvantage to them (it's even a strength!).
>
> > 2 - If drivers for guests are rewritten to call the hypervisor or dom0
> > instead of the device directly, who develops the drivers? (Xen developers?
> > or mainline kernel developers)
>
> The Xen patches are designed so that the *real* device drivers work
> unmodified (they just need to be compiled for Xen).  To get device access to
> domUs, a set of "Virtual Device" drivers are written.  These are written by
> the Xen developers and are used to give network and block (and framebuffer)
> access to domUs.
>
> > 3- Which is the real utility of dom0 system ? (just I/O and
> > administration of virtual machines)
>
> You can run what you want in dom0 - some people run X.org and use it as
> their "desktop" OS.  For a secure deployment you want a minimal set of
> drivers and administration tools in it.
>
> > Last question: has someone found a good white paper or doc which explains
> > how Xen works? (and where I can probably find answers to my questions?)
>
> Look at these:
> http://wiki.xensource.com/xenwiki/XenArchitecture?action=AttachFile&do=get&target=Xen+Architecture_Q1+2008.pdf
> http://www.cl.cam.ac.uk/research/srg/netos/papers/2003-xensosp.pdf
> http://www.cl.cam.ac.uk/netos/papers/2004-oasis-ngio.pdf
> http://www.cl.cam.ac.uk/research/srg/netos/xen/architecture.html
>
> "Xen and the art of virtualization" describes the Xen 1.x driver
> architecture but it is a useful guide to the rest of the system.  "Safe
> hardware access..." describes the newer architecture.
>
> > Thanks for all this details.
> > Best regards
> >
> > Patrick Archibal
>
> Hope this helps,
> Cheers!
> Mark
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users



