
Re: [Xen-devel] Are drivers in Dom0 virtualized in any way



On Thu, 2007-11-01 at 09:47 -0400, David Stone wrote:
> Hi guys, I have this question: are the drivers (say for a hard disk)
> running in Dom0 virtualized in any way?  For instance if a driver
> wants to set up a DMA transfer, does it have to make a call to the
> hypervisor in order to translate the guest-physical address it wants
> to use as the destination of the DMA into the host-physical address
> that the hypervisor has associated with that guest-physical address?

> I'm not asking about the xen backend drivers in Dom0, but "real" drivers
> that drive the hardware.

regular PV domains like dom0 see machine addresses, as opposed to a
'pseudo-physical' (i.e. linear) virtualized machine address space.

this is different from full virtualization. PV domains are
virtualization-aware in that respect. they don't care if their view of
physical memory is fragmented. (they do work a lot on a linear
representation internally, but that's solely the domain's business.)

note that this does not mean that dom0 is free to just map these
pages at will. generally, the real page tables belong to the VMM. this
applies to domUs as it does to dom0, as a basic security constraint.
all mappings are validated by xen, including those to I/O memory. the
difference is whether they succeed (for a sufficiently privileged domain,
such as dom0) or don't (for an unprivileged domain).
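in pv linux, a page table write therefore ends up as a hypercall which
xen gets to inspect before anything hits the real page tables. a minimal
sketch, assuming the interfaces of the 2.6.18-xen tree (MMU_NORMAL_PT_UPDATE
is the plain 'write this pte' command; the helper name is made up):

    #include <xen/interface/xen.h>     /* struct mmu_update, DOMID_SELF */
    #include <asm/hypercall.h>         /* HYPERVISOR_mmu_update() */

    /* ask xen to install one pte. 'pte_maddr' is the machine address
     * of the pte slot, 'pte_val' the mfn-based pte plus flags. xen
     * validates both; an i/o mapping only succeeds for a domain
     * holding the required i/o privileges. */
    static int set_one_pte(uint64_t pte_maddr, uint64_t pte_val)
    {
            struct mmu_update u;
            int done;

            u.ptr = pte_maddr | MMU_NORMAL_PT_UPDATE;
            u.val = pte_val;

            return HYPERVISOR_mmu_update(&u, 1, &done, DOMID_SELF);
    }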

that said, there are variants regarding who is mapping which address
space to what. but the above is the regular case which applies to a
dom0.

> Or, does this translation happen lower down, in the Dom0 operating
> system itself, which is of course virtualized?  Like maybe in some
> kind of DMA API that Linux provides?

uhm, well, both are true, depending on what you look at.

the linux code generally applies some variable degree of translation
between a page and the 'bus address' delivered to the device. this is
due to the fact that, even for native kernels, on some architectures
the host view of physical memory can differ from the 'bus view'.
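to make that concrete, here's a sketch of what a driver of that era does
to hand a buffer to a device, through the regular linux dma api (note
that dma_mapping_error() still took only the handle back then). under a
pv dom0 the returned handle ends up being a machine address, but the
driver never has to know:

    #include <linux/dma-mapping.h>

    /* map a kmalloc'd buffer for device-read dma, hand the
     * resulting bus address to the device, then tear down. */
    static int send_to_device(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t bus;

            bus = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(bus))
                    return -EIO;

            /* ... program 'bus' into the device's dma engine ... */

            dma_unmap_single(dev, bus, len, DMA_TO_DEVICE);
            return 0;
    }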

xen exploits that by re-defining some of those macros in a pv-specific
fashion. and some of those translations are indeed as complex as they look.
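for instance, roughly (names as in the xen sparse tree, quoted from
memory, so treat this as a sketch rather than the literal source):

    /* native i386: the bus address is just the physical address */
    #define virt_to_bus(x)      virt_to_phys(x)

    /* xen pv: same macro, but it now hands out a machine address */
    #define virt_to_bus(x)      phys_to_machine(__pa(x))

so driver code written against the 'bus address' abstraction picks up
machine addresses for free.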

if you're studying the dom0 code, you'll see lots of variables named
'mfn' and 'pfn', and macros like phys_to_machine() and the like.
the pfn is the linear representation, with properties equivalent to what
a native kernel would call a 'pfn'. even dom0 needs such a linear
representation, e.g. as an index into the pageinfo vector. but you'd
also see calls to macros like 'pfn_to_mfn(..)' in the preparation of
hypercall arguments.
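a simplified sketch of what those look like in the i386 xen tree (the
real code also deals with auto-translated guests and foreign frames, so
take this as the idea, not the literal source):

    /* the p2m table: one mfn per pfn, maintained by the guest */
    extern unsigned long *phys_to_machine_mapping;

    static inline unsigned long pfn_to_mfn(unsigned long pfn)
    {
            return phys_to_machine_mapping[pfn];
    }

    /* translate a pseudo-physical address to a machine address
     * by swapping the frame number and keeping the page offset */
    static inline unsigned long phys_to_machine(unsigned long phys)
    {
            return (pfn_to_mfn(phys >> PAGE_SHIFT) << PAGE_SHIFT)
                   | (phys & ~PAGE_MASK);
    }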

the general rule of thumb is: for a bare dom0 on i386, memory is managed
in terms of pfns, to keep the core of the kernel happy, and communicated
in terms of mfns. the translation is rather trivial, and there's not
much additional translation going on beyond that.

regards,
daniel

-- 
Daniel Stodden
LRR     -      Lehrstuhl für Rechnertechnik und Rechnerorganisation
Institut für Informatik der TU München             D-85748 Garching
http://www.lrr.in.tum.de/~stodden         mailto:stodden@xxxxxxxxxx
PGP Fingerprint: F5A4 1575 4C56 E26A 0B33  3D80 457E 82AE B0D8 735B


