
Re: Still struggling to understand Xen



On 02.07.20 14:00, Biff Eros wrote:
Xen seems to be different to most other forms of virtualisation in the
way it presents hardware to the guest.  For so-called HVM guests I
understand everything:

The hypervisor, in conjunction with dom0, provides disk and network
devices on PCI buses that can be viewed and enumerated with standard
off-the-shelf Linux drivers and tools.  This is all good.  My
confusion kicks in when the subject of PV drivers comes up.

From what I understand (not clearly documented anywhere that I could
find), the hypervisor/dom0 combination somehow switches mode in
response to something the DomU guest does.  What exactly?  Don't know.
But by the time you've booted using the HVM hardware it seems the door
is shut, and any attempt to load front-end drivers will then result in
'device not found' messages or whatever.  That is, assuming my kernel
is configured correctly.

So this is presumably why most guests 'connect' to the PV back-end in
the initrd.  I couldn't really understand if it's the loading of the
conventional SCSI driver, or the detection of a SCSI device, or the
opening of a conventional SCSI device to mount as root that shuts the
above 'door'.  Unfortunately there isn't much documentation about
kernel configurations for Xen and what documentation I found seemed to
be out of date.

A typical HVM domain is booted with the emulated devices active (e.g.
hda, hdb, ...). The switch to pv devices is normally done in the initrd,
before mounting root, so that root can be mounted on the pv device (for
performance reasons).

When pv-drivers are active, the guest kernel writes to a special
IO-port emulated by qemu in order to deactivate ("unplug") the emulated
devices. This makes sure there are no ambiguous devices (otherwise each
device with a pv-driver would show up twice, once via the emulated
driver and once via the pv-driver). Unplugging the emulated devices can
be avoided via the guest kernel boot parameter "xen_emul_unplug" (e.g.
"xen_emul_unplug=never").
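
As a rough sketch only (mirroring what arch/x86/xen/platform-pci-unplug.c
does in the Linux kernel, not a reference implementation), the unplug
handshake amounts to something like this:

  /* Illustration of the unplug protocol; the constants (port 0x10,
   * magic 0x49d2, unplug mask bits) are the ones used by the Linux
   * guest code, but check the kernel source for the full version. */
  #include <stdint.h>
  #include <sys/io.h>                 /* inw()/outw(), needs iopl(3) */

  #define XEN_IOPORT_BASE      0x10
  #define XEN_IOPORT_MAGIC_VAL 0x49d2
  #define UNPLUG_ALL_IDE_DISKS (1 << 0)
  #define UNPLUG_ALL_NICS      (1 << 1)

  static int xen_unplug_emulated(uint16_t what)
  {
      /* qemu answers with the magic value if it implements the
       * unplug protocol at all. */
      if (inw(XEN_IOPORT_BASE) != XEN_IOPORT_MAGIC_VAL)
          return -1;                  /* keep the emulated devices */

      /* Writing the mask makes qemu remove the emulated IDE disks
       * and/or NICs, so only the pv frontends remain visible. */
      outw(what, XEN_IOPORT_BASE);
      return 0;
  }

The real kernel code additionally checks the protocol version and
announces the driver product/build number before writing the unplug
mask.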

New pv-devices can be added at runtime, but they need to be assigned to
the guest from dom0 first. New devices and their parameters are
advertised via Xenstore (you need the xenbus driver in the guest for
that purpose); see the sketch below.
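
To illustrate what "advertised via Xenstore" means from the guest's
point of view, here is a small user-space sketch using libxenstore; the
path "device/vbd" and the minimal error handling are simplifying
assumptions, the real frontends do this in the kernel via xenbus:

  /* Minimal sketch: list the pv block devices dom0 has assigned to
   * this guest by reading the guest-visible part of Xenstore.
   * Assumes the xenbus/xenstore drivers are active in the guest and
   * that the Xen libraries providing <xenstore.h> are installed. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <xenstore.h>

  int main(void)
  {
      struct xs_handle *xs = xs_open(0);
      unsigned int i, num;
      char **devs;

      if (!xs)
          return 1;

      /* Each entry below device/vbd is one virtual block device; its
       * sub-keys (backend, ring-ref, event-channel, ...) are the
       * parameters frontend and backend use to set up the device. */
      devs = xs_directory(xs, XBT_NULL, "device/vbd", &num);
      for (i = 0; devs && i < num; i++)
          printf("vbd %s\n", devs[i]);

      free(devs);
      xs_close(xs);
      return 0;
  }

Assigning the device from dom0 (e.g. with xl block-attach) is what
creates those entries in the first place.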

It's also unclear to me if the back-end drivers in a typical dom0 that
you might get from for example XCP-ng, or XenServer, or even AWS can
somehow be incompatible with the latest and greatest domU Linux
kernels.  Is there some kind of interface versioning or are all
versions forward and backward compatible?

The basic protocol is compatible. Optional features are advertised by
the backend in Xenstore, so the frontend knows which features it is
allowed to use. The frontend will then set feature values in Xenstore to
tell the backend how it wants to operate the device.
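
As an illustration only (using "feature-persistent" from the block
protocol as the example, and libxenstore instead of the in-kernel
xenbus_read()/xenbus_printf() calls the real frontend uses), the
negotiation boils down to:

  /* Sketch of the feature negotiation: the backend advertises a
   * feature under its Xenstore path, the frontend answers under its
   * own path.  Paths are passed in by the caller; error handling is
   * reduced to the bare minimum. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <xenstore.h>

  static int negotiate_persistent(struct xs_handle *xs,
                                  const char *be_path,   /* backend dir */
                                  const char *fe_path)   /* frontend dir */
  {
      char key[256];
      unsigned int len;
      char *val;
      int supported = 0;

      /* Does the backend offer persistent grants? */
      snprintf(key, sizeof(key), "%s/feature-persistent", be_path);
      val = xs_read(xs, XBT_NULL, key, &len);
      if (val) {
          supported = (atoi(val) == 1);
          free(val);
      }

      /* Tell the backend whether the frontend will use the feature. */
      snprintf(key, sizeof(key), "%s/feature-persistent", fe_path);
      xs_write(xs, XBT_NULL, key, supported ? "1" : "0", 1);

      return supported;
  }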

I've been through pretty much all drivers related to Xen, compiled
them into my kernel and selected the /dev/xvda1 device on boot, but it's
still not working for me: the Xen 'hardware' is not being detected, so I
would appreciate any guidance you can offer.

Are you using Linux or another OS?

In Linux you need the Xen platform PCI device driver (see the source
drivers/xen/platform-pci.c in the kernel tree). Its platform_pci_probe()
contains all the function calls needed to initialize the basic
environment for pv-drivers.
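
As a starting point (option names taken from reasonably recent kernels,
so please double-check against your kernel version), an HVM guest using
pv-drivers typically wants at least:

  CONFIG_HYPERVISOR_GUEST=y
  CONFIG_PARAVIRT=y
  CONFIG_XEN=y
  CONFIG_XEN_PVHVM=y
  CONFIG_XEN_BLKDEV_FRONTEND=y
  CONFIG_XEN_NETDEV_FRONTEND=y

With those in place the platform PCI device should be found
automatically; grepping dmesg for "xen" will show whether the basic
environment and the frontends came up.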


Juergen



 

