Re: [Xen-devel] xen_emul_unplug on xen 4.1, HVM guest 2.6.38
On 26.10.2011 18:43, Ian Campbell wrote:
> On Wed, 2011-10-26 at 17:25 +0100, Alex Bligh wrote:
>> Ian,
>>
>>> No, this will disable PV drivers.
>>
>> I can confirm that our testing illustrates this :-(
>>
>>> The decision to unplug is a kernel-side decision, and in PVHVM Linux
>>> kernels it is not currently possible to have both types of devices by
>>> default, due to the risk of data loss if the guest is not correctly
>>> configured (i.e. the kernel can't tell if it is mounting the same
>>> filesystem via two paths). The xen_emul_unplug option is the current
>>> way you can override this once you have confirmed that your guest
>>> configuration is not dangerous. I'm afraid this necessarily involves
>>> guest config and guest admin interaction.
>>>
>>> In principle we might be able to extend the unplug protocol (which
>>> would involve patches to qemu, the kernel(s) and the toolstack) to
>>> allow devices to be marked as not needing to be unplugged. Someone
>>> would have to send patches, though, and it would open up a way for
>>> people to lose data, so we'd need to be careful.
>>>
>>> I'm sure the unplug protocol is documented somewhere in the source
>>> tree, but I can't for the life of me find it :-(
>>
>> So, the issue is this. We have thousands (literally) of disks in use
>> by third parties on Xen 3.3. Some are Windows, some are ancient Linux,
>> some are modern Linux, etc. The hypervisor has no way of knowing
>> whether the images are going to use /dev/sda or /dev/xvda (i.e.
>> emulated or PV) drivers. Indeed, the most common Linux case is that
>> grub uses the emulated devices to load the kernel, which then uses
>> /dev/xvda as its root device, i.e. both are used (but not
>> simultaneously).
>>
>> We need to have the Xen platform PCI device present so that PV
>> drivers operate (in both new and old kernels).
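[For reference, the override Ian mentions is the xen_emul_unplug= kernel boot parameter. The values below are as documented in Documentation/kernel-parameters.txt for 2.6.38-era kernels; check your own tree before relying on them:

```
# On the guest kernel command line (e.g. in grub), one or more of:
#   ide-disks      - unplug emulated IDE disks
#   aux-ide-disks  - unplug all emulated IDE disks except the primary master
#   nics           - unplug emulated NICs
#   unnecessary    - admin asserts unplug is not needed; use both device
#                    types (dangerous if misconfigured)
#   never          - never unplug; keep only emulated devices
xen_emul_unplug=never
```

Note that this has to be set inside each guest, which is exactly the per-guest admin interaction Ian refers to.]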
>> But as modern Linux kernels detect the unplug functionality, they
>> will unplug the emulated devices and then fail to boot, because (for
>> instance) under Xen 3.3 using /dev/sdaX to access (say) your /boot
>> partition worked perfectly well. What we need is a switch to revert
>> to the old Xen 3.3 (pre-unplug) behaviour, so that any Linux kernel
>> will see the same set of devices. I cannot believe this is a unique
>> requirement for people attempting a Xen 3.3 to Xen 4.1 migration.
>
> I'm a bit fuzzy on the details, but I'm not sure what this has to do
> with the host; the device naming and behaviour on unplug are
> kernel-side things. I'd expect that if /dev/sdaX as /boot worked on
> 3.3, it would work on 4.1 too. (I believe you that it doesn't work,
> I'm just wondering aloud what I'm missing.)
>
> Can you give us the specifics of a setup which fails, e.g. a complete
> guest cfg file, the kernel version, command line options, /etc/fstab,
> dmesg, etc.?
>
>> I think this is in xen_unplug_emulated_devices() in
>> arch/x86/xen/platform-pci-unplug.c.
>>
>> This uses check_platform_magic(), which I have appended. In order to
>> avoid unplugging (without relying on the boot line), I need this to
>> return a non-zero value (XEN_PLATFORM_ERR_MAGIC is irrelevant, as
>> xen_emul_unplug is 0 by assumption).
>>
>> I can achieve that by either (a) returning a bad magic number,
>> (b) making the host 'blacklist' the product (how does that work?),
>> or (c) using a protocol value of (say) 0. I take it Xen 3.3 simply
>> returns a bad magic number, as I don't think XEN_IOPORT_MAGIC
>> existed in 3.3. As far as I can tell, XEN_IOPORT_MAGIC is only used
>> for PCI unplug.
>>
>> So, is the correct approach to disable XEN_IOPORT_MAGIC (or rather
>> make it return a different value) depending on a configuration
>> option? If so, I am happy to submit a patch to do that. Or can I do
>> this without a patch by "blacklisting" everything? (I'm not sure how
>> that is done.)
> Hmm, yes, I think the special treatment of a XEN_IOPORT_MAGIC
> mismatch on the kernel side is what I was missing.
>
> It might make sense to have a guest-level config option which
> disables these magic ports, i.e. makes them return 0xffff like they
> would have done in 3.3 (I think 0xffff is what you'll get from an
> invalid port in general).
>
>> Out of interest, with a default guest Ubuntu Natty install CD, using
>> the default Xen 4.1 settings, we are seeing the guest (a) unplugging
>> the emulated devices (fine), then (b) failing to find the emulated
>> devices, and (c) the install failing. Is that to be expected?
>
> Sounds like an Ubuntu bug to me, but I don't follow Ubuntu closely
> enough to know whether it is known or not.

At least one part is not Ubuntu-specific: the unplug logic decides to
unplug emulated devices based on having the platform PCI and blkfront
drivers *available* (built-in or as modules). But later on, the
blkfront driver ignores all devices that are not *named* in a way that
maps to the xvd major. That leaves you without any usable devices when
you named your disk hda in the config file and did not prevent
unplugging.

The other part of the problem is that even when you name the disk xvda
in the config file, the installer does not know about blkfront. This
is Ubuntu-specific, and we either need to have blkfront built in or
put it into a special udeb, which would be special handling just for
the installer.

Still, I would love to see this unplug handling become a bit more
obvious. If unplug was successful, then blkfront should not ignore the
devices. Or maybe just make the config more
what-you-write-is-what-you-get: having hd or sd there gives you only
emulated devices, and xvd gives you PV devices.

-Stefan

> Ian.
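[To make the naming point above concrete, the difference is in the disk stanza of the guest config file. The backing paths below are hypothetical; the syntax is the xend-era Python-style cfg discussed in this thread:

```python
# Disk named hda: after a successful unplug, blkfront ignores the
# device (it does not map to the xvd major) -> no usable disks.
disk = [ 'phy:/dev/vg0/guest,hda,w' ]

# Disk named xvda: maps to the xvd major, so blkfront claims it
# once the emulated devices are unplugged.
disk = [ 'phy:/dev/vg0/guest,xvda,w' ]
```

]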
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel