
Re: [Xen-users] opensolaris/illumos on PVHVM under 4.x.y?



[I will inline just your comments, given the length of the original email...]

> PVHVM is not just getting a PV guest booting under an HVM container.
> There are some differences in the way PV and PVHVM perform certain
> operations, so it's not just a matter of adding the PV drivers; for
> example, the event channel callback used in PVHVM is different from
> the one used in PV, as is the way grant frames are mapped.

Although distinguishing the various flavors, including the upcoming PVH,
may not be easy, I think my post was consistent with this.  Just to be clear,
I am looking for something like running Windows under Xen: running the OS
in an HVM DomU, and using at least netfront and blkfront PV drivers to
improve I/O performance.

Any discussion of PV in my earlier post was merely to document where
I had succeeded or failed to get Illumos running under Xen 4.3 unstable.

> AFAIK Illumos
> doesn't have PVHVM support, so it will take some work to get it running.

I do not think that is correct; at least until fairly recently, PVHVM
appears to have been working under Illumos.  For example, in October 2010,
someone commented:

"Are you looking for Solaris as a dom0?  If not then why not just run it
in an HVM container as a domU. The PV drivers should 'just work'."
(http://comments.gmane.org/gmane.os.solaris.opensolaris.xen/5890)

In my attempts to run Illumos under HVM, it is clear that the kernel
executes code to identify whether it is running under PV, under HVM, or
natively.  Based on which of these three platforms it decides it is running
on, it appears to load modules/drivers from the respective i86xpv, i86hvm,
and i86pc subdirectories of /platform.
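
As an aside, at least part of that decision ultimately comes down to the
hypervisor cpuid leaves (my original message quoted below mentions the
0x40000000 leaf).  Purely to illustrate the mechanism -- this is not illumos
code, just a minimal userland sketch assuming gcc on x86 -- the Xen
signature check looks roughly like this:

    /*
     * Sketch only: detecting Xen via the hypervisor cpuid leaf 0x40000000.
     * This is not illumos code, just an illustration of the signature
     * check (build with gcc on x86).
     */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char sig[13];

        /* EBX/ECX/EDX of leaf 0x40000000 hold the hypervisor signature. */
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(sig + 0, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        sig[12] = '\0';

        /* Xen advertises "XenVMMXenVMM"; EAX reports the highest leaf. */
        printf("hypervisor signature: \"%s\"\n", sig);
        return strcmp(sig, "XenVMMXenVMM") == 0 ? 0 : 1;
    }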

More recently, it would seem that someone is running Illumos in HVM with
XenServer 6.1 (which I think is based on Xen 4.1.x):
"Xen HVM hangs during boot on Citrix XenServer 6.1 (probably older versions
have the same problem) if apix is enabled. . . . Workaround is to
enable_apix=0."
(https://www.illumos.org/issues/3605)
https://illumos.org/issues/3551 also reflects recent use under HVM.  I do
not know whether XenServer has some "secret sauce" responsible for improved
compatibility with HVM Illumos.
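
If I am reading that issue correctly, the apix workaround would normally be
applied as a kernel tunable in /etc/system, along the lines of the snippet
below; I have not verified this myself, so the exact variable name should
be double-checked against the issue tracker:

    * Fall back from apix to the older interrupt handling code
    set apix_enable = 0

(A reboot is needed for /etc/system changes to take effect.)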

Anyway, I figured my xl config file might just need a tweak, and that
someone who is getting Illumos to work under a recent version of Xen would
have a working config to share.
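
For concreteness, the sort of config I have in mind looks roughly like the
following.  Treat it strictly as a sketch: the name, disk path, ISO, and
bridge are placeholders, and the acpi and xen_platform_pci settings merely
reflect the experiments described in my earlier message, not a known-good
recipe.

    # HVM guest config sketch for an illumos DomU -- placeholder values
    # throughout, not a verified working configuration.
    name    = "illumos-hvm"
    builder = "hvm"
    memory  = 2048
    vcpus   = 2
    disk    = [ 'phy:/dev/vg0/illumos0,hda,w', 'file:/isos/oi-151a7.iso,hdc:cdrom,r' ]
    vif     = [ 'bridge=xenbr0' ]
    # Try the install ISO first, then the disk.
    boot    = "dc"
    # acpi=0 per the behavior described in my earlier message.
    acpi    = 0
    vnc     = 1
    serial  = 'pty'
    # Keep the xenpci platform device so the PV frontends can attach.
    xen_platform_pci = 1

Corrections from anyone with a config that boots an illumos HVM guest
cleanly would be very welcome.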

> If you are interested in running ZFS backends why don't you take a look
> at FreeBSD PVHVM? It comes with ZFS, frontends and backends, and runs in
> PVHVM mode without problems.

It appears that FreeBSD is the only option, short of trying out the efforts
at a native Linux port of ZFS.  I was hoping to have more than one option
to choose from, particularly as the ZFS codebase is more "dialed in" on
OpenSolaris.  Stability and speed seem to be fairly recent arrivals for ZFS
under FreeBSD, although a working blkback implementation is certainly
helpful.

> There was someone from the Illumos community working on getting back the
> full Xen PV support to Illumos, both as Dom0/DomU, but I think the work
> is stalled right now.

Those efforts seem to have been directed primarily at reincorporating older Dom0
code that was ripped out, and getting Dom0 support up and running on Illumos.
However, they seem beyond stalled.  From what I gather from the Illumos IRC
logs, that person switched employers and abandoned their efforts on this
project, and may have all but given up on more general Illumos development
work as well.
It also appears that the most recent Xen 4.3 development update dropped this
from the list of possible upcoming features.

Thank you,
Eric

On Mon, Mar 18, 2013 at 5:20 AM, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> On 14/03/13 23:30, Eric Shelton wrote:
>> [sorry for the dupe, the previous message was sent out prematurely...]
>>
>> I was hoping to implement a storage domain using blkback to serve up
>> ZFS zvols, to see how it works out.  Is anyone successfully running
>> opensolaris with PV drivers under HVM on Xen 4.x.y?
>
> PVHVM is not just getting a PV guest booting under an HVM container.
> There are some differences in the way PV and PVHVM perform certain
> operations, so it's not just a matter of adding the PV drivers; for
> example, the event channel callback used in PVHVM is different from
> the one used in PV, as is the way grant frames are mapped. AFAIK Illumos
> doesn't have PVHVM support, so it will take some work to get it running.
>
> If you are interested in running ZFS backends why don't you take a look
> at FreeBSD PVHVM? It comes with ZFS, frontends and backends, and runs in
> PVHVM mode without problems.
>
>> I am running on an
>> AMD IOMMU capable computer.  The closest I have gotten is:
>> (1) coaxing openindiana into booting up as a PV guest (not a smooth
>> out-of-the-box type of install)
>> (2) with acpi=0, an HVM boot will go pretty far, but there is a bug
>> present in 151a7, in which the hvm_sd and sd modules fail to load due
>> to being improperly built and take down the boot.  A patch for this
>> issue was entered into the illumos-gate repository, but does not
>> appear to have been used for any current distribution.
>> (3) with acpi=0 and use of the kernel debugger (::bp get_hwenv, :c,
>> platform_type/W 0, ::delete 1, :c), I can get openindiana to boot up a
>> plain PC kernel.  I think I recall network and/or PV driver issues
>> being a problem.
>> (4) I tried out some illumos-gate kernel build/installs, but the
>> typical response from a new build is some kind of a hang or lockup
>> that is unresponsive to the F1-A kernel debugger.  However, I am
>> definitely stumbling my way through this, so I may not have run the
>> kernel build or install correctly.
>>
>> OmniOS dies pretty quickly, whether under PV or HVM.
>>
>> I am beginning to get the impression that there was an active and
>> functional PV driver under Xen 3, but that the illumos PV drivers did not
>> keep up with various changes required to interoperate with Xen 4 (or
>> at least the more recent Xen releases).
>
> There was someone from the Illumos community working on getting back the
> full Xen PV support to Illumos, both as Dom0/DomU, but I think the work
> is stalled right now.
>
>> Even if I were to consider
>> KVM, although virtio-blk looks like it may be OK, a proposed
>> virtio-net driver has not been accepted into illumos-gate, and is of
>> unknown quality (it appears to be a slightly tweaked version of a
>> prototype driver that was admitted to be incomplete by its original
>> author).  There appear to be suggestions that PVHVM has been
>> maintained and works in Solaris 11, but I would much more strongly
>> prefer using one of the illumos-based distributions.
>>
>>
>> If there is anyone running opensolaris outside of PV (in other words,
>> under HVM) under Xen 4.x.y, what is your Xen guest config, and which
>> distribution are you using?
>>
>>
>> Also, in sorting out how I might boot a plain non-HVM kernel (see use
>> of the kernel debugger under (3) above), it looks like the
>> "xen_platform_pci" parameter in xl.cfg-type files does nothing as the
>> code presently stands - definitely when using qemu-upstream, and I
>> think also under qemu-traditional.  The xenpci device continues to
>> show up on the virtual PCI bus under "xen_platform_pci=0".  However,
>> even if the xenpci device was toggled on/off, it turns out illumos
>> would still boot an HVM kernel, as it uses the availability of the
>> 0x40000000 cpuid info to identify when it is running under Xen (hence
>> the debugger procedure set out in (3) above).  If xen_platform_pci is
>> set to 0, is this Xen cpuid functionality supposed to be disabled?
>>
>> _______________________________________________
>> Xen-users mailing list
>> Xen-users@xxxxxxxxxxxxx
>> http://lists.xen.org/xen-users
>>
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

