
Re: [Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session



On 07/17/2017 11:37 AM, Roger Pau Monné wrote:
> On Mon, Jul 17, 2017 at 11:10:50AM +0100, Andrew Cooper wrote:
>> On 17/07/17 10:36, Roger Pau Monné wrote:
>>> Hello,
>>>
>>> I didn't actually take notes, so this is off the top of my head. If
>>> anyone took notes or remembers something differently, please feel
>>> free to correct me.
>>>
>>> This is the output from the PVH toolstack interface session. The
>>> participants were: Ian Jackson, Wei Liu, George Dunlap, Vincent
>>> Legout and myself.
>>>
>>> We agreed on the following interface for xl configuration files:
>>>
>>>     type = "hvm | pv | pvh"
>>>
>>> This is going to supersede the "builder" option present in xl. The
>>> two options are mutually exclusive, and "builder" is going to be
>>> marked as deprecated once the new "type" option is implemented.
>>>
>>> In order to decide how to boot the guest, the following options will
>>> be available. Note that they are mutually exclusive.
>>
>> I presume you mean the kernel/ramdisk/cmdline are mutually exclusive
>> with firmware?
> 
> Yes, sorry, that's confusing. You use either kernel, firmware or
> bootloader.
> 
>>>     kernel = "<path>"
>>>     ramdisk = "<path>"
>>>     cmdline = "<string>"
>>>
>>> <path>: relative or full path in the filesystem.
>>
>> Please can xl or libxl's (not entirely sure which) path handling be
>> fixed as part of this work.  As noted in
>> http://xenbits.xen.org/docs/xtf/index.html#errata, path handling is
>> inconsistent as to whether it allows paths relative to the .cfg file. 
>> All paths should support being relative to the cfg file, as that is the
>> most convenient for the end user to use.
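
(To illustrate the behaviour being asked for: given a config at, say,
/etc/xen/guests/foo.cfg containing

    kernel  = "vmlinuz"
    ramdisk = "initrd.img"

both files would be looked up under /etc/xen/guests/, regardless of the
directory xl is invoked from. The paths here are made up for the
example.)
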
>>
>>> Boot directly into the kernel/ramdisk provided. In this case the
>>> kernel must be available somewhere in the toolstack filesystem
>>> hierarchy.
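
As a sketch of the direct-boot case in the proposed syntax (paths and
command line are invented for the example):

    type    = "pvh"
    kernel  = "/boot/guest/vmlinuz"
    ramdisk = "/boot/guest/initrd.img"
    cmdline = "root=/dev/xvda1 console=hvc0"
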
>>>
>>>     firmware = "ovmf | uefi | bios | seabios | rombios | pvgrub"
>>
>> What is the purpose of having uefi and bios in there?  ovmf is the uefi
>> implementation, and {rom,sea}bios are the bios implementations.
>>
>> How does someone specify ovmf + seabios as a CSM?
> 
> Hm, I have no idea. How is this usually done: is ovmf built with
> seabios support, or is it fetched by ovmf from the uefi partition?
> 
>>> This allows loading a firmware inside the guest and running it in
>>> guest mode. Note that the firmware needs to support booting in PVH
>>> mode.
>>>
>>> There's no plan to support any bios or pvgrub for PVH at the moment;
>>> those options are simply listed for completeness. Also, generic
>>> options like uefi or bios would be aliases to a concrete
>>> implementation chosen by the toolstack, i.e. most likely
>>> uefi -> ovmf and bios -> seabios.
>>
>> Oh - here is the reason.  -1 to this idea.  We don't want to explicitly
>> let people choose options which are liable to change under their feet if
>> they were to boot the same .cfg file on a newer version of Xen, as their
>> VM will inevitably break.
> 
> Noted. I think not allowing bios or uefi is fine; I would rather
> document in the man page that our recommended bios implementation is
> seabios and our recommended uefi implementation is ovmf.

We need both "I don't care much, just choose the best one" options and
"I want this specific version and don't want it to change" options.

You accurately describe the problem with having *only* "This is the
general idea but the implementation can change under my feet" options.
But there's also a problem with having only "I want this specific
version" options: Namely, that a lot of people really don't care much
and want the most reasonably up-to-date version, and don't want to know
the details below.

Having both allows us to be reasonably user-friendly both to the "just
make it work" people and to the people who want to "get their hands
greasy" and know all the technical inner workings.
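
Concretely, in the proposed syntax that would mean supporting both of
the following (nothing here is implemented yet):

    # "just make it work": the toolstack picks its current default UEFI
    firmware = "uefi"

    # "give me this implementation and don't change it under me"
    firmware = "ovmf"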


>> Where does hvmloader fit into this mix?
> 
> Right, I wasn't planning on anyone using hvmloader, but there's no reason
> to prevent it. I guess it would fit into the "firmware" option, but
> then you should be able to use something like: firmware = "hvmloader +
> ovmf".
> 
> What would be the purpose of using hvmloader inside a PVH guest?
> Hardware initialization?

AFAICT hvmloader is an internal implementation detail; the user should,
in general, not need to know anything about it (except in cases like
XTF, where you're deliberately abusing the system).

And as Roger said, the `firmware=` option should allow a user to specify
their own binary.
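
For example (the path is entirely made up, and whether firmware= would
take a bare path alongside the named values is exactly the kind of
detail still to be decided):

    # a known implementation...
    firmware = "ovmf"
    # ...or, hypothetically, a user-supplied PVH-capable binary
    firmware = "/usr/local/lib/xen/my-pvh-firmware.bin"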

>> Instead of kernel= and ramdisk=, it would be better to generalise to
>> something like modules=[...], perhaps with kernel being an alias for
>> module[0] etc.  hvmloader already takes multiple binaries using the PVH
>> module system, and PV guests are perfectly capable of multiple modules
>> as well.  One specific example where an extra module would be very
>> helpful is for providing the cloudinit install config file.
> 
> I might prefer to keep the current kernel = "..." and convert ramdisk
> into a list named modules. Do you think (this also applies to xl/libxl
> maintainers) we could simply not support the ramdisk option for PVH?

Well, since we have to parse the `ramdisk=` option indefinitely anyway,
one suggestion might be to have `ramdisk=` be an alias for `modules=`,
but restricted to a single element.

The disadvantage of that is you'd have to make sure to sort out the
ambiguity of what happens when you specify both ramdisk and modules; I
would vote for having xl throw an error in that case.
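
As a sketch of how that could look (the modules option is only proposed
here, nothing is implemented yet, and all the paths are invented):

    type    = "pvh"
    kernel  = "/boot/guest/vmlinuz"
    # modules generalises ramdisk; extra entries could carry e.g. a
    # cloud-init config
    modules = [ "/boot/guest/initrd.img", "/etc/xen/guests/cloudinit.cfg" ]

    # ramdisk= would stay valid as a single-element alias:
    #   ramdisk = "/boot/guest/initrd.img"
    # but specifying both ramdisk and modules would be an error.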

> IMHO that might cause some headaches for people converting from
> classic PV to PVH. In that case (if we have to support ramdisk
> anyway), I wouldn't make the introduction of the modules option
> mandatory for this work. I'm trying to limit this to something
> sensible that hopefully can be merged into 4.10.

I tend to agree.

 -George
