
Re: [Xen-devel] Xen PVH support in grub2



On 11/03/2017 10:59 AM, Juergen Gross wrote:
> On 03/11/17 15:36, Boris Ostrovsky wrote:
>> On 11/03/2017 10:24 AM, Juergen Gross wrote:
>>> On 03/11/17 15:07, Roger Pau Monné wrote:
>>>> On Fri, Nov 03, 2017 at 01:50:11PM +0100, Juergen Gross wrote:
>>>>> On 03/11/17 13:17, Roger Pau Monné wrote:
>>>>>> On Fri, Nov 03, 2017 at 01:00:46PM +0100, Juergen Gross wrote:
>>>>>>> On 29/09/17 17:51, Roger Pau Monné wrote:
>>>>>>>> On Fri, Sep 29, 2017 at 03:33:58PM +0000, Juergen Gross wrote:
>>>>>>>>> On 29/09/17 17:24, Roger Pau Monné wrote:
>>>>>>>>>> On Fri, Sep 29, 2017 at 02:46:53PM +0000, Juergen Gross wrote:
>>>>>>>>>> Then, I also wonder whether it would make sense for this grub to load
>>>>>>>>>> the kernel using the PVH entry point or the native entry point. Would
>>>>>>>>>> it be possible to boot a Linux kernel up to the point where cpuid can
>>>>>>>>>> be used inside of a PVH container?
>>>>>>>>> I don't think today's Linux allows that. This has been discussed
>>>>>>>>> very thoroughly at the time Boris added PVH V2 support to the kernel.
>>>>>>>> OK, I'm not going to insist on that, but my plans for FreeBSD is to
>>>>>>>> make the native entry point capable of booting inside of a PVH
>>>>>>>> container up to the point where cpuid (or whatever method) can be used
>>>>>>>> to detect the environment.
>>>>>>> Looking more thoroughly into the Linux boot code I think this could
>>>>>>> work for Linux, too. But only if we can tell PVH from HVM in the guest.
>>>>>>> How would you do that in FreeBSD? Via flags in the boot params? This
>>>>>>> would then have to be done in the boot loader (e.g. grub or OVMF).
>>>>>> My plan was not to differentiate between HVM and PVH, but rather to
>>>>>> make use of the ACPI information in order to decide which devices are
>>>>>> available and which are not inside of a PVH guest.
>>>>>>
>>>>>> For example in the FADT "IA-PC Boot Architecture Flags" field for PVH
>>>>>> we already set "VGA Not Present" and "CMOS RTC Not Present". There
>>>>>> might be other flags/fields that must be set, but I would like to
>>>>>> avoid having a CPUID bit or similar saying "PVH", because then Xen
>>>>>> will be tied to always providing the same set of devices in PVH
>>>>>> containers.
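
[The FADT-based approach described above could look roughly like this in
guest code. A minimal sketch: the bit positions follow the ACPI
specification's IA-PC Boot Architecture Flags definitions, and
`iapc_boot_arch` stands in for however the kernel fetches that FADT
field.]

```c
#include <stdbool.h>
#include <stdint.h>

/* IA-PC Boot Architecture Flags, FADT "IAPC_BOOT_ARCH" field
 * (bit positions per the ACPI specification). */
#define ACPI_FADT_LEGACY_DEVICES  (1u << 0)  /* legacy ISA devices present */
#define ACPI_FADT_8042            (1u << 1)  /* i8042 controller present */
#define ACPI_FADT_NO_VGA          (1u << 2)  /* VGA not present */
#define ACPI_FADT_NO_CMOS_RTC     (1u << 5)  /* CMOS RTC not present */

/* Decide device availability from the FADT instead of a "PVH" flag. */
static bool vga_present(uint16_t iapc_boot_arch)
{
    return !(iapc_boot_arch & ACPI_FADT_NO_VGA);
}

static bool cmos_rtc_present(uint16_t iapc_boot_arch)
{
    return !(iapc_boot_arch & ACPI_FADT_NO_CMOS_RTC);
}
```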
>>>>> Why? This would depend on the semantics tied to the flag. It could just
>>>>> mean "don't assume availability of legacy stuff" (e.g. BIOS calls).
>>>>>
>>>>> Linux would have a problem with the ACPI approach, as it would try BIOS
>>>>> calls long before it initializes its ACPI handling. So in Linux I'd
>>>>> need another way to tell I'm running in PVH mode, e.g. a "no legacy"
>>>>> bit in the Xen HVM cpuid leaf.
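
[A sketch of what such a check could build on: the standard Xen CPUID
signature scan, plus the proposed feature bit. Note the "no legacy" bit
does not exist yet; the bit position used below is made up purely for
illustration, since the mail only proposes that such a bit be added to
the Xen HVM feature leaf (base + 4).]

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <cpuid.h>

/* Hypothetical "no legacy devices" bit in the Xen HVM feature leaf
 * (base + 4). This is the proposal under discussion, not existing ABI. */
#define XEN_HVM_CPUID_NO_LEGACY_HYPOTHETICAL (1u << 5)

/* True iff EBX/ECX/EDX carry the "XenVMMXenVMM" hypervisor signature. */
static bool leaf_is_xen(uint32_t ebx, uint32_t ecx, uint32_t edx)
{
    uint32_t sig[3];
    memcpy(sig, "XenVMMXenVMM", sizeof(sig));
    return ebx == sig[0] && ecx == sig[1] && edx == sig[2];
}

/* Scan the hypervisor leaf range for Xen; returns the base leaf or 0. */
static uint32_t xen_cpuid_base(void)
{
    for (uint32_t base = 0x40000000; base < 0x40010000; base += 0x100) {
        uint32_t eax, ebx, ecx, edx;
        __cpuid(base, eax, ebx, ecx, edx);
        if (leaf_is_xen(ebx, ecx, edx))
            return base;
    }
    return 0;
}

/* Query the hypothetical bit; only meaningful when base != 0. */
static bool xen_no_legacy(uint32_t base)
{
    uint32_t eax, ebx, ecx, edx;
    __cpuid(base + 4, eax, ebx, ecx, edx);
    return eax & XEN_HVM_CPUID_NO_LEGACY_HYPOTHETICAL;
}
```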
>>>> If you are booted from the PVH entry point, there's no BIOS or UEFI
>>>> (ie: no firmware), if you are booted from the BIOS entry point there's
>>>> a BIOS and the same applies to UEFI. How does Linux differentiate
>>>> whether it's booted from BIOS or UEFI?
>>> They use different entries.
>> In fact, we had a discussion with Matt Fleming (Linux EFI maintainer) to
>> see if we can use EFI entry point to also be able to boot PVH guest but
>> found some issues with that approach, which is why we ended up with a
>> dedicated PVH entry point.
>>
>> I am curious though, Juergen --- what do we need besides zeropage to
>> allow us to boot PVH from startup_64?
> Oh, you are right. I managed to get lost in the early boot paths.
>
> Only setting up the hypercall page seems to be missing, but this should be
> doable. And setting xen_pvh, of course.
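
[For reference, the hypercall page setup mentioned above is small: Xen's
CPUID leaf base+2 reports the number of hypercall pages in EAX and an
MSR index in EBX, and the guest writes the guest-physical address of the
page it wants populated to that MSR. A minimal sketch; `wrmsr_stub`
stands in for the privileged `wrmsr` instruction so the flow can be
shown outside a guest.]

```c
#include <stdint.h>

#define PAGE_SHIFT 12

/* Stand-in for the privileged WRMSR instruction; real early-boot code
 * would execute `wrmsr` directly. Recording the write keeps the sketch
 * runnable outside a guest. */
static uint32_t written_msr;
static uint64_t written_val;
static void wrmsr_stub(uint32_t msr, uint64_t val)
{
    written_msr = msr;
    written_val = val;
}

/* Install a single hypercall page: the MSR index comes from EBX of
 * CPUID leaf base+2; the value written is the guest-physical address
 * of the page Xen should populate with hypercall stubs. */
static void xen_setup_hypercall_page(uint32_t msr, uint64_t hcall_page_pfn)
{
    wrmsr_stub(msr, hcall_page_pfn << PAGE_SHIFT);
}
```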

That last part was actually my question --- do we need to have xen_pvh
set before we get to xen-specific code for the first time (which I think
is init_hypervisor_platform()) from startup_64?

Because if we do --- who will set it?


-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
