
Re: [Xen-devel] [RFC v1 4/8] x86/init: add linker table support



On January 20, 2016 2:12:49 PM PST, "Luis R. Rodriguez" 
<mcgrof@xxxxxxxxxxxxxxxx> wrote:
>On Wed, Jan 20, 2016 at 1:41 PM, H. Peter Anvin <hpa@xxxxxxxxx> wrote:
>> On 01/20/16 13:33, Luis R. Rodriguez wrote:
>>>
>>> That's correct for PV and PVH; likewise, when qemu is required for
>>> HVM, qemu could set it. I have the qemu change done but that should
>>> only cover HVM. A common place to set this as well could be the
>>> hypervisor, but currently the hypervisor doesn't set any
>>> boot_params; instead a generic struct is passed and the kernel code
>>> (for any OS) is expected to interpret it and then set the required
>>> values for the OS in the init path. Long term, though, if we wanted
>>> to merge init further, one way could be to have the hypervisor just
>>> set the zero page cleanly for the different modes. If we needed more
>>> data beyond hardware_subarch we also have hardware_subarch_data, a
>>> u64, and how that is used would be up to the subarch. In Xen's case
>>> it could do what it wants with it. That would still mean perhaps
>>> defining, as part of a Xen boot protocol, a place where Xen-specific
>>> code can count on finding the extra Xen data passed by the
>>> hypervisor, the xen_start_info. That is, if we wanted to merge init
>>> paths this is something to consider.
>>>
>>> One thing I considered, on the question of who should set the zero
>>> page for Xen with the prospect of merging inits (or at least this
>>> subarch) for both the short and the long term, is the obvious
>>> implications for the hypervisor / kernel / qemu combinations
>>> required if the subarch is needed. Having it set in the kernel is an
>>> obvious immediate choice for PV / PVH, but it means we can't merge
>>> init paths completely (down to asm inits); we'd still be able to
>>> merge some C init paths, though the first entry would still be
>>> different. Having the zero page set by the hypervisor would go a
>>> long way, but it would require a hypervisor change.
>>>
>>> These prospects are worth discussing, especially in light of Boris's
>>> hvmlite work.
>>>
>>
>> The above doesn't make sense to me.  hardware_subarch is really used
>> when the boot sequence is somehow nonstandard.
>
>Thanks for the feedback -- as it stands today hardware_subarch is only
>used by lguest, Moorestown, and CE4100; we have a definition for Xen,
>but it is not used yet. Its documentation does refer to differences in
>a paravirtualized environment and gives a few examples, but it doesn't
>go into much depth about restrictions, so the limits on how we could
>use it were not clear to me.
>
>>  HVM probably doesn't need that.
>
>Today HVM doesn't need it, but perhaps that is because it has not
>needed changes early in boot. Will it, or could it? I'd even invite us
>to consider the same for other hypervisors or PV hypervisors. I'll
>note that things like cpu_has_hypervisor() or derivatives
>(kvm_para_available(), which is now used even in drivers, see
>sound/pci/intel8x0.c) require init_hypervisor_platform() to have run;
>in terms of the x86 init sequence that happens pretty late, in
>setup_arch(). Should code need hypervisor information any time before
>that, it has no generic option available.
>
>I'm fine if we want to restrict hardware_subarch, but I'll note that
>the semantics were not explicit enough to delineate clear differences,
>and I just wanted to highlight the current early-boot restriction of
>cpu_has_hypervisor().
>
>  Luis

Basically, if the hardware is enumerable using standard PC mechanisms (PCI, 
ACPI) and doesn't need a special boot flow, it should use type 0.
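
For reference, a minimal sketch of the fields under discussion, assuming the
layout documented in Documentation/x86/boot.txt and
arch/x86/include/uapi/asm/bootparam.h. The X86_SUBARCH_* names and the
xen_fill_boot_params() helper below are illustrative only (the boot protocol
itself just defines the numeric values); this is a sketch of how a Xen entry
stub, or qemu for HVM, might fill the zero page, not existing code:

#include <linux/types.h>
#include <asm/bootparam.h>	/* struct boot_params, struct setup_header */
#include <xen/interface/xen.h>	/* struct start_info */

/*
 * Subarch values as documented in Documentation/x86/boot.txt.
 * The symbolic names here are illustrative; the protocol only
 * defines the numbers.
 */
#define X86_SUBARCH_PC		0	/* standard PC: PCI/ACPI enumerable */
#define X86_SUBARCH_LGUEST	1
#define X86_SUBARCH_XEN		2
#define X86_SUBARCH_INTEL_MID	3	/* Moorestown MID */
#define X86_SUBARCH_CE4100	4	/* CE4100 TV platform */

/*
 * Hypothetical helper: how a Xen PV/PVH entry stub (or qemu, for HVM)
 * might fill the zero page so generic x86 init can key off the
 * subarch, stashing xen_start_info in hardware_subarch_data.
 */
static void xen_fill_boot_params(struct boot_params *bp,
				 struct start_info *info)
{
	bp->hdr.hardware_subarch = X86_SUBARCH_XEN;
	bp->hdr.hardware_subarch_data = (u64)(unsigned long)info;
}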
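
And to illustrate the early-boot restriction mentioned above:
cpu_has_hypervisor() is only meaningful after init_hypervisor_platform() has
run in setup_arch(), whereas the zero page is available as soon as it has been
copied into boot_params. A minimal sketch, reusing the illustrative subarch
value from the previous snippet; early_boot_is_xen() is a hypothetical helper,
not an existing kernel function:

#include <linux/types.h>
#include <asm/bootparam.h>	/* struct boot_params */

extern struct boot_params boot_params;	/* zero page copy, arch/x86/kernel/setup.c */

/*
 * Hypothetical early check: usable before init_hypervisor_platform()
 * runs (and so before cpu_has_hypervisor() means anything), provided
 * the boot path has already copied the zero page into boot_params.
 */
static inline bool early_boot_is_xen(void)
{
	return boot_params.hdr.hardware_subarch == 2 /* X86_SUBARCH_XEN */;
}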
-- 
Sent from my Android device with K-9 Mail. Please excuse brevity and formatting.

