
Re: [Xen-devel] [PATCH for-4.10.0-shim-comet] x86/guest: use the vcpu_info area from shared_info



>>> On 17.01.18 at 12:04, <george.dunlap@xxxxxxxxxx> wrote:
> On 01/17/2018 10:57 AM, Roger Pau Monne wrote:
>> If using fewer than 32 vCPUs (XEN_LEGACY_MAX_VCPUS).
>> 
>> This is a workaround that should allow booting the shim on hypervisors
>> without commit "x86/upcall: inject a spurious event after setting
>> upcall vector", as long as fewer than 32 vCPUs are assigned to the
>> shim.
>> 
>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>> ---
>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
>> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
>> Cc: George Dunlap <george.dunlap@xxxxxxxxxx>
>> ---
>> ONLY apply to the 4.10.0-shim-comet branch. Long term we don't want to
>> carry this patch since it would prevent the vcpu_info mapping code
>> from being tested unless a shim with > 32 vCPUs is created, which
>> doesn't seem very common.
> 
> Just to fill this out a bit:
> 
> Without this patch, people need to reboot their L0 hypervisor in order
> to use Comet.
> 
> With this patch, people only need to compile and update their L0 tools
> to use Comet; they can avoid rebooting their L0 hypervisor.
> 
> Roger would like to avoid checking this in to staging, because he's
> afraid it might make the >32vcpu path bitrot.
> 
> The risk of checking it into the Comet branches but not staging is that
> if people update their shim to "Rudolph" (4.11) without rebooting their
> host, things may unexpectedly not work.  I think that's something we can
> live with.

I agree. I'm not sure whether this patch, going only into that branch,
needs an x86 maintainer's ack, but if it does, feel free to add mine.

Jan
