Re: [Xen-devel] [PATCH linux v2 0/9] xen: pvhvm: support bootup on secondary vCPUs
On Mon, 25 Jul 2016, Vitaly Kuznetsov wrote:
> Julien Grall <julien.grall@xxxxxxx> writes:
>
> > Hello,
> >
> > On 25/07/16 14:39, Vitaly Kuznetsov wrote:
> >> Julien Grall <julien.grall@xxxxxxx> writes:
> >>
> >>> Hi David,
> >>>
> >>> On 25/07/16 13:38, David Vrabel wrote:
> >>>> On 30/06/16 16:56, Vitaly Kuznetsov wrote:
> >>>>> It may happen that Xen's and Linux's ideas of vCPU id diverge. In particular, when we crash on a secondary vCPU we may want to do kdump, and unlike plain kexec where we do migrate_to_reboot_cpu(), we try booting on the vCPU which crashed. This doesn't work very well for PVHVM guests, as we have a number of hypercalls where we pass the vCPU id as a parameter. These hypercalls either fail or do something unexpected. To solve the issue we need to have a mapping between Linux's and Xen's vCPU ids.
> >>>>>
> >>>>> This series solves the issue for x86 PVHVM guests. PV guests don't (and probably won't) support kdump, so I always assume Xen's vCPU id == Linux's vCPU id. ARM guests will probably need to get a proper mapping once we start supporting kexec/kdump there.
> >>>>
> >>>> Applied to for-linus-4.8, thanks.
> >>>
> >>> It would have been nice to send a ping before applying. This patch series contains Xen ARM code which has not been acked by Stefano, nor had feedback from the ARM side.
> >>>
> >>> For instance, given that all the hypercalls represent a "vcpu id" using "uint32_t", it is a bit weird to use "int" to define xen_vcpu_id (see patch #3).
> >>
> >> CPU id is usually 'int' in Linux and now we pass it to all hypercalls as it is.
> >
> > Well, we need to differentiate between the internal representation of the CPU, which is based on the boot order, and the logical CPU ID. For instance on ARM, the logical CPU ID may not be contiguous, nor 0 for the first CPU.
> >
> > From my understanding, the macros in patch #3 will be used at the last minute when preparing the hypercall data. IMHO this is very similar to a logical ID and is defined as uint32_t by the hypercall ABI.
> >
> > Although, I agree that currently we use the internal CPU id on ARM, which is very unfortunate because this value is based on the order of the nodes in the device tree.
> >
> > One way to abolish it on ARM would be to use the MPIDR (or at least a part of it) for the VCPU ID.
>
> I probably know too little about ARM, but it seems that unlike x86 we don't need the knowledge of _other_ vCPU ids before we start them, so MPIDR looks very suitable.
>
> >> It is a bit more convenient in the mapping I introduce as we can set it to a negative value to indicate there is no mapping available. I can definitely change that and use something like U32_MAX-1 instead, but I'm not sure it is worth it...
> >
> > I looked at the definition of cpu_acpi_id on x86, which returns x86_cpu_to_acpiid, which is defined as uint32_t.
> >
> > So you are assuming that it will never be possible to have an ID > 0x80000000.
> >
> > Also, this may not be true on ARM depending on how we define the VCPU mapping. We could decide to use the MPIDR, which in this case may be considered "negative".
>
> While we're not obliged to have the same type for xen_vcpu_id on all arches, I see no point in diverging without a reason. I can do v3 making the mapping uint32_t

I agree that making the mapping uint32_t would be desirable. It would even make sense from the int types point of view in Linux.

> and indicating the missing value as U32_MAX-1 if nobody is against the idea.

Why U32_MAX-1? (int)-1 is (unsigned)U32_MAX. Even XEN_INVALID_MAX_VCPU_ID is defined as (~0U).
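[Editor's note] To make the type discussion concrete, here is a minimal, self-contained C sketch of the scheme being converged on: the Linux-to-Xen vCPU id mapping is kept as uint32_t, and "no mapping available" is marked with (~0U), i.e. U32_MAX, the unsigned view of (int)-1, so no separate U32_MAX-1 sentinel is needed. This is not the code from the series; the names XEN_VCPU_ID_INVALID and MAX_CPUS and the plain array are illustrative placeholders (the kernel series uses per-cpu infrastructure, and the helper shown as xen_vcpu_nr() is only assumed here to mirror what patch #3 introduces).

/* Sketch only: Linux CPU number -> Xen vCPU id mapping with an
 * all-ones "invalid" marker, as discussed in the thread above. */
#include <stdint.h>
#include <stdio.h>

#define MAX_CPUS            8
#define XEN_VCPU_ID_INVALID (~0U)   /* same value as (uint32_t)-1 == U32_MAX */

/* Hypothetical mapping table; the real kernel would use a per-cpu variable. */
static uint32_t xen_vcpu_id[MAX_CPUS];

/* Translate a Linux CPU number into the id Xen expects in hypercall arguments. */
static uint32_t xen_vcpu_nr(int cpu)
{
        return xen_vcpu_id[cpu];
}

int main(void)
{
        int cpu;

        /* Start with every slot marked "no mapping available". */
        for (cpu = 0; cpu < MAX_CPUS; cpu++)
                xen_vcpu_id[cpu] = XEN_VCPU_ID_INVALID;

        /*
         * Pretend the guest discovered that Linux CPU 0 is Xen vCPU 3,
         * e.g. because a kdump kernel was entered on a secondary vCPU.
         */
        xen_vcpu_id[0] = 3;

        for (cpu = 0; cpu < 2; cpu++) {
                uint32_t id = xen_vcpu_nr(cpu);

                if (id == XEN_VCPU_ID_INVALID)
                        printf("CPU %d: no Xen vCPU id known\n", cpu);
                else
                        printf("CPU %d: pass %u to hypercalls\n", cpu, id);
        }
        return 0;
}

The appeal of (~0U) as the marker is that it can never collide with a real id that callers would pass to a hypercall, and it matches how Xen itself expresses an invalid id (the quoted XEN_INVALID_MAX_VCPU_ID definition), which is why the reply questions the need for U32_MAX-1.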