
Re: [Xen-devel] [PATCH v2 10/11] pvh: Send an SCI on VCPU hotplug event



>>> On 15.11.16 at 15:57, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 11/15/2016 04:36 AM, Jan Beulich wrote:
>>>>> On 09.11.16 at 15:39, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> --- a/xen/arch/x86/domctl.c
>>> +++ b/xen/arch/x86/domctl.c
>>> @@ -1443,6 +1443,18 @@ long arch_do_domctl(
>>>              break;
>>>  
>>>          d->arch.avail_vcpus = num;
>>> +
>>> +        /*
>>> +         * For PVH guests we need to send an SCI and set enable/status
>>> +         * bits in GPE block (DSDT specifies _E02, so it's bit 2).
>>> +         */
>>> +        if ( is_hvm_domain(d) && !has_ioreq_cpuhp(d) )
>>> +        {
>>> +            d->arch.hvm_domain.acpi_io.gpe[2] =
>>> +                d->arch.hvm_domain.acpi_io.gpe[0] = 4;
>>> +            send_guest_vcpu_virq(d->vcpu[0], VIRQ_SCI);
>>> +        }
>> The use of d->vcpu[0] here supports this not being a per-vCPU
>> vIRQ. And it has two problems: you don't check it to be non-NULL
>> (see send_guest_global_virq(), which you want to make
>> non-static), and what do you do if vCPU0 is offline (i.e. you also
>> want to generalize send_guest_global_virq())?
> 
> IIRC in Linux you can't offline the BP (boot CPU), but I guess in
> general it should be supported.

I thought that changed a year or two ago (see the
*_HOTPLUG_CPU0 config options).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
