
Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests



>>> On 15.11.16 at 15:55, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 11/15/2016 04:24 AM, Jan Beulich wrote:
>>>>> On 09.11.16 at 15:39, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> --- a/xen/arch/x86/hvm/ioreq.c
>>> +++ b/xen/arch/x86/hvm/ioreq.c
>>> @@ -1383,6 +1383,78 @@ static int hvm_access_cf8(
>>> +static int acpi_ioaccess(
>>> +    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>>> +{
>>> +    unsigned int i;
>>> +    unsigned int bits = bytes * 8;
>>> +    unsigned int idx = port & 3;
>>> +    uint8_t *reg = NULL;
>>> +    bool is_cpu_map = false;
>>> +    struct domain *currd = current->domain;
>>> +
>>> +    BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
>>> +                 (ACPI_GPE0_BLK_LEN_V1 != 4));
>>> +
>>> +    if ( has_ioreq_cpuhp(currd) )
>>> +        return X86EMUL_UNHANDLEABLE;
>> Hmm, so it seems you indeed mean the flag to have the inverse sense
>> of what I would have expected, presumably in order for HVM guests
>> to continue to have all emulation flags set. I think that's a little
>> unfortunate, or at least the names of the flag and predicate are
>> somewhat misleading (as there's no specific CPU-hotplug-related ioreq).
> 
> The other option was XEN_X86_EMU_ACPI. Would it be better?

As that's a little too wide (and I think someone else had also
disliked it for that reason), how about XEN_X86_EMU_ACPI_FF
(for "fixed features"), or if that's still too wide, break things up
(PM1a, PM1b, PM2, TMR, GPE0, GPE1)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
