[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests



>>> On 09.11.16 at 15:39, <boris.ostrovsky@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1383,6 +1383,78 @@ static int hvm_access_cf8(
> +static int acpi_ioaccess(
> +    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
> +{
> +    unsigned int i;
> +    unsigned int bits = bytes * 8;
> +    unsigned int idx = port & 3;
> +    uint8_t *reg = NULL;
> +    bool is_cpu_map = false;
> +    struct domain *currd = current->domain;
> +
> +    BUILD_BUG_ON((ACPI_PM1A_EVT_BLK_LEN != 4) ||
> +                 (ACPI_GPE0_BLK_LEN_V1 != 4));
> +
> +    if ( has_ioreq_cpuhp(currd) )
> +        return X86EMUL_UNHANDLEABLE;

Hmm, so it seems you indeed mean the flag to have the inverse sense
of what I would have expected, presumably in order for HVM guests
to continue to have all emulation flags set. I think that's a little
unfortunate, or at least the names of the flag and predicate are
somewhat misleading (as there's no specific CPU-hotplug-related ioreq).

In any event, with the handler getting registered only when 
!has_ioreq_cpuhp() I think this should be an ASSERT() - iirc the
emulation flags can be set only once at domain creation time.

> +    switch (port)
> +    {
> +    case ACPI_PM1A_EVT_BLK_ADDRESS_V1 ...
> +        (ACPI_PM1A_EVT_BLK_ADDRESS_V1 + ACPI_PM1A_EVT_BLK_LEN - 1):

Please align the opening parenthesis with the respective expression
on the previous line (and really the parentheses aren't needed here,
so you could just align the two starting A-s).

> +        reg = currd->arch.hvm_domain.acpi_io.pm1a;
> +        break;
> +    case ACPI_GPE0_BLK_ADDRESS_V1 ...
> +        (ACPI_GPE0_BLK_ADDRESS_V1 + ACPI_GPE0_BLK_LEN_V1 - 1):
> +        reg = currd->arch.hvm_domain.acpi_io.gpe;
> +        break;
> +    case ACPI_CPU_MAP ... (ACPI_CPU_MAP + ACPI_CPU_MAP_LEN - 1):
> +        is_cpu_map = true;
> +        break;
> +    default:
> +        return X86EMUL_UNHANDLEABLE;
> +    }

Blank lines between non-fall-through case statements please.

> +    if ( bytes == 0 )
> +        return X86EMUL_OKAY;
> +
> +    if ( dir == IOREQ_READ )
> +    {
> +        *val &= ~((1U << bits) - 1);

Undefined behavior for bits == 32.

> +        if ( is_cpu_map )
> +        {
> +            unsigned int first_bit, last_bit;
> +
> +            first_bit = (port - ACPI_CPU_MAP) * 8;
> +            last_bit = min(currd->arch.avail_vcpus, first_bit + bits);
> +            for ( i = first_bit; i < last_bit; i++ )
> +                *val |= (1U << (i - first_bit));
> +        }
> +        else
> +            memcpy(val, &reg[idx], bytes);
> +    }
> +    else
> +    {
> +        if ( is_cpu_map )
> +            /*
> +             * CPU map is only read by DSDT's PRSC method and should never
> +             * be written by a guest.
> +             */
> +            return X86EMUL_UNHANDLEABLE;
> +
> +        /* Write either status or enable register. */
> +        if ( (bytes > 2) || ((bytes == 2) && (port & 1)) )
> +            return X86EMUL_UNHANDLEABLE;
> +
> +        if ( idx < 2 ) /* status, write 1 to clear. */
> +        {
> +            reg[idx] &= ~(*val & 0xff);
> +            if ( bytes == 2 )
> +                reg[idx + 1] &= ~((*val >> 8) & 0xff);
> +        }
> +        else           /* enable */
> +            memcpy(&reg[idx], val, bytes);
> +    }

Overall - how does this implementation match up with the following
requirements from the spec:

● Reserved or unimplemented bits always return zero (control or enable).
● Writes to reserved or unimplemented bits have no effect.

To me it looks as if at this point all bits are reserved/unimplemented.

>      return X86EMUL_OKAY;

So regarding my comment on the previous patch: That one should
fail the call unconditionally, and the one here should switch to
returning success.

Jan
