
Re: [Xen-devel] [PATCH v3 02/11] acpi: Define ACPI IO registers for PVH guests



>>> On 22.11.16 at 16:52, <boris.ostrovsky@xxxxxxxxxx> wrote:

> 
> On 11/22/2016 10:13 AM, Jan Beulich wrote:
>>>>> On 22.11.16 at 15:53, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>
>>>
>>> On 11/22/2016 09:07 AM, Jan Beulich wrote:
>>>>>>> On 22.11.16 at 13:28, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>>
>>>>>
>>>>> On 11/22/2016 05:37 AM, Jan Beulich wrote:
>>>>>>>>> On 21.11.16 at 22:00, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>>>>> @@ -119,11 +122,33 @@ typedef struct buffered_iopage buffered_iopage_t;
>>>>>>>
>>>>>>>  /* Compatibility definitions for the default location (version 0). */
>>>>>>>  #define ACPI_PM1A_EVT_BLK_ADDRESS    ACPI_PM1A_EVT_BLK_ADDRESS_V0
>>>>>>> +#define ACPI_PM1A_EVT_BLK_LEN        0x04
>>>>>>> +#define ACPI_PM1A_EVT_BLK_BIT_OFFSET 0x00
>>>>>>>  #define ACPI_PM1A_CNT_BLK_ADDRESS    ACPI_PM1A_CNT_BLK_ADDRESS_V0
>>>>>>> +#define ACPI_PM1A_CNT_BLK_LEN        0x02
>>>>>>> +#define ACPI_PM1A_CNT_BLK_BIT_OFFSET 0x00
>>>>>>>  #define ACPI_PM_TMR_BLK_ADDRESS      ACPI_PM_TMR_BLK_ADDRESS_V0
>>>>>>> +#define ACPI_PM_TMR_BLK_LEN          0x04
>>>>>>> +#define ACPI_PM_TMR_BLK_BIT_OFFSET   0x00
>>>>>>>  #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
>>>>>>>  #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
>>>>>>>
>>>>>>> +#if __XEN_INTERFACE_VERSION__ >= 0x00040800
>>>>>>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>>>>> +
>>>>>>> +/* Location of online VCPU bitmap. */
>>>>>>> +#define XEN_ACPI_CPU_MAP             0xaf00
>>>>>>> +#define XEN_ACPI_CPU_MAP_LEN         ((HVM_MAX_VCPUS + 7) / 8)
>>>>>>> +
>>>>>>> +#if XEN_ACPI_CPU_MAP + XEN_ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
>>>>>>> +#error "XEN_ACPI_CPU_MAP is too big"
>>>>>>> +#endif
>>>>>>> +
>>>>>>> +/* GPE0 bit set during CPU hotplug */
>>>>>>> +#define XEN_GPE0_CPUHP_BIT           2
>>>>>>> +
>>>>>>> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>>>>>>> +#endif /* __XEN_INTERFACE_VERSION__ >= 0x00040800 */
>>>>>>>
>>>>>>>  #endif /* _IOREQ_H_ */
>>>>>>
>>>>>> I'm afraid there's been some misunderstanding here during the v2
>>>>>> discussion: New hypervisor/tools only definitions don't need an
>>>>>> additional interface version guard. It's instead the pre-existing
>>>>>> ones which should be removed from the namespace by adding
>>>>>> such a guard.
>>>>>
>>>>> We want to keep all of them now. Shouldn't then the interface check be
>>>>> added after we move to Andrew's suggestion of dynamic IO ranges?
>>>>
>>>> No, we want them gone from the public interface for new
>>>> consumers. The only valid consumer has always been the tool
>>>> stack; it's just that this hadn't been properly expressed in the
>>>> header. Hence the need to add __XEN__ / __XEN_TOOLS__ guards
>>>> around new additions, and additionally interface version guards
>>>> around existing ones.
>>>
>>> pmtimer.c uses some of those macros, so how will wrapping an interface
>>> version check around the existing definitions continue to work once the
>>> interface version is updated, unless dynamic IO ranges are also added by
>>> that time?
>>
>> The question makes me suspect you still don't understand how things
>> should look. I'll try to sketch this out in a simplified manner:
>>
>> #if defined(__XEN__) || defined(__XEN_TOOLS__) || interface-version < 4.9
>> existing #define-s
>> #endif
>>
>> #if defined(__XEN__) || defined(__XEN_TOOLS__)
>> new #define-s
>> #endif
> 
> OK, now I understand what you want.
> 
> Unfortunately, I still don't understand why this is needed before IO range
> reservation is implemented. Sorry for being obtuse.

The "why" is the same as for everything else we want to hide from
the general public, but make available to tools: Be able to modify
it without going through versioning hoops.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

