
Re: [Xen-devel] [PATCH v2 02/11] acpi: Define ACPI IO registers for PVH guests



On 09/11/16 15:14, Boris Ostrovsky wrote:
> On 11/09/2016 09:59 AM, Andrew Cooper wrote:
>>> diff --git a/xen/include/public/hvm/ioreq.h b/xen/include/public/hvm/ioreq.h
>>> index 2e5809b..e3fa704 100644
>>> --- a/xen/include/public/hvm/ioreq.h
>>> +++ b/xen/include/public/hvm/ioreq.h
>>> @@ -24,6 +24,8 @@
>>>  #ifndef _IOREQ_H_
>>>  #define _IOREQ_H_
>>>  
>>> +#include "hvm_info_table.h" /* HVM_MAX_VCPUS */
>>> +
>>>  #define IOREQ_READ      1
>>>  #define IOREQ_WRITE     0
>>>  
>>> @@ -124,6 +126,17 @@ typedef struct buffered_iopage buffered_iopage_t;
>>>  #define ACPI_GPE0_BLK_ADDRESS        ACPI_GPE0_BLK_ADDRESS_V0
>>>  #define ACPI_GPE0_BLK_LEN            ACPI_GPE0_BLK_LEN_V0
>>>  
>>> +#define ACPI_PM1A_EVT_BLK_LEN        0x04
>>> +#define ACPI_PM1A_CNT_BLK_LEN        0x02
>>> +#define ACPI_PM_TMR_BLK_LEN          0x04
>>> +
>>> +/* Location of online VCPU bitmap. */
>>> +#define ACPI_CPU_MAP                 0xaf00
>>> +#define ACPI_CPU_MAP_LEN             ((HVM_MAX_VCPUS / 8) + \
>>> +                                      ((HVM_MAX_VCPUS & 7) ? 1 : 0))
>>> +#if ACPI_CPU_MAP + ACPI_CPU_MAP_LEN >= ACPI_GPE0_BLK_ADDRESS_V1
>>> +#error "ACPI_CPU_MAP is too big"
>>> +#endif
>> Why is this in ioreq.h?  It has nothing to do with ioreq's.
>>
>> The current ACPI bits in here are to do with the qemu ACPI interface,
>> not the Xen ACPI interface.
>>
>> Also, please can we avoid hard-coding the location of the map in the
>> hypervisor ABI.  These constants make it impossible to ever extend the
>> number of HVM vcpus at a future date.
> The first three logically belong here because corresponding blocks'
> addresses are defined right above.

They have no relationship to the ones above, other than their name.

>
> ACPI_CPU_MAP has to be seen by both the toolstack (libacpi) and the
> hypervisor (and qemu as well, although it is defined as
> PIIX4_CPU_HOTPLUG_IO_BASE).
>
> Where do you think it should go then?

This highlights a recurring problem in Xen which desperately needs
fixing, but which still isn't high enough on my TODO list to tackle yet.

There is no central registration of claims on domain resources.  This is
the root cause of memory accounting problems for HVM guests.


The way I planned to fix this was to have Xen maintain a registry of
each domain's physical resources, which would end up looking very much
like /proc/io{mem,ports}.  There would be a hypercall interface for
querying this information, and for the toolstack and device model to
modify it.
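As a very rough sketch of the shape such an interface might take (every
name below -- xen_resource_entry, xen_resource_map_query() and so on --
is a hypothetical placeholder, not an existing Xen API):

/*
 * Hypothetical sketch only -- none of these names exist in Xen today.
 * Per domain, Xen would keep a list of claimed ranges, much like the
 * lines of /proc/ioports or /proc/iomem.
 */
#include <stdint.h>

#define XEN_RESOURCE_IOPORT  0   /* x86 I/O port space */
#define XEN_RESOURCE_MMIO    1   /* guest physical address space */

struct xen_resource_entry {
    uint64_t start;     /* first port/address of the claimed range */
    uint64_t end;       /* last port/address of the claimed range */
    uint32_t type;      /* XEN_RESOURCE_IOPORT or XEN_RESOURCE_MMIO */
    uint32_t owner;     /* claimant: Xen itself, toolstack or device model */
    char     name[32];  /* human-readable tag, e.g. "ACPI CPU map" */
};

/*
 * The real thing would be a domctl (or stable hypercall) taking a guest
 * handle to an array of entries; plain prototypes are used here just to
 * show the two operations needed.
 */
int xen_resource_map_query(uint32_t domid,
                           struct xen_resource_entry *entries,
                           uint32_t *nr_entries); /* in: space, out: used */
int xen_resource_map_claim(uint32_t domid,
                           const struct xen_resource_entry *entry);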

The key point is that Xen needs to be the authoritative source of
information pertaining to guest physical layout, rather than the current
fiasco we have of the toolstack, qemu and hvmloader all thinking they
know and control what's going on.  This would fix several current
unknowns which have caused real problems, such as whether a domain was
told about certain RMRRs when it booted, or how many PXE ROMs qemu tried
to fit into the physmap.

This information (eventually, once I get Xen-level migration v2 sorted)
needs to travel at the head of the migration stream.

The way I would envisage this working is that on domain create, Xen
makes a blank map indicating that all space is free.  When
X86_EMUL_ACPI_* is selected, Xen takes out an allocation as it wires up
the ioport handler.

Later, when constructing the ACPI tables, the toolstack reads the
current ioport allocations and can see exactly which ports are reserved
for what.
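Continuing the sketch above, the libacpi/toolstack side might then look
roughly like this, rather than hard-coding ACPI_CPU_MAP at 0xaf00 in a
public header (find_acpi_cpu_map() and the "ACPI CPU map" tag are,
again, hypothetical):

#include <string.h>

/* Ask Xen where the online-vCPU bitmap actually ended up, instead of
 * assuming a fixed port baked into the ABI. */
static int find_acpi_cpu_map(uint32_t domid, uint64_t *base, uint64_t *len)
{
    struct xen_resource_entry entries[64];
    uint32_t nr = 64;
    uint32_t i;

    if ( xen_resource_map_query(domid, entries, &nr) )
        return -1;

    for ( i = 0; i < nr; i++ )
    {
        if ( entries[i].type == XEN_RESOURCE_IOPORT &&
             !strcmp(entries[i].name, "ACPI CPU map") )
        {
            *base = entries[i].start;
            *len  = entries[i].end - entries[i].start + 1;
            return 0;  /* emit the AML for CPU hotplug from these values */
        }
    }

    return -1; /* Xen didn't register an online-vCPU bitmap for this domain */
}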


Now, I understand that lumbering you with this work as a prerequisite
would be unfair.

Therefore, I will accept the alternative of hiding all these definitions
behind __XEN_TOOLS__, so that the long-term fix can be introduced in a
compatible manner in the future.
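Concretely, that interim guard would look something like this (wherever
the constants end up living; the values are the ones from your patch):

/* Interim sketch: make the constants visible only to Xen itself and to
 * the toolstack, so they never become part of the stable guest ABI and
 * can later be replaced by values read out of a resource registry. */
#if defined(__XEN__) || defined(__XEN_TOOLS__)

#define ACPI_PM1A_EVT_BLK_LEN        0x04
#define ACPI_PM1A_CNT_BLK_LEN        0x02
#define ACPI_PM_TMR_BLK_LEN          0x04

/* Location of online VCPU bitmap. */
#define ACPI_CPU_MAP                 0xaf00
#define ACPI_CPU_MAP_LEN             ((HVM_MAX_VCPUS / 8) + \
                                      ((HVM_MAX_VCPUS & 7) ? 1 : 0))

#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */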

That said, I am still certain that they shouldn't live in ioreq.h, as
they have nothing to do with Qemu.

~Andrew
