
Re: [Xen-devel] [PATCH RFC 18/20] libxc/acpi: Build ACPI tables for HVMlite guests



>>> On 06.06.16 at 18:59, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 06/06/2016 09:29 AM, Jan Beulich wrote:
>>>>> On 06.04.16 at 03:25, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> +#define RESERVED_MEMORY_DYNAMIC_START 0xFC001000
>>> +#define ACPI_PHYSICAL_ADDRESS         0x000EA020
>>> +
>>> +/* Initial allocation for ACPI tables */
>>> +#define NUM_ACPI_PAGES  16
>> With which other definitions do these three need to remain in sync?
> 
> NUM_ACPI_PAGES is private to this file.
> 
> ACPI_PHYSICAL_ADDRESS (RSDP pointer) needs to be between 0xe0000 and 
> 0xfffff, I picked this number because that's where most systems that I have 
> appear to have it. (And by "most" I mean the two that I checked ;-))

With there not being a BIOS, I can see this being pretty arbitrary.
Yet in that case I'm not convinced it should be put at a random
address in the middle of that range. Plus I'm not sure I see the
connection to the reservations done in the E820 map the guest gets to see.
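
For reference, the 0xe0000-0xfffff constraint exists because on legacy
systems an OS locates the RSDP by scanning that window (and the EBDA) on
16-byte boundaries for the "RSD PTR " signature. A rough guest-side
illustration of that search; low_mem here is a hypothetical mapping of the
first megabyte, not anything from the patch:

    #include <stdint.h>
    #include <string.h>

    /* Sketch of the spec-mandated RSDP search over 0xE0000-0xFFFFF. */
    static uint64_t find_rsdp(const uint8_t *low_mem)
    {
        uint64_t addr;

        for ( addr = 0xE0000; addr < 0x100000; addr += 16 )
            if ( memcmp(low_mem + addr, "RSD PTR ", 8) == 0 )
                return addr;

        return 0; /* not found */
    }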

> RESERVED_MEMORY_DYNAMIC_START is one page after DSDT's SystemMemory (aka 
> ACPI_INFO_PHYSICAL_ADDRESS). But then it looks like PVHv2 doesn't need 
> SystemMemory, so it can be anywhere (and the e820 map should presumably be 
> aware of this, which it is not right now).

So you say there's no connection to the end of hvmloader's window
for PCI MMIO assignments (an equivalent of which is going to be
needed for PVHv2)?
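
If the tables do stay at a fixed RESERVED_MEMORY_DYNAMIC_START, the
guest-visible map would presumably want an ACPI-type E820 entry covering
them. A minimal sketch of such an entry using the constants quoted above;
the struct layout follows the conventional E820 format and is not taken
from the patch:

    #include <stdint.h>

    #define E820_ACPI 3  /* ACPI reclaimable memory */

    struct e820entry {
        uint64_t addr;
        uint64_t size;
        uint32_t type;
    } __attribute__((packed));

    /* Hypothetical entry covering the dynamically built ACPI tables. */
    static const struct e820entry acpi_tables_entry = {
        .addr = 0xFC001000ULL,   /* RESERVED_MEMORY_DYNAMIC_START */
        .size = 16 * 4096,       /* NUM_ACPI_PAGES * page size */
        .type = E820_ACPI,
    };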

>>> +static int init_acpi_config(struct xc_dom_image *dom,
>>> +                            struct acpi_config *config)
>>> +{
>>> +    xc_interface *xch = dom->xch;
>>> +    uint32_t domid = dom->guest_domid;
>>> +    xc_dominfo_t info;
>>> +    int i, rc;
>>> +
>>> +    memset(config, 0, sizeof(*config));
>>> +
>>> +    config->dsdt_anycpu = config->dsdt_15cpu = dsdt_empty;
>>> +    config->dsdt_anycpu_len = config->dsdt_15cpu_len = dsdt_empty_len;
>> What good does an empty DSDT do? (Perhaps this question is a
>> result of there not being any description of this change.)
> 
> DSDT is required to be present by the spec. And ACPICA gets upset if it
> doesn't see it.

But my point (also mentioned further down in the original reply) was
that there's no need for anything if acpi=0. But note that as soon as
you report processors in the MADT, the combined set of tables holding
AML code can't be empty anymore: processors need to be
declared using Processor() (legacy) or Device(). Maybe we don't
need as much as an ordinary HVM guest, but nothing at all seems too little.
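
For reference, an "empty" DSDT is essentially just the 36-byte ACPI table
header with no AML body after it. A rough sketch of building such a
header-only table; the field layout follows the ACPI spec, the struct and
helper are illustrative only, and the checksum fix-up is omitted:

    #include <stdint.h>
    #include <string.h>

    struct acpi_header {
        char     signature[4];
        uint32_t length;
        uint8_t  revision;
        uint8_t  checksum;
        char     oem_id[6];
        char     oem_table_id[8];
        uint32_t oem_revision;
        char     creator_id[4];
        uint32_t creator_revision;
    } __attribute__((packed));

    /* Header-only DSDT: no AML body.  Once CPUs are reported in the MADT,
     * the AML would also have to carry Processor()/Device() objects for
     * them, so a header-only table no longer suffices. */
    static void make_empty_dsdt(struct acpi_header *dsdt)
    {
        memset(dsdt, 0, sizeof(*dsdt));
        memcpy(dsdt->signature, "DSDT", 4);
        dsdt->length = sizeof(*dsdt);
        dsdt->revision = 2;
        /* dsdt->checksum would still need fixing up so the bytes sum to 0 */
    }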

>>> --- a/tools/libxc/xc_dom_x86.c
>>> +++ b/tools/libxc/xc_dom_x86.c
>>> @@ -643,6 +643,13 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
>>>              DOMPRINTF("Unable to reserve memory for the start info");
>>>              goto out;
>>>          }
>>> +
>>> +        rc = xc_dom_build_acpi(dom);
>>> +        if ( rc != 0 )
>>> +        {
>>> +            DOMPRINTF("Unable to build ACPI tables");
>>> +            goto out;
>>> +        }
>> Iirc there is an "acpi=" guest config setting, yet neither here nor
>> down the call tree have I been able to find a respective check. Is
>> that option not relevant anymore? Do we really want to always
>> have those tables built?
> 
> Right, I should check. Come to think of it, we should probably also check
> the "apic=" option when building the MADT in libacpi.

Very likely, yes.
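
A minimal sketch of what gating that call on the guest's ACPI setting
could look like; the acpi_enabled field on xc_dom_image is hypothetical
and would need to be plumbed through from the guest config:

    /* Only build the tables when the guest config asked for ACPI. */
    if ( dom->acpi_enabled )
    {
        rc = xc_dom_build_acpi(dom);
        if ( rc != 0 )
        {
            DOMPRINTF("Unable to build ACPI tables");
            goto out;
        }
    }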

>>> --- a/xen/common/libacpi/build.c
>>> +++ b/xen/common/libacpi/build.c
>>> @@ -480,7 +480,7 @@ static int new_vm_gid(struct acpi_config *config)
>>>      return 1;
>>>  }
>>>  
>>> -void acpi_build_tables(struct acpi_config *config, unsigned int physical)
>>> +void acpi_build_tables(struct acpi_config *config, unsigned long physical)
>> I'm having some difficulty seeing how this change belongs here.
> 
> acpi_build_tables() is called with a virtual (i.e. 64-bit) address from libxc.

Oh - so the parameter name is then wrong?
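
Something along these lines, perhaps, where base_addr is only an
illustrative name for the address the builder actually dereferences:

    /* Sketch: the second argument is the address at which the tables are
     * accessible to the builder (a mapping in libxc's case), not
     * necessarily a guest-physical address. */
    void acpi_build_tables(struct acpi_config *config, unsigned long base_addr);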

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

