Re: [Xen-devel] [PATCH v3.1 15/15] xen/x86: setup PVHv2 Dom0 ACPI tables
>>> On 30.11.16 at 13:40, <roger.pau@xxxxxxxxxx> wrote:
> On Mon, Nov 14, 2016 at 09:15:37AM -0700, Jan Beulich wrote:
>> >>> On 29.10.16 at 11:00, <roger.pau@xxxxxxxxxx> wrote:
>> > Also, regions marked as E820_ACPI or E820_NVS are identity mapped into Dom0
>> > p2m, plus any top-level ACPI tables that should be accessible to Dom0 and
>> > that don't reside in RAM regions. This is needed because some memory maps
>> > don't properly account for all the memory used by ACPI, so it's common to
>> > find ACPI tables in holes.
>>
>> I question whether this behavior should be enabled by default. Not
>> having seen the code yet I also wonder whether these regions
>> shouldn't simply be added to the guest's E820 as E820_ACPI, which
>> should then result in them getting mapped without further special
>> casing.
>>
>> > +static int __init hvm_add_mem_range(struct domain *d, uint64_t s,
>> > +                                    uint64_t e, uint32_t type)
>>
>> I see s and e being uint64_t, but I don't see why type can't be plain
>> unsigned int.
>
> Well, that's the type for "type" as defined in e820.h. I'm just using
> uint32_t for consistency with that.
As said a number of times in various contexts: We should try to
get away from using fixed width types where we don't really need
them.
>> > +        {
>> > +            d->arch.e820[i].size += e - s;
>> > +            return 0;
>> > +        }
>> > +
>> > +        if ( rs >= e )
>> > +            break;
>> > +
>> > +        if ( re > s )
>> > +            return -ENOMEM;
>>
>> I don't think ENOMEM is appropriate to signal an overlap. And don't
>> you need to reverse these last two if()s?
>
> I've changed ENOMEM to EEXIST. Hm, I don't think so; if I reversed those,
> we would get an error when trying to add a non-contiguous region to fill a
> hole between two existing regions, right?
Looks like I've managed to write something other than what I meant. I was
really thinking of

if ( re > s )
{
    if ( rs >= e )
        break;
    return -ENOMEM;
}
But then again I think with things being sorted it may not matter at all.
>> > + ACPI_MEMCPY(intsrcovr, intr, sizeof(*intr));
>>
>> Structure assignment (for type safety; also elsewhere)?
>
> I wasn't sure what to do here, since there's a specific ACPI_MEMCPY
> function, but I guess this is designed to be used by the acpica code
> itself, and ACPI_MEMCPY is just an OS-agnostic wrapper around memcpy.
Indeed.
>> > +    /* Setup the IO APIC entry. */
>> > +    if ( nr_ioapics > 1 )
>> > +        printk("WARNING: found %d IO APICs, Dom0 will only have access to 1 emulated IO APIC\n",
>> > +               nr_ioapics);
>>
>> I've said elsewhere already that I think we should provide 1 vIO-APIC
>> per physical one.
>
> Agree, but the current vIO-APIC is not really up to it. I will work on
> getting it to support multiple instances.
Until then this should obtain a grep-able "fixme" annotation.
>> > +    io_apic = (struct acpi_madt_io_apic *)(madt + 1);
>> > +    io_apic->header.type = ACPI_MADT_TYPE_IO_APIC;
>> > +    io_apic->header.length = sizeof(*io_apic);
>> > +    io_apic->id = 1;
>> > +    io_apic->address = VIOAPIC_DEFAULT_BASE_ADDRESS;
>> > +
>> > +    local_apic = (struct acpi_madt_local_apic *)(io_apic + 1);
>> > +    for ( i = 0; i < dom0_max_vcpus(); i++ )
>> > +    {
>> > +        local_apic->header.type = ACPI_MADT_TYPE_LOCAL_APIC;
>> > +        local_apic->header.length = sizeof(*local_apic);
>> > +        local_apic->processor_id = i;
>> > +        local_apic->id = i * 2;
>> > +        local_apic->lapic_flags = ACPI_MADT_ENABLED;
>> > +        local_apic++;
>> > +    }
>>
>> What about x2apic? And for lapic, do you limit vCPU count anywhere?
>
> Yes, there's no x2apic information. I'm currently looking at libacpi in
> tools, and there doesn't seem to be any local x2apic structure there
> either. Am I missing something?
I don't think you are.
> Regarding vCPU count, I will limit it to 128.
With it limited there'll be no strict need for x2apic structures. Still
we should get them added eventually.
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel