
Re: [Xen-devel] [PATCH v2 08/15] xen/arm: vgic-v3: Emulate correctly the re-distributor



On 02/02/15 16:47, Ian Campbell wrote:
> On Mon, 2015-02-02 at 16:33 +0000, Julien Grall wrote:
>> Hi Ian,
>>
>> On 02/02/15 15:59, Ian Campbell wrote:
>>> On Thu, 2015-01-29 at 18:25 +0000, Julien Grall wrote:
>>>> There is a one-to-one mapping between each re-distributors and processors.
>>>> Each re-distributors can be accessed by any processor at any time. For
>>>> instance during the initialization of the GIC, the drivers will browse
>>>> the re-distributor to find the one associated to the current processor
>>>> (via GICR_TYPER). So each re-distributor has its own MMIO region.
>>>>
>>>> The current implementation of the vGICv3 emulation assumes that the
>>>> re-distributor region is banked. Therefore, the processor won't be able
>>>> to access other re-distributor. While this is working fine for Linux, a
>>>> processor will only access GICR_TYPER to find the associated 
>>>> re-distributor,
>>>> we should have a correct implementation for the other operating system.
>>>
>>> You could state something stronger here, which is that it is wrong and
>>> should be fixed as a matter of principal. The fact that we happen to get
>>> away with it for some versions of Linux is an aside (although worth
>>> mentioning)
>>
>> I will rework the paragraph.
>>
>>>>
>>>> All emulated registers of the re-distributors take a vCPU in parameter
>>>> and necessary lock. Therefore concurrent access is already properly 
>>>> handled.
>>>>
>>>> The missing bit is retrieving the right vCPU following the region accessed.
>>>> Retrieving the right vCPU could be slow, so it has been divided in 2 paths:
>>>>     - fast path: The current vCPU is accessing its own re-distributor
>>>>     - slow path: The current vCPU is accessing an other re-distributor
>>>
>>> "another".
>>>
>>>>
>>>> As the processor needs to initialize itself, the former case is very
>>>> common. To handle the access quickly, the base address of the
>>>> re-distributor is computed and stored per-vCPU during the vCPU 
>>>> initialization.
>>>>
>>>> The latter is less common and more complicate to handle. The 
>>>> re-distributors
>>>> can be spread accross multiple regions in the memory.
>>>
>>> "across"
>>>
>>>> +    /*
>>>> +     * Find the region where the re-distributor lives. For this purpose,
>>>> +     * we look one region ahead as only MMIO range for redistributors
>>>> +     * traps here.
>>>> +     * Note: We assume that the region are ordered.
>>>
>>> You could also check base+size vs gpa to avoid this limitation.
>>
>> IMHO, this limitation is harmless. This would only happen for DOM0
>> if the device tree doesn't sort the regions.
> 
> If it can happen then it isn't harmless, and it's easy to avoid so why
> not do so.

The code looks one region ahead to avoid an extra bound check in the
common case where there is only one big re-distributor region.
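
For illustration, here is a minimal, self-contained sketch of that
lookup (plus the fast path the patch description mentions). This is
not the actual Xen code: the names (rdist_region, rdist_gpa_to_vcpu,
...) are made up for the example, and the 0x20000 stride assumes the
two 64K frames of a GICv3 re-distributor without VLPI frames.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Two 64K frames (RD_base + SGI_base) per re-distributor, no VLPIs. */
#define GICV3_GICR_SIZE 0x20000ULL

struct rdist_region {
    uint64_t base;       /* guest physical base of the region */
    uint64_t size;       /* size of the region in bytes */
    unsigned first_cpu;  /* vCPU id owning the first frame */
};

/* Fast path: is the vCPU touching its own re-distributor? */
static bool rdist_is_own(uint64_t vcpu_rdist_base, uint64_t gpa)
{
    return gpa >= vcpu_rdist_base &&
           gpa < vcpu_rdist_base + GICV3_GICR_SIZE;
}

/*
 * Slow path: only re-distributor MMIO ranges trap here, so the gpa is
 * known to fall inside one of the regions (nr_regions >= 1). Because
 * the regions are sorted by base address, it is enough to find the
 * first region whose successor starts above the gpa; the last region
 * needs no comparison at all, which is what makes the single big
 * region case free.
 */
static unsigned rdist_gpa_to_vcpu(const struct rdist_region *regions,
                                  size_t nr_regions, uint64_t gpa)
{
    size_t i;

    for ( i = 0; i < nr_regions - 1; i++ )
        if ( gpa < regions[i + 1].base )
            break;

    return regions[i].first_cpu +
           (unsigned)((gpa - regions[i].base) / GICV3_GICR_SIZE);
}

With a single region the loop body never runs, so the look-ahead costs
nothing in the common case.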

>> AFAICT, we have a similar limitation for the memory region. Although I
>> could sort them when we build DOM0.
> 
> I'm not sure we do these days, but in any case two wrongs don't make a
> right.

See consider_modules(): we already implicitly assume the memory
regions are ordered there.

Anyway, if we do that, we should also check that the regions don't
overlap, that each region's size is valid (i.e. 64K aligned and
correctly sized), and so on.

The main drawback here would be DOM0 accessing the wrong vCPU or
receiving a data abort. That's not too bad compared to slowing down
the lookup.

If you really want to support unordered regions, I could sort and
check them at domain initialization.
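
If we do go down that route, here is a rough sketch of what sorting
and checking could look like at domain initialization. Again purely
illustrative: the names are made up, qsort() stands in for whatever
sort helper we would actually use, and the 2 x 64K frame size assumes
no VLPI frames.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define SZ_64K 0x10000ULL

struct rdist_region {
    uint64_t base;
    uint64_t size;
};

static int cmp_rdist_region(const void *a, const void *b)
{
    const struct rdist_region *ra = a, *rb = b;

    return (ra->base > rb->base) - (ra->base < rb->base);
}

/*
 * Sort the regions once and sanity-check them, so the MMIO handler
 * can keep relying on the ordering assumption.
 */
static bool rdist_regions_setup(struct rdist_region *regions,
                                size_t nr_regions)
{
    size_t i;

    qsort(regions, nr_regions, sizeof(*regions), cmp_rdist_region);

    for ( i = 0; i < nr_regions; i++ )
    {
        /* 64K aligned and a whole number of re-distributor frames. */
        if ( regions[i].base & (SZ_64K - 1) )
            return false;
        if ( !regions[i].size || (regions[i].size % (2 * SZ_64K)) )
            return false;
        /* Must not overlap the next (higher) region. */
        if ( i + 1 < nr_regions &&
             regions[i].base + regions[i].size > regions[i + 1].base )
            return false;
    }

    return true;
}

That way the per-access path keeps the cheap look-ahead and the
ordering assumption only has to be enforced once per domain.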

>>
>>>> +
>>>> +    /*
>>>> +     * Note: We are assuming that d->vcpu[vcpu_id] is never NULL. If
>>>> +     * it's the case, the guest will receive a data abort and won't be
>>>> +     * able to boot.
>>>
>>> Is cpu hotplug a factor here? Do we support guests booting with offline
>>> cpus yet?
>>
>> The "problem" is not because of CPU hotplugging. XEN_DOMCTL_max_vcpus
>> doesn't allow changing the number of vCPUs. AFAICT, this won't be
>> supported (see the comments within the code).
> 
>> I felt it was good to mention it for documentation purpose.
> 
> The important thing is that the vcpu array is fully populated by that
> hypercall, rather than e.g. deferred until vcpuop_initialise, that's
> what matters here really.
> 
> I think the comment is redundant then, or at least misleading in that
> implies there is some possibility that d->vcpu[valid_vcpu] is NULL when
> there isn't.
> 
>>>> +    /*
>>>> +     * Safety check mostly for DOM0. It's possible to have more vCPU
>>>> +     * than the number of physical CPU. Maybe we should deny this case?
>>>
>>> In general it's allowed, if a bit silly. Mainly for e.g. people working
>>> on PV spinlock code who want to force contention... Unlikely for dom0 I
>>> suppose.
>>
>> We don't control the DOM0 memory layout, so in practice we can't have
>> more vCPUs than allowed by the hardware. This is because every
>> re-distributor has its own MMIO region.
> 
> I'd not be all that surprised to see systems with more rdist region
> space than they have physical processors, e.g. SKUs with different
> numbers of cores but otherwise the same platform.

So the current check would be enough? Even so, having more vCPUs than
physical CPUs for DOM0 sounds silly.
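
For reference, the kind of safety check being discussed boils down to
a bounds test on the computed vCPU id before dereferencing d->vcpu,
something like this (illustrative only, reusing the hypothetical
lookup sketched earlier):

static bool rdist_vcpu_is_valid(unsigned vcpu_id, unsigned max_vcpus)
{
    /*
     * A region may cover more re-distributor frames than the domain
     * has vCPUs (e.g. DOM0 with fewer vCPUs than physical CPUs, or
     * hardware with spare re-distributor space). Rejecting the access
     * here means the guest gets a data abort instead of us touching a
     * non-existent vCPU.
     */
    return vcpu_id < max_vcpus;
}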

Regards,

-- 
Julien Grall
