Re: [Xen-devel] [PATCH v2 20/25] arm/altp2m: Add altp2m paging mechanism.



Hi Julien,

I just wanted to point out that this email did not contain any comments
from your side.

On 08/06/2016 04:21 PM, Julien Grall wrote:
>
>
> On 06/08/2016 13:57, Sergej Proskurin wrote:
>> Hi Julien,
>
> Hello Sergej,
>
>
>> On 08/04/2016 06:59 PM, Julien Grall wrote:
>>> Hi Sergej,
>>>
>>> On 01/08/16 18:10, Sergej Proskurin wrote:
>>>> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
>>>> index 12be7c9..628abd7 100644
>>>> --- a/xen/arch/arm/traps.c
>>>> +++ b/xen/arch/arm/traps.c
>>>
>>> [...]
>>>
>>>> @@ -2403,35 +2405,64 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
>>>
>>> [...]
>>>
>>>>      switch ( fsc )
>>>>      {
>>>> +    case FSC_FLT_TRANS:
>>>> +    {
>>>> +        if ( altp2m_active(d) )
>>>> +        {
>>>> +            const struct npfec npfec = {
>>>> +                .insn_fetch = 1,
>>>> +                .gla_valid = 1,
>>>> +                .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>>>> +            };
>>>> +
>>>> +            /*
>>>> +             * Copy the entire page of the failing instruction into the
>>>> +             * currently active altp2m view.
>>>> +             */
>>>> +            if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
>>>> +                return;
>>>
>>> I forgot to mention that I think there is a race condition here. If
>>> multiple vCPUs (let's say A and B) use the same altp2m, they may both
>>> fault here.
>>>
>>> If vCPU A has already fixed the fault, this function will return false
>>> and continue, which will lead to injecting an instruction abort into
>>> the guest.
>>>
>>
>> I believe this is exactly what I have been experiencing over the last
>> few days. I applied Tamas' patch [0], but it did not entirely solve the
>> issue. I will provide more information about the exact behavior a bit
>> later.
>>
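
One way to address this race could be to treat a failed lazy copy as a
potential "another vCPU has already fixed this fault" case and re-check
the active altp2m view before falling through to the abort injection.
Below is a minimal sketch of the idea; gpa_mapped_in_active_altp2m() is
a hypothetical helper (not part of this series) that would look up gpa
in the currently active view under the appropriate p2m lock:

    if ( altp2m_active(d) )
    {
        const struct npfec npfec = {
            .insn_fetch = 1,
            .gla_valid = 1,
            .kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
        };

        /*
         * Copy the entire page of the failing instruction into the
         * currently active altp2m view.
         */
        if ( altp2m_lazy_copy(v, gpa, gva, npfec, &p2m) )
            return;

        /*
         * Lost the race: another vCPU using the same view may have
         * populated the entry in the meantime.  If the mapping is now
         * present, return without injecting an abort so that the guest
         * simply retries the faulting instruction.
         */
        if ( gpa_mapped_in_active_altp2m(v, gpa) )
            return;

        rc = p2m_mem_access_check(gpa, gva, npfec);

        /* Trap was triggered by mem_access, work here is done */
        if ( !rc )
            return;
    }

The same re-check would be needed in the data abort path below (using
info.gpa there).
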
>>>> +
>>>> +            rc = p2m_mem_access_check(gpa, gva, npfec);
>>>> +
>>>> +            /* Trap was triggered by mem_access, work here is done */
>>>> +            if ( !rc )
>>>> +                return;
>>>> +        }
>>>> +
>>>> +        break;
>>>> +    }
>>>
>>> [...]
>>>
>>>> @@ -2470,23 +2503,31 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
>>>>
>>>>      switch ( fsc )
>>>>      {
>>>> -    case FSC_FLT_PERM:
>>>> +    case FSC_FLT_TRANS:
>>>>      {
>>>> -        const struct npfec npfec = {
>>>> -            .read_access = !dabt.write,
>>>> -            .write_access = dabt.write,
>>>> -            .gla_valid = 1,
>>>> -            .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>>>> -        };
>>>> +        if ( altp2m_active(current->domain) )
>>>> +        {
>>>> +            const struct npfec npfec = {
>>>> +                .read_access = !dabt.write,
>>>> +                .write_access = dabt.write,
>>>> +                .gla_valid = 1,
>>>> +                .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
>>>> +            };
>>>>
>>>> -        rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>>>> +            /*
>>>> +             * Copy the entire page of the failing data access into the
>>>> +             * currently active altp2m view.
>>>> +             */
>>>> +            if ( altp2m_lazy_copy(v, info.gpa, info.gva, npfec, &p2m) )
>>>> +                return;
>>>
>>> Ditto.
>>>
>>
>> Ok.
>>
>>>> +
>>>> +            rc = p2m_mem_access_check(info.gpa, info.gva, npfec);
>>>> +
>>>> +            /* Trap was triggered by mem_access, work here is done */
>>>> +            if ( !rc )
>>>> +                return;
>>>> +        }
>>
>> Best regards,
>> ~Sergej
>>
>> [0] https://github.com/tklengyel/xen branch arm_mem_access_reinject
>>
>

Best regards,
~Sergej
