
Re: [Xen-devel] [PATCH v9 01/14] arch/arm: add consistency check to REMOVE p2m changes



On 07/03/2014 03:28 PM, Ian Campbell wrote:
> On Thu, 2014-07-03 at 12:03 +0100, Julien Grall wrote:
>> On 07/02/2014 07:42 PM, Arianna Avanzini wrote:
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 9960e17..7cb4a27 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -439,12 +441,37 @@ static int apply_p2m_changes(struct domain *d,
>>>                      pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT, mattr, t);
>>>                      p2m_write_pte(&third[third_table_offset(addr)],
>>>                                    pte, flush_pt);
>>> -                    maddr += PAGE_SIZE;
>>>                  }
>>>                  break;
>>> -            case RELINQUISH:
>>>              case REMOVE:
>>>                  {
>>> +                    unsigned long mfn = pte.p2m.base;
>>> +
>>> +                    /*
>>> +                     * Ensure that the guest address addr currently being
>>> +                     * handled (that is in the range given as argument to
>>> +                     * this function) is actually mapped to the corresponding
>>> +                     * machine address in the specified range. maddr here is
>>> +                     * the machine address given to the function, while mfn
>>> +                     * is the machine frame number actually mapped to the
>>> +                     * guest address: check if the two correspond.
>>> +                     */
>>> +                    if ( !pte.p2m.valid || maddr != pfn_to_paddr(mfn) )
>>> +                    {
>>> +                        gdprintk(XENLOG_WARNING,
>>> +                                 "p2m_remove: mapping at %"PRIpaddr" is of maddr %"PRIpaddr" not %"PRIpaddr" as expected",
>>> +                                 addr, pfn_to_paddr(mfn), maddr);
>>
>> gdprintk is using the current domain to print the domid. We are not
>> necessarily removing the mapping from the current domain.
>>
>>> +                        /*
>>> +                         * Continue to process the range even if an error is
>>> +                         * encountered, to prevent I/O-memory regions from
>>> +                         * being partially accessible to a domain.
>>> +                         */
>>> +                       continue;
>>
>>
>> I've just reviewed patch #5 (which does a similar check for x86) and
>> I'm surprised that the two differ. Here you leave the mapping in place
>> if something is wrong rather than clearing it.
> 
> I agree that we should be consistent. The x86 behaviour (that is: warn
> and nuke the mapping whatever it is) seems safer to me.
> 
>> That made me think there is a possible security issue here. If you are
>> trying to clear a page that is actually a foreign mapping, we have
>> already dropped the reference above. Xen may reallocate this page for
>> anything else, so the domain will keep a mapping to a page which
>> potentially belongs to another domain or to Xen. Therefore we would
>> leak information.
> 
> Even with the unmap on failure, something must be keeping the page live
> from the put_page until the mapping actually gets cleared. What is that?
> 
> ISTR debating this at the time during 4.4, but I don't recall the
> answer, or whether we've subsequently broken it.
> 
> Something else must be holding a reference for the duration of this
> call. Right?

We don't take any reference on mappings for now. We rely on the caller of
guest_physmap_remove_page to take a reference before removing the page
from the p2m.
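
To make the expectation concrete, the caller-side pattern is roughly the
sketch below. This is only an illustration (assuming the usual
get_page/put_page helpers and the current guest_physmap_remove_page
prototype; gpfn/mfn stand for whatever the caller already holds), not
code lifted from a particular call site:

    struct page_info *page = mfn_to_page(mfn);

    /* The caller keeps the page alive across the p2m update... */
    if ( !get_page(page, d) )
        return -EINVAL;

    /* ...removes the stage-2 mapping... */
    guest_physmap_remove_page(d, gpfn, mfn, 0);

    /* ...and only drops its reference once the mapping is gone. */
    put_page(page);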

IIRC we talked about implementing reference counting for mappings for Xen
4.5. It looks like we won't have time for that. Anyway, the code was safe
before, and if we do the same as x86 (i.e. don't put the continue in the
code) we are still safe.
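
To make that concrete, the x86-like variant of this hunk would look
roughly like the sketch below (untested, purely illustrative; it also
uses printk with the domain id rather than gdprintk, per the comment
above, since d is not necessarily the current domain):

    if ( !pte.p2m.valid || maddr != pfn_to_paddr(mfn) )
        printk(XENLOG_G_WARNING
               "p2m_remove: d%d: mapping at %"PRIpaddr" is of maddr "
               "%"PRIpaddr" not %"PRIpaddr" as expected\n",
               d->domain_id, addr, pfn_to_paddr(mfn), maddr);

    /* No 'continue': nuke the mapping whatever it is, as x86 does. */
    memset(&pte, 0x00, sizeof(pte));
    p2m_write_pte(&third[third_table_offset(addr)], pte, flush_pt);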

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

