
Re: [Xen-devel] [for-4.7 2/2] xen/arm: p2m: Release the p2m lock before undoing the mappings



On Tue, 17 May 2016, Julien Grall wrote:
> Hi Stefano and Wei,
> 
> On 17/05/16 12:24, Stefano Stabellini wrote:
> > I think you are right. Especially with backports in mind, it would be
> > better to introduce an __apply_p2m_changes function which assumes that
> > the p2m lock has already been taken by the caller. Then you can base the
> > implementation of apply_p2m_changes on it.
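To make the suggestion concrete, the split I have in mind is roughly the
following (a sketch only, with an abbreviated parameter list; the real
apply_p2m_changes in xen/arch/arm/p2m.c takes more arguments):

    static int __apply_p2m_changes(struct domain *d, enum p2m_operation op,
                                   paddr_t start, paddr_t end)
    {
        /* The caller is expected to hold the p2m lock. */
        ASSERT(spin_is_locked(&d->arch.p2m.lock));

        /* ... walk and update the page tables for [start, end) ... */

        return 0;
    }

    static int apply_p2m_changes(struct domain *d, enum p2m_operation op,
                                 paddr_t start, paddr_t end)
    {
        struct p2m_domain *p2m = &d->arch.p2m;
        int rc;

        spin_lock(&p2m->lock);
        rc = __apply_p2m_changes(d, op, start, end);
        spin_unlock(&p2m->lock);

        return rc;
    }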
> 
> > On Tue, 17 May 2016, Wei Chen wrote:
> > > Hi Julien,
> > > 
> > > I have some concern about this patch, because we release the spinlock
> > > before removing the mapped memory. If somebody acquires the spinlock
> > > before we remove the mapped memory, this mapped memory region can be
> > > accessed by the guest.
> > > 
> > > apply_p2m_changes is no longer atomic. Is it a security risk?
> 
> Accesses to the page tables have never been atomic: as soon as an entry is
> written in the page tables, the guest vCPUs or a prefetcher could read it.
> 
> The spinlock is only there to protect the page tables against concurrent
> modifications. Releasing the lock is not an issue, as Xen does not promise
> any ordering for p2m changes.

I understand that. However, I am wondering whether a guest could
deliberately issue commands that cause concurrent p2m change requests,
inserting another operation between the first phase and the second phase
of apply_p2m_changes and causing problems for the hypervisor. Or maybe
not even deliberately, but causing problems for itself nonetheless.

Honestly, it is true that it doesn't look like Xen could run into
trouble. But this is still a change in behaviour compared to the current
code (which I know doesn't actually work), so I wanted to flag it.
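To make the window concrete, after this patch the flow is roughly the
following (a sketch only; insert_entries/undo_entries are made-up names,
and the real undo path goes through apply_p2m_changes itself):

    spin_lock(&p2m->lock);
    rc = insert_entries(p2m, start, end);  /* first phase: create mappings */
    spin_unlock(&p2m->lock);

    if ( rc < 0 )
        /*
         * Second phase: undo the partial mappings. The lock is re-taken
         * internally, so another p2m operation can be interleaved between
         * the unlock above and the undo below.
         */
        undo_entries(p2m, start, end);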
