
Re: [Xen-devel] [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.



On Mon, Jul 21, 2014 at 5:01 AM, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
On 21/07/14 08:31, Andres Lagar Cavilla wrote:
On Sun, Jul 20, 2014 at 5:49 PM, Dushyant Behl <myselfdushyantbehl@xxxxxxxxx> wrote:
Ian,

Sorry for this extremely late response.

On Fri, Jun 27, 2014 at 4:03 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
>> This patch is part of the work done under the gsoc project -
>> Lazy Restore Using Memory Paging.
>>
>> This patch is meant to fix a known race condition bug in mempaging
>> ring setup routines. The race conditon was between initializing
>
> "condition"
>
>> mem paging and initializing shared ring, earlier the code initialized
>> mem paging before removing the ring page from guest's physical map
>> which could enable the guest to interfere with the ring initialisation.
>> Now the code removes the page from the guest's physical map before
>> enabling mempaging so that the guest cannot clobber the ring after
>> we initialise it.
>>
>> Signed-off-by: Dushyant Behl <myselfdushyantbehl@xxxxxxxxx>
>> Reviewed-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>> Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> ---
>>  tools/libxc/xc_mem_paging_setup.c | 24 ++++++++++++++++++++----
>>  1 file changed, 20 insertions(+), 4 deletions(-)
>>
>> diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
>> index 679c606..a4c4798 100644
>> --- a/tools/libxc/xc_mem_paging_setup.c
>> +++ b/tools/libxc/xc_mem_paging_setup.c
>> @@ -73,6 +73,24 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>>          }
>>      }
>>
>> +    /*
>> +     * Remove the page from guest's physical map so that the guest will not
>> +     * be able to break security by pretending to be toolstack for a while.
>> +     */
>> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, ring_page) )
>
> ring_page here is a void * mapping of the page, isn't it? Not a
> xen_pfn_t[] array of pfns, so surely this isn't right.

Yes, you are correct; the invocation should use ring_pfn. But if I change
the call to -
if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )

then the code fails to enable mem paging.
Can anyone explain why this happens?
The call that enables paging uses the HVM param in the hypervisor to look up the page by its pfn within the guest physical address space. If you call decrease reservation first, the page is removed from the guest physical address space and the HVM param points to a hole, so the enable call will of course fail. You therefore need to call decrease reservation strictly after enabling paging.


You cannot have a period of time where the paging/access/events are enabled and the guest can interfere with the ring page, as the Xen side of ring pages are not robust to untrusted input.
It's deja vu all over again!

Note that the XSA commit, 6ae2df93c27, does exactly that: enable paging/access/sharing, and only after that decrease reservation (and after that unpause). So the window is open ... methinks?

At some point the assertion "xen side ... not robust" has to be declared false, because it's either FUD, or untenable to ship a hypervisor that can be crashed by guest input. You fixed an important bug with the vcpu run off, so I'll refrain from saying "I don't see any problems there". But I think I don't see any problems there ....



How does this series interact with 6ae2df93?
Likely not a problem, you are referring to pause count? Looks like different things.

Thanks
Andres


~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


