
Re: [Xen-devel] [PATCH v2 1/2] xen/balloon: set a mapping for ballooned out pages



On Mon, 2013-07-22 at 18:26 +0100, David Vrabel wrote:
> On 22/07/13 18:22, Ian Campbell wrote:
> > On Mon, 2013-07-22 at 17:51 +0100, David Vrabel wrote:
> >> On 22/07/13 17:28, Stefano Stabellini wrote:
> >>>
> >>>  #ifdef CONFIG_HIGHMEM
> >>>  #define inc_totalhigh_pages() (totalhigh_pages++)
> >>> @@ -423,7 +426,8 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
> >>>           if (xen_pv_domain() && !PageHighMem(page)) {
> >>>                   ret = HYPERVISOR_update_va_mapping(
> >>>                           (unsigned long)__va(pfn << PAGE_SHIFT),
> >>> -                         __pte_ma(0), 0);
> >>> +                         pfn_pte(page_to_pfn(get_balloon_trade_page()),
> >>> +                                 PAGE_KERNEL_RO), 0);
> >>
> >> Preemption needs to be disabled while using the trade page, see
> >> suggestion below.
> > 
> > Hopefully you mean just when setting up/manipulating it?
> 
> Yes, sorry.
> 
> get_...()
> update_va_mapping()
> put_...()

I can see why it would matter in the unmap_and_replace+fixup case (since
you need those operations to happen "atomically"), but why in this case?
We don't actually care which of the trade pages gets used for this
purpose, so even if we happen to get preempted and migrate to another
CPU it doesn't really matter. Hrm, unless the CPU goes offline, I
suppose. Ah, but we don't free the page when the CPU goes down, so
that's fine.
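
(For reference, I'm reading the suggestion as something like the sketch
below; just a sketch, names as in the patch, and I'm assuming
get_balloon_trade_page() is essentially balloon_trade_pages[] indexed by
the current CPU:)

        struct page *trade_page;
        int cpu, ret;

        cpu = get_cpu();        /* disables preemption */
        trade_page = balloon_trade_pages[cpu];
        ret = HYPERVISOR_update_va_mapping(
                (unsigned long)__va(pfn << PAGE_SHIFT),
                pfn_pte(page_to_pfn(trade_page), PAGE_KERNEL_RO), 0);
        put_cpu();              /* re-enables preemption */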

Oh, that made me notice:

+       case CPU_UP_PREPARE:
+               balloon_trade_pages[cpu] = alloc_page(GFP_KERNEL);

will leak the previously allocated page if you take the CPU down and
then bring it back up again.
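
One way to avoid that (just a sketch on my part, untested, and I've left
out whatever the notifier already does when the allocation fails) would
be to keep the page across hotplug and only allocate it if it isn't
there already:

        case CPU_UP_PREPARE:
                /* Keep any page left over from a previous up/down cycle. */
                if (!balloon_trade_pages[cpu])
                        balloon_trade_pages[cpu] = alloc_page(GFP_KERNEL);
                break;

(The alternative would be to free the page on CPU_UP_CANCELED/CPU_DEAD
instead.)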

Ian.

