Re: [Xen-devel] [PATCH 1/3] xen/tools: Remove the XENMEM_get_outstanding_pages and provide the data via xc_phys_info
At 08:44 +0100 on 09 May (1368089087), Ian Campbell wrote:
> On Wed, 2013-05-08 at 19:35 +0100, Konrad Rzeszutek Wilk wrote:
> > During review of the patches it was noticed that there is a race:
> > the 'free_memory' value is assembled from information returned by two
> > hypercalls, XEN_SYSCTL_physinfo and XENMEM_get_outstanding_pages.
> >
> > The free memory the host has available for guests is the difference
> > between 'free_pages' (from XEN_SYSCTL_physinfo) and 'outstanding_pages'
> > (from XENMEM_get_outstanding_pages). As these are two separate
> > hypercalls, the values can change between the two calls.
> >
> > This patch resolves the race by eliminating the
> > XENMEM_get_outstanding_pages hypercall and providing both free_pages
> > and outstanding_pages via the xc_phys_info structure.
> >
> > It also removes the XSM hooks and adds locking as needed.
> >
> > CC: dgdera@xxxxxxxxxxxxx
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
>
> For the tools side:
> Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
>
> Needs a hypervisor ack though, since contrary to the subject line this
> isn't just a tools change. Adding Keir, Tim & Jan (not sure which of
> them is the right one here).
> > -long get_outstanding_claims(void)
> > +int get_outstanding_claims(uint64_t *free_pages, uint64_t *outstanding_pages)
> > {
> > - return outstanding_claims;
> > + spin_lock(&heap_lock);
> > + *outstanding_pages = outstanding_claims;
> > + *free_pages = avail_domheap_pages();
> > + spin_unlock(&heap_lock);
> > + return 0;
> > }
This function should return void -- its only caller ignores the
return value anyway.
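Something like this (just a sketch, not compiled; the prototype in the
header would need to change to match):

    void get_outstanding_claims(uint64_t *free_pages, uint64_t *outstanding_pages)
    {
        /* Snapshot both values under the heap lock so they are consistent. */
        spin_lock(&heap_lock);
        *outstanding_pages = outstanding_claims;
        *free_pages = avail_domheap_pages();
        spin_unlock(&heap_lock);
    }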
Apart from that,
Reviewed-by: Tim Deegan <tim@xxxxxxx>
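
As an aside, the tools-side consumer then boils down to a single call.
Rough sketch only (the xc_physinfo_t field names below follow the commit
message; everything else is illustrative and untested):

    #include <stdio.h>
    #include <string.h>
    #include <inttypes.h>
    #include <xenctrl.h>

    int main(void)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        xc_physinfo_t info;

        if ( !xch )
            return 1;

        memset(&info, 0, sizeof(info));
        if ( xc_physinfo(xch, &info) == 0 )
        {
            /* Both values come from one hypercall, so there is no window
             * in which they can drift apart. */
            uint64_t free_for_guests = info.free_pages - info.outstanding_pages;
            printf("free for guests: %" PRIu64 " pages\n", free_for_guests);
        }

        xc_interface_close(xch);
        return 0;
    }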
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel