Re: [Xen-devel] [PATCH] x86: fix map_domain_page() last resort fallback
On 13/06/2013 08:49, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

>>>> On 12.06.13 at 19:27, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
>> On 12/06/2013 16:59, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>>
>>> Guests with a vCPU count not divisible by 4 have unused bits in the last
>>> word of their inuse bitmap, and the garbage collection code would
>>> therefore be misled into believing that some entries were actually
>>> recoverable for use.
>>>
>>> Also use an earlier established local variable in mapcache_vcpu_init()
>>> instead of re-calculating the value (noticed while investigating the
>>> generally better option of setting those overhanging bits once during
>>> setup - this didn't work out in a simple enough fashion because the
>>> mapping getting established there isn't in the current address space,
>>> and hence the bitmap isn't directly accessible there).
>>>
>>> Reported-by: Konrad Wilk <konrad.wilk@xxxxxxxxxx>
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>
>> Whilst I can't argue against this as the obvious bugfix to the existing
>> code, I personally object to clawing back hash-table entries at all. The
>> size of the per-vCPU hash table is small, and it should be perfectly
>> possible to allow enough extra entries in the mapcache to always be able
>> to allocate an entry even when all of a vCPU's maphash buckets are in use.
>>
>> Perhaps this is the right fix for 4.3 at this point, but in that case I am
>> quite inclined to simplify this down after 4.3, sidestepping the whole
>> issue.
>
> I won't object to undoing this, and moving the MAPHASH_ENTRIES
> definition into config.h, but I also won't put my name under it.

I think your fix is best for 4.3. Let's get it checked in.

Acked-by: Keir Fraser <keir@xxxxxxx>

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel