Re: [Xen-devel] [Hackathon minutes] PV block improvements
On Sat, 2013-06-22 at 09:11 +0200, Roger Pau Monné wrote:
> On 21/06/13 20:07, Matt Wilson wrote:
> > On Fri, Jun 21, 2013 at 07:10:59PM +0200, Roger Pau Monné wrote:
> >> Hello,
> >>
> >> While working on further block improvements I've found an issue with
> >> persistent grants in blkfront.
> >>
> >> Persistent grants basically allocate grants that are then never
> >> released, so both blkfront and blkback keep using the same memory
> >> pages for all the transactions.
> >>
> >> This is not a problem in blkback, because we can dynamically choose
> >> how many grants we want to map. On the other hand, blkfront cannot
> >> remove access to those grants at any point, because blkfront doesn't
> >> know whether blkback has these grants mapped persistently or not.
> >>
> >> So if, for example, we start expanding the number of segments in
> >> indirect requests to a value like 512 segments per request, blkfront
> >> will probably try to persistently map 512*32+512 = 16896 grants per
> >> device, which is far more grants than the current default of
> >> 32*256 = 8192 (if using grant tables v2). This can cause serious
> >> problems for other interfaces inside the DomU, since blkfront
> >> basically starts hoarding all possible grants, leaving other
> >> interfaces completely locked out.
> >
> > Yikes.
> >
> >> I've been thinking about different ways to solve this, but so far I
> >> haven't been able to find a nice solution:
> >>
> >> 1. Limit the number of persistent grants a blkfront instance can use:
> >> only the first X grants used will be persistently mapped by both
> >> blkfront and blkback, and if more grants are needed the previous
> >> map/unmap path will be used.
> >
> > I'm not thrilled with this option. It would likely introduce some
> > significant performance variability, wouldn't it?
>
> Probably, and it will also be hard to distribute the number of
> available grants across the different interfaces in a
> performance-sensible way, especially given the fact that once a grant
> is assigned to an interface it cannot be returned to the pool of
> grants.
>
> So if we had two interfaces with very different usage (one very busy
> and another almost idle), and equally distributed the grants amongst
> them, one would have a lot of unused grants while the other would
> suffer from starvation.

I do think we need to implement some sort of reclaim scheme, which
probably does mean a specific request (per your #4). We simply can't
have a device which once upon a time had high throughput but is now
mostly idle continue to tie up all those grants.

If you make the reuse of grants follow an MRU scheme and reclaim the
currently unused tail fairly infrequently and in large batches, then
the performance overhead should be minimal, I think.

I also don't think I would discount the idea of using ephemeral grants
to cover bursts so easily either; in fact it might fall out quite
naturally from an MRU scheme. In that scheme bursting up is pretty
cheap, since a grant map is relatively inexpensive, and recovering from
the burst shouldn't be too expensive if you batch it. If it turns out
to be not a burst but a sustained level of I/O, then the MRU scheme
would mean you wouldn't be reclaiming them.

I also think there probably needs to be some tunable per-device limit
on the maximum number of persistent grants, perhaps minimum and maximum
pool sizes tied in with an MRU scheme? If nothing else it gives the
admin the ability to prioritise devices.

Ian.
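
To make the MRU reclaim idea above concrete, here is a minimal,
self-contained C sketch of a per-device grant pool: recently used
grants sit at the head of a list, and an infrequent, batched reclaim
pass drops the cold tail once the pool exceeds a tunable cap. This is a
hypothetical userspace model, not blkfront code; all names (grant_pool,
pool_get, pool_reclaim) are illustrative, and calloc/free stand in for
the real grant map/revoke calls blkfront would make.

/*
 * Minimal userspace model of an MRU persistent-grant pool -- hypothetical
 * names throughout, not blkfront code.  Grants used recently sit at the
 * head of the list; an infrequent, batched reclaim pass drops the cold
 * tail once the pool exceeds a tunable per-device cap.
 */
#include <stdio.h>
#include <stdlib.h>

struct pgrant {
	unsigned int ref;                /* stand-in for a grant_ref_t */
	struct pgrant *prev, *next;
};

struct grant_pool {
	struct pgrant *head, *tail;      /* head = most recently used */
	unsigned int count;
	unsigned int max_persistent;     /* tunable per-device cap */
};

static void pool_unlink(struct grant_pool *p, struct pgrant *g)
{
	if (g->prev) g->prev->next = g->next; else p->head = g->next;
	if (g->next) g->next->prev = g->prev; else p->tail = g->prev;
	g->prev = g->next = NULL;
}

static void pool_push_front(struct grant_pool *p, struct pgrant *g)
{
	g->prev = NULL;
	g->next = p->head;
	if (p->head) p->head->prev = g; else p->tail = g;
	p->head = g;
}

/* Use a grant: reuse a pooled one (moving it to the MRU position) or
 * "map" a fresh one on a miss. */
static struct pgrant *pool_get(struct grant_pool *p, unsigned int ref)
{
	struct pgrant *g;

	for (g = p->head; g; g = g->next) {
		if (g->ref == ref) {
			pool_unlink(p, g);
			pool_push_front(p, g);
			return g;
		}
	}
	g = calloc(1, sizeof(*g));       /* real code: grant + map the page */
	g->ref = ref;
	pool_push_front(p, g);
	p->count++;
	return g;
}

/* Batched reclaim of the cold tail, run infrequently. */
static void pool_reclaim(struct grant_pool *p, unsigned int batch)
{
	while (p->count > p->max_persistent && batch > 0 && p->tail) {
		struct pgrant *g = p->tail;
		pool_unlink(p, g);       /* real code: revoke the grant here */
		free(g);
		p->count--;
		batch--;
	}
}

int main(void)
{
	struct grant_pool pool = { .max_persistent = 4 };
	unsigned int i;

	for (i = 0; i < 8; i++)          /* a burst touches 8 distinct grants */
		pool_get(&pool, i);
	pool_get(&pool, 7);              /* grant 7 stays hot at the MRU head */

	pool_reclaim(&pool, 16);         /* oldest, unused grants are dropped */
	printf("grants kept: %u (cap %u)\n", pool.count, pool.max_persistent);
	return 0;
}

The batching reflects the point made above: a burst just grows the head
of the list cheaply, and only a sustained drop in I/O lets the tail go
cold enough to be handed back.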
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel