Re: [Xen-devel] Block ring protocol (segment expansion, multi-page, etc).
>>> On 05.09.12 at 15:29, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
> The three major outstanding issues that exist with the current protocol
> that I know of are:
>  - We split up the I/O requests. This ends up eating a lot of CPU
>    cycles.
>  - We might have huge I/O requests. Justin mentioned 1MB single I/Os -
>    and to fit that on a ring it has to be .. well, be able to fit 256
>    segments. Jan mentioned 256kB for SCSI - since the protocol
>    extensions here could very well be carried over.

This one is at least partly solved with the higher segment count. With
Justin's scheme, up to 255 segments (i.e. slightly less than 1MB) can be
transferred at a time. With Ronghui's scheme (and provided the segment
count is wider than a byte), there shouldn't be any really limiting
upper bound anymore.

>  - concurrent usage. If we have more than 4 VBDs, blkback suffers when
>    it tries to get a page, as there is a "global" pool shared across
>    all guests instead of being something 'per guest' or 'per VBD'.

Per-vbd would be what we currently have, where for little-used vbd-s a
pointlessly large number of pages is set aside. Per-guest is what I
think it needs to be (to prevent multiple guests from starving one
another).

But then it's also not just the page pool, but also the number of
grants used/mapped - without a command line override there are 32
maptrack frames, allowing 32k grants to be mapped in a single domain
(e.g. Dom0). Scaling the larger segment and request counts with the
number of guests, and considering that other backends also need to be
able to do their jobs, this could become a noticeable limit quite
quickly (especially considering that failed grant map operations fail
the request in the backend rather than deferring it, at least when
GNTST_no_device_space gets returned).

Jan
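
[Editorial note] As a back-of-the-envelope check on the numbers above: with
4kB pages, 255 segments per request cover 255 * 4kB = 1020kB, i.e. slightly
less than 1MB. Below is a very rough sketch of what a request header with a
segment count wider than a byte might look like on the wire. The struct
name, field layout and the number of segment-page grants are made up purely
for illustration and do not correspond to Justin's or Ronghui's actual
patches:

    /* Illustrative sketch only - not the layout proposed in any posted
     * patch.                                                            */
    #include <linux/types.h>
    #include <xen/interface/io/blkif.h>    /* blkif_vdev_t, blkif_sector_t,
                                              struct blkif_request_segment */
    #include <xen/interface/grant_table.h> /* grant_ref_t                */

    struct blkif_request_wide {            /* hypothetical name          */
        uint8_t        operation;          /* BLKIF_OP_*                 */
        uint16_t       nr_segments;        /* widened from uint8_t, so
                                              counts above 255 become
                                              expressible                */
        blkif_vdev_t   handle;
        uint64_t       id;                 /* echoed in the response     */
        blkif_sector_t sector_number;      /* first sector of the request */
        /* Instead of embedding the segment array in the ring slot (where
         * it would no longer fit), pass grant references of pages that
         * hold the segment descriptors; each such page can describe
         * PAGE_SIZE / sizeof(struct blkif_request_segment) segments.    */
        grant_ref_t    seg_gref[8];
    };

With something along these lines the per-request limit is bounded by the
number of segment pages referenced, not by a single byte in the ring slot.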
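[Editorial note] Similarly, the per-guest (rather than global or per-vbd)
page pool argued for above might conceptually look like the sketch below.
All names, the cap and the wait/defer policy are assumptions for
illustration, not existing blkback code:

    #include <linux/spinlock.h>
    #include <linux/wait.h>
    #include <linux/types.h>
    #include <xen/interface/xen.h>         /* domid_t                    */

    /* Hypothetical per-guest accounting of the pages blkback may have
     * mapped at any one time; a guest that exhausts its own budget has
     * its requests deferred without eating into other guests' budgets. */
    struct blkback_guest_pool {
        domid_t           domid;
        unsigned int      max_pages;   /* per-guest cap, however chosen  */
        unsigned int      in_use;      /* pages currently mapped         */
        spinlock_t        lock;
        wait_queue_head_t waitq;       /* where over-budget requests wait */
    };

    /* Try to reserve @nr pages for a request; on failure the caller
     * would defer the request rather than failing it outright.         */
    static bool pool_try_get(struct blkback_guest_pool *pool,
                             unsigned int nr)
    {
        bool ok = false;

        spin_lock(&pool->lock);
        if (pool->in_use + nr <= pool->max_pages) {
            pool->in_use += nr;
            ok = true;
        }
        spin_unlock(&pool->lock);

        return ok;
    }

    static void pool_put(struct blkback_guest_pool *pool, unsigned int nr)
    {
        spin_lock(&pool->lock);
        pool->in_use -= nr;
        spin_unlock(&pool->lock);
        wake_up(&pool->waitq);
    }

The point of a cap like max_pages being per guest is that one busy guest's
vbds cannot drain the pool that every other guest's backend relies on.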