Re: [Xen-devel] [PATCH] xen_disk: fix unmapping of persistent grants
On 12/11/14 at 18:41, Stefano Stabellini wrote:
> On Wed, 12 Nov 2014, Roger Pau Monne wrote:
>> This patch fixes two issues with persistent grants and the disk PV
>> backend (Qdisk):
>>
>>  - Don't use batch mappings when using persistent grants, since doing so
>>    prevents unmapping single grants (the whole area has to be unmapped
>>    at once).
>
> The real issue is that destroy_grant cannot work with batch_maps.
> One could reimplement destroy_grant to build a single array with all the
> grants to unmap and make a single xc_gnttab_munmap call.
>
> Do you think that would be feasible?

Making destroy_grant work with batch maps using the current tree structure
would be quite complicated, because destroy_grant iterates over every entry
in the tree and doesn't know which grants belong to which regions.

IMHO a simpler solution would be to introduce another tree (or list) that
keeps track of grant-mapped regions, and on teardown use the data in that
list to unmap the regions. This way the current tree would still be used to
perform the grant_ref -> vaddr translation, but on teardown the newly
introduced list would be used instead.

In general I was reluctant to do this because not using batch maps with
persistent grants should not introduce a noticeable performance regression,
given that grants are only mapped once for the whole life-cycle of the
virtual disk. Also, if we plan to implement indirect descriptors for Qdisk
we really need to be able to unmap single grants in order to purge the
list, since in that case it's not possible to keep all possible grants
persistently mapped.

Since this alternative solution is easy to implement I will send a new
patch using this approach, and then we can decide what to do.

Roger.
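For illustration only (this is not the actual patch), here is a minimal
sketch of how such a region list could sit alongside the existing per-grant
tree. The persistent_region struct and the helper names are invented for
this example; only the GLib calls and xc_gnttab_munmap() are real APIs, and
the caller is assumed to hold the xc_gnttab handle already used by xen_disk.

#include <glib.h>
#include <xenctrl.h>

/* Hypothetical region descriptor: one entry per batch-mapped area.
 * Name and layout are illustrative, not taken from any patch. */
struct persistent_region {
    void *addr;          /* start address returned by the batch map */
    uint32_t num;        /* number of grants covered by this area   */
};

/* Record a batch-mapped area in a caller-owned list, so that teardown
 * knows which contiguous areas exist, independently of the per-grant
 * tree used for grant_ref -> vaddr translation. */
static GSList *track_region(GSList *regions, void *addr, uint32_t num)
{
    struct persistent_region *r = g_new0(struct persistent_region, 1);

    r->addr = addr;
    r->num  = num;
    return g_slist_prepend(regions, r);
}

/* On teardown, unmap every recorded area with a single
 * xc_gnttab_munmap() call per area and free the list. The per-grant
 * tree can then simply be destroyed without unmapping anything. */
static void unmap_regions(xc_gnttab *gnt, GSList *regions)
{
    GSList *l;

    for (l = regions; l != NULL; l = l->next) {
        struct persistent_region *r = l->data;

        if (xc_gnttab_munmap(gnt, r->addr, r->num) != 0) {
            /* real code would report the error (errno) here */
        }
        g_free(r);
    }
    g_slist_free(regions);
}

With something along these lines, the tree keeps serving grant_ref -> vaddr
lookups during I/O, while teardown only walks the region list, issuing one
unmap call per batch-mapped area.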