Re: [Xen-devel] [PATCH 5/6] xen-gntalloc: Userspace grant allocation driver
On 12/16/2010 07:22 AM, Daniel De Graaf wrote:
>> Hm, yeah, that could be a bit fiddly. I guess you'd need to stick them
>> into an rbtree or something.
> Another option that provides more flexibility - have a flag in the create
> operation, similar to MAP_FIXED in mmap, that allows userspace to mandate
> the offset if it wants control, but default to letting the kernel handle
> it. We already have a flags field for making the grant writable, this is
> just another bit.

I'd go for just implementing one way of doing it unless there's a clear
need for both. The choose-your-own-offset route is looking pretty complex.
If you have the kernel allocate the offsets, but guarantee that subsequent
allocations get consecutive offsets, then usermode can set up a group of
pages which can be mapped with a single mmap call, which is all I was
really aiming for.
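Concretely, something like this - a rough userspace sketch only; the
struct layout and ioctl name below are invented for illustration, not the
interface in this patch:

    #include <err.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    #define PAGE_SIZE 4096

    /* Hypothetical ioctl: allocate 'count' granted pages at
     * kernel-chosen, consecutive mmap offsets. */
    struct alloc_gref {
        uint16_t domid;        /* domain allowed to map the pages */
        uint16_t flags;        /* e.g. the existing writable flag */
        uint32_t count;        /* number of pages to grant */
        uint64_t index;        /* out: mmap offset of the first page */
        uint32_t gref_ids[4];  /* out: grant refs, one per page */
    };
    #define IOCTL_GNTALLOC_ALLOC_GREF _IOWR('G', 1, struct alloc_gref)

    void *grant_four_pages(int fd, uint16_t domid)
    {
        struct alloc_gref op = { .domid = domid, .count = 4 };
        void *addr;

        if (ioctl(fd, IOCTL_GNTALLOC_ALLOC_GREF, &op) < 0)
            err(1, "alloc");

        /* Consecutive offsets mean a single mmap() covers all four
         * pages; no per-page mapping calls are needed. */
        addr = mmap(NULL, 4 * PAGE_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, op.index);
        if (addr == MAP_FAILED)
            err(1, "mmap");
        return addr;
    }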
>>> While this isn't hard, IOCTL_GNTDEV_GET_OFFSET_FOR_VADDR only exists
>>> in order to relieve userspace of the need to track its mappings, so
>>> this seems to have been a concern before.
>> It would be nice to have them symmetric. However, it's easy to
>> implement GET_OFFSET_FOR_VADDR either way - given a vaddr, you can look
>> up the vma and return its pgoff.
>>
>> It looks like GET_OFFSET_FOR_VADDR is just used in xc_gnttab_munmap()
>> so that libxc can recover the offset and the page count from the vaddr,
>> so that it can pass them to IOCTL_GNTDEV_UNMAP_GRANT_REF.
>>
>> Also, it seems to fail unmaps which don't exactly correspond to a
>> MAP_GRANT_REF. I guess that's OK, but it looks a bit strange.
> So, implementing an IOCTL_GNTALLOC_GET_OFFSET_FOR_VADDR would be useful
> in order to allow gntalloc munmap() to be similar to gnttab's. If we
> want to allow a given offset to be mapped to multiple domains, we
> couldn't just return the offset; it would have to be a list of grant
> references, and the destroy ioctl would need to take the grant
> reference. See below.

>>> Another use case of gntalloc that may prove useful is to have more
>>> than one application able to map the same grant within the kernel.
>> So you mean have gntalloc allocate one page and then allow multiple
>> processes to map and use it? In that case it would probably be best
>> implemented as a filesystem, so you can give proper globally visible
>> names to the granted regions, and mmap them as normal files, like shm.
> That seems like a better way to expose this functionality. I didn't
> have a use case for multiple processes mapping a grant, just didn't
> want to prevent doing it in the future if it was a trivial change.
> Since it's more complex to implement a filesystem, I think someone
> needs to find a use for it before it's written. I believe the current
> code lets you map the areas in multiple processes if you pass the file
> descriptor around with fork() or over unix sockets; that seems
> sufficient to me.

That raises another quirk in the gntdev API (which I think also applies
to gntalloc): the relationship between munmap and
IOCTL_GNTDEV_UNMAP_GRANT_REF. The current behaviour is that the ioctl
fails with EBUSY if there are still mappings of the granted pages. It
would probably be better to refcount the map by the number of vmas
pointing to it, plus one, so that UNMAP_GRANT_REF drops a reference, as
does each vma as it's unmapped, with the actual ungranting happening at
ref == 0. That would allow multiple uncoordinated processes to use the
same mappings without having to work out who's doing the cleanup -
something along the lines of the sketch below.
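Structurally, I mean something like this (invented names, not the actual
gntdev structures - just to pin down the lifetime rule):

    #include <linux/kernel.h>
    #include <linux/kref.h>
    #include <linux/mm.h>

    /* Invented stand-in for gntdev's per-mapping state. It is created
     * by MAP_GRANT_REF with kref_init(), i.e. one reference owned by
     * the eventual UNMAP_GRANT_REF; the driver's mmap() handler takes
     * another for the initial vma. */
    struct grant_map {
        struct kref ref;
        /* ... grant handles, pages, etc. ... */
    };

    static void grant_map_release(struct kref *kref)
    {
        struct grant_map *map = container_of(kref, struct grant_map, ref);

        /* ref == 0: no vma and no ioctl reference remain, so this is
         * where the pages are actually ungranted and freed. */
        ungrant_and_free(map);   /* hypothetical helper */
    }

    /* .open/.close keep the count in step with the number of vmas:
     * .open runs when a vma is duplicated (fork) or split, .close on
     * every unmap. */
    static void gntmap_vma_open(struct vm_area_struct *vma)
    {
        struct grant_map *map = vma->vm_private_data;

        kref_get(&map->ref);
    }

    static void gntmap_vma_close(struct vm_area_struct *vma)
    {
        struct grant_map *map = vma->vm_private_data;

        kref_put(&map->ref, grant_map_release);
    }

    static const struct vm_operations_struct gntmap_vm_ops = {
        .open  = gntmap_vma_open,
        .close = gntmap_vma_close,
    };

    /* UNMAP_GRANT_REF then just drops its reference with kref_put()
     * instead of returning -EBUSY while vmas still exist. */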
This would also allow auto-ungranting maps: xc_gnttab_map_grant_ref()
could do the UNMAP_GRANT_REF immediately after the mmap(), so that
xc_gnttab_munmap() can simply munmap() without the need for
GET_OFFSET_FOR_VADDR.
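On the libxc side that pattern would look roughly like this (simplified
argument structs and ioctl numbers standing in for the real
MAP_GRANT_REF/UNMAP_GRANT_REF interface, and assuming the refcounted
semantics sketched above):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    #define PAGE_SIZE 4096

    /* Illustrative stand-ins for the gntdev map/unmap arguments. */
    struct map_gref   { uint32_t domid, ref; uint64_t index; /* out */ };
    struct unmap_gref { uint64_t index; uint32_t count; };
    #define GNTDEV_IOC_MAP   _IOWR('G', 2, struct map_gref)
    #define GNTDEV_IOC_UNMAP _IOW('G', 3, struct unmap_gref)

    void *gnttab_map(int fd, uint32_t domid, uint32_t gref)
    {
        struct map_gref op = { .domid = domid, .ref = gref };
        struct unmap_gref uop;
        void *addr;

        if (ioctl(fd, GNTDEV_IOC_MAP, &op) < 0)
            return NULL;

        addr = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, op.index);
        if (addr == MAP_FAILED)
            return NULL;

        /* Drop the ioctl's reference right away: the vma now holds
         * the only reference, so a later plain munmap() is enough to
         * ungrant - no GET_OFFSET_FOR_VADDR lookup required. */
        uop.index = op.index;
        uop.count = 1;
        ioctl(fd, GNTDEV_IOC_UNMAP, &uop);

        return addr;
    }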
> Anyway, you don't have to call mmap() to let another domain access the
> shared pages; they are mappable as soon as the ioctl() returns, and
> remain so until you call the removal ioctl(). So if you do call mmap(),
> you probably expect to use the mapping.

Yep.

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel