
[Xen-devel] [PATCH 3/3] xen/gntdev: remove erroneous use of copy_to_user



Since there is now a mapping of the granted pages in kernel address space in
both PV and HVM, use it for UNMAP_NOTIFY_CLEAR_BYTE instead of writing the
byte via copy_to_user, which can fault and sleep and therefore triggers
sleep-in-atomic warnings when called from this atomic context.

Signed-off-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
---
 drivers/xen/gntdev.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)
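
For reference, a minimal standalone sketch of the access pattern the patch
switches to: writing the notify byte through the kernel's direct (lowmem)
mapping rather than via copy_to_user. The helper name clear_notify_byte is
hypothetical and exists only to illustrate the hunk below; it assumes, as the
commit message states, that map->pages are mapped in lowmem:

    /*
     * Illustrative sketch only (not part of the patch): clear the notify
     * byte from atomic context. copy_to_user() may fault and sleep, so it
     * must not be used here; because the granted pages are mapped in
     * kernel lowmem, the byte can be written through the direct mapping,
     * with no kmap()/kunmap() pair needed.
     */
    static void clear_notify_byte(struct grant_map *map, int pgno)
    {
            uint8_t *kaddr = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));

            kaddr[map->notify.addr & (PAGE_SIZE - 1)] = 0;
    }
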

diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 9be3e5e..3c8803f 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -312,17 +312,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 
        if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
                int pgno = (map->notify.addr >> PAGE_SHIFT);
-               if (pgno >= offset && pgno < offset + pages && use_ptemod) {
-                       void __user *tmp = (void __user *)
-                               map->vma->vm_start + map->notify.addr;
-                       err = copy_to_user(tmp, &err, 1);
-                       if (err)
-                               return -EFAULT;
-                       map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
-               } else if (pgno >= offset && pgno < offset + pages) {
-                       uint8_t *tmp = kmap(map->pages[pgno]);
+               if (pgno >= offset && pgno < offset + pages) {
+                       /* No need for kmap, pages are in lowmem */
+                       uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
                        tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
-                       kunmap(map->pages[pgno]);
                        map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
                }
        }
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
