Re: [Xen-devel] [PATCH] libxc: use correct macro when unmapping memory after save operation
On Fri, 2011-05-20 at 23:04 +0100, Jim Fehlig wrote:
> With some help from Olaf, I've finally got to the bottom of an issue I
> came across while trying to implement save/restore in the libvirt
> libxenlight driver. After issuing the save operation, the saved domain
> was not being cleaned up properly and was left in this state from xl's
> perspective:
>
> xen33:# xl list
> Name            ID   Mem VCPUs      State   Time(s)
> Domain-0         0  6821     8     r-----     122.5
> (null)           2     2     2     --pssd      10.8
>
> Checking the libvirtd /proc/$pid/maps I found this:
>
> 7f3798984000-7f3798b86000 r--s 00002000 00:03 4026532097  /proc/xen/privcmd
>
> So not all pages belonging to the domain were unmapped from
> libvirtd. In tools/libxc/xc_domain_save.c we found that P2M_FL_ENTRIES
> were being mapped but only P2M_FLL_ENTRIES were being unmapped. The
> attached patch changes the unmapping to use the same P2M_FL_ENTRIES
> macro. I'm not too familiar with this code though, so posting here for
> review.
>
> I suspect this was not noticed before since most (all?) processes doing
> save terminate after the save and are not long-running like libvirtd.

Good catch! Looks like I introduced this in 18558:ccf0205255e1, sorry!

I guess it is also wrong in the error path out of map_and_save_p2m_table,
and so we also need:

diff -r 35ae855173fa tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c      Mon May 23 10:06:23 2011 +0100
+++ b/tools/libxc/xc_domain_save.c      Mon May 23 10:15:43 2011 +0100
@@ -861,7 +861,7 @@ static xen_pfn_t *map_and_save_p2m_table
  out:
 
     if ( !success && p2m )
-        munmap(p2m, P2M_FLL_ENTRIES * PAGE_SIZE);
+        munmap(p2m, P2M_FL_ENTRIES * PAGE_SIZE);
 
     if ( live_p2m_frame_list_list )
         munmap(live_p2m_frame_list_list, PAGE_SIZE);

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel