Re: [Xen-devel] [PATCH v2 07/10] vm_event: Add vm_event_ng interface
On Wed, 2019-07-17 at 16:32 +0000, Jan Beulich wrote:
> On 17.07.2019 16:41, Petre Ovidiu PIRCALABU wrote:
> > On Wed, 2019-07-17 at 10:06 +0000, Jan Beulich wrote:
> > > On 16.07.2019 19:06, Petre Pircalabu wrote:
> > > > +static void vm_event_channels_free_buffer(struct vm_event_channels_domain *impl)
> > > >  {
> > > > -    vm_event_ring_resume(to_ring(v->domain->vm_event_monitor));
> > > > +    int i;
> > > > +
> > > > +    vunmap(impl->slots);
> > > > +    impl->slots = NULL;
> > > > +
> > > > +    for ( i = 0; i < impl->nr_frames; i++ )
> > > > +        free_domheap_page(mfn_to_page(impl->mfn[i]));
> > > >  }
> > >
> > > What guarantees that there are no mappings left of the pages you free
> > > here? Sharing pages with guests is (so far) an (almost) irreversible
> > > action, i.e. they may generally only be freed upon domain destruction.
> > > See gnttab_unpopulate_status_frames() for a case where we actually
> > > make an attempt at freeing such pages (but where we fail the request
> > > in case references are left in place).
> >
> > I've tested manually 2 cases and they both work (no crashes):
> > 1: introspected domain starts -> monitor starts -> monitor stops -> domain stops
> > 2: introspected domain starts -> monitor starts -> domain stops -> monitor stops.
>
> Well, testing is important, but doing tests like you describe won't
> catch any misbehaving or malicious monitor domain.
>
> > However, I will take a closer look at gnttab_unpopulate_status_frames
> > and post a follow-up email.
>
> Thanks.

Hi Jan,

Could you help me clarify some things? Maybe I am approaching the whole
problem incorrectly.
To help explain things a little better, I will use the following
abbreviations:
ID - introspected domain (the domain for which the vm_event requests are
generated)
MD - monitor domain (the domain which handles the requests and posts the
responses)

The legacy approach (ring) is to have a dedicated gfn in the ID (the ring
page), which Xen maps using __map_domain_page_global; the MD then uses
xc_map_foreign_pages to create its own mapping and
xc_domain_decrease_reservation_exact to remove the page from the ID's
physmap.

There are a number of problems with this approach, the most impactful
being that guests with a high number of vcpus fill up the ring quite
quickly. This, together with the fact that the vm_event request size
increases as monitor applications become more complex, incurs idle time
for vcpus waiting to post a request.

To alleviate this problem I need a number of frames shared between the
hypervisor and the MD. The ID doesn't need to know about those frames
because it will never access this memory area (unlike ioreq, which
intercepts accesses to certain addresses).

Before using xenforeignmemory_map_resource I investigated several
different approaches:
- Allocate the memory in the hypervisor and use xc_map_foreign_pages
(doesn't work because you cannot "foreignmap" pages of your own domain).
- Allocate the memory in the guest and map the allocated pages in Xen.
To my knowledge there is no such API in Linux to do this, and the monitor
application, as a userspace program, is not aware of the actual gfns of
an allocated memory area.

So, at this point the most promising solution is allocating the memory in
Xen, sharing it with the ID using share_xen_page_with_guest, and mapping
it with xenforeignmemory_map_resource (with the
XENMEM_rsrc_acq_caller_owned flag set).

To my understanding, the cleanup sequence from
gnttab_unpopulate_status_frames basically boils down to:
1. guest_physmap_remove_page
2. if ( test_and_clear_bit(_PGC_allocated, &pg->count_info) )
       put_page(pg);
3. free_xenheap_page

My current implementation uses vzalloc instead of alloc_xenheap_pages,
which is why I assumed vunmap and free_domheap_pages would suffice.
(I would have called vfree directly, but the temporary linked list that
is used to hold the page references causes free_domheap_pages to crash.)
Do I also have to call guest_physmap_remove_page and put_page
(steps 1 and 2)?

Many thanks for your support,
Petre

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
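[Editor's note] The refcounting concern behind the cleanup sequence discussed in this thread can be modeled outside of Xen. The sketch below is NOT Xen code: `struct fake_page`, `model_share_with_guest`, `model_put_page`, and `model_unshare` are hypothetical stand-ins for Xen's `struct page_info`, `share_xen_page_with_guest`, `put_page`, and the `guest_physmap_remove_page` / `_PGC_allocated` steps of `gnttab_unpopulate_status_frames`. It only illustrates why the page may not be freed immediately: an outstanding reference (e.g. the MD's mapping) keeps it alive until the last `put_page`.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for Xen's struct page_info; only the fields
 * needed to model the cleanup sequence are included. */
#define PGC_allocated (1u << 0)

struct fake_page {
    unsigned int count_info; /* low bit models _PGC_allocated */
    unsigned int refcount;   /* models the general reference count */
    bool in_physmap;
    bool freed;
};

/* Models share_xen_page_with_guest(): the page becomes guest-visible
 * and gains the PGC_allocated reference. */
static void model_share_with_guest(struct fake_page *pg)
{
    pg->in_physmap = true;
    pg->count_info |= PGC_allocated;
    pg->refcount++; /* the PGC_allocated reference */
}

/* Models put_page(): the page is only truly freed when the last
 * reference is dropped (real code: free_xenheap_page at that point). */
static void model_put_page(struct fake_page *pg)
{
    if (--pg->refcount == 0)
        pg->freed = true;
}

/* Models the gnttab_unpopulate_status_frames() sequence:
 * 1. guest_physmap_remove_page
 * 2. test_and_clear_bit(_PGC_allocated) + put_page
 * Returns whether the page could actually be freed. */
static bool model_unshare(struct fake_page *pg)
{
    pg->in_physmap = false;               /* step 1 */
    if (pg->count_info & PGC_allocated) { /* step 2 */
        pg->count_info &= ~PGC_allocated;
        model_put_page(pg);
    }
    return pg->freed; /* false while e.g. the MD still maps the page */
}
```

With no extra references, unsharing frees the page at once; with a reference standing in for a monitor-domain mapping still in place, the free is deferred until that reference is also dropped — which is the failure mode Jan's question about leftover mappings points at.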