
Re: [Xen-devel] [PATCH v2 07/10] vm_event: Add vm_event_ng interface



> -----Original Message-----
> From: Petre Ovidiu PIRCALABU <ppircalabu@xxxxxxxxxxxxxxx>
> Sent: 18 July 2019 14:59
> To: Jan Beulich <JBeulich@xxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Julien Grall <julien.grall@xxxxxxx>; Alexandru Stefan ISAILA 
> <aisaila@xxxxxxxxxxxxxxx>; Razvan
> Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>; Andrew Cooper 
> <Andrew.Cooper3@xxxxxxxxxx>; Roger Pau Monne
> <roger.pau@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>; Ian Jackson
> <Ian.Jackson@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>; 
> xen-devel@xxxxxxxxxxxxxxxxxxxx;
> Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>; Tamas K Lengyel 
> <tamas@xxxxxxxxxxxxx>; Tim (Xen.org)
> <tim@xxxxxxx>; Wei Liu <wl@xxxxxxx>
> Subject: Re: [PATCH v2 07/10] vm_event: Add vm_event_ng interface
> 
> On Wed, 2019-07-17 at 16:32 +0000, Jan Beulich wrote:
> > On 17.07.2019 16:41, Petre Ovidiu PIRCALABU wrote:
> > > On Wed, 2019-07-17 at 10:06 +0000, Jan Beulich wrote:
> > > > On 16.07.2019 19:06, Petre Pircalabu wrote:
> > > > > +static void vm_event_channels_free_buffer(struct vm_event_channels_domain *impl)
> > > > >    {
> > > > > -    vm_event_ring_resume(to_ring(v->domain->vm_event_monitor));
> > > > > +    int i;
> > > > > +
> > > > > +    vunmap(impl->slots);
> > > > > +    impl->slots = NULL;
> > > > > +
> > > > > +    for ( i = 0; i < impl->nr_frames; i++ )
> > > > > +        free_domheap_page(mfn_to_page(impl->mfn[i]));
> > > > >    }
> > > >
> > > > What guarantees that there are no mappings left of the pages you
> > > > free here? Sharing pages with guests is (so far) an (almost)
> > > > irreversible action, i.e. they may generally only be freed upon
> > > > domain destruction. See gnttab_unpopulate_status_frames() for a
> > > > case where we actually make an attempt at freeing such pages (but
> > > > where we fail the request in case references are left in place).
> > > >
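
(A rough illustration of the pattern referred to above, loosely modelled
on gnttab_unpopulate_status_frames(); the function name and the exact
reference-count test are schematic, not the actual vm_event or
grant-table code:)

    static int vm_event_channels_check_refs(struct vm_event_channels_domain *impl)
    {
        unsigned int i;

        for ( i = 0; i < impl->nr_frames; i++ )
        {
            struct page_info *pg = mfn_to_page(impl->mfn[i]);

            /* Schematic test: anything beyond the allocation reference
             * means the page is still mapped/referenced somewhere, so
             * refuse to free and fail the request instead. */
            if ( (pg->count_info & PGC_count_mask) > 1 )
                return -EBUSY;
        }

        return 0;
    }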
> > >
> > > I've manually tested 2 cases and both work (no crashes):
> > > 1: introspected domain starts -> monitor starts -> monitor stops ->
> > > domain stops
> > > 2: introspected domain starts -> monitor starts -> domain stops ->
> > > monitor stops.
> >
> > Well, testing is important, but doing tests like you describe won't
> > catch any misbehaving or malicious monitor domain.
> >
> > > However, I will take a closer look at gnttab_unpopulate_status_frames
> > > and post a follow-up email.
> >
> > Thanks.
> >
> Hi Jan,
> 
> Could you help me clarify some things? Maybe I am approaching the whole
> problem incorrectly.
> 
> To help explaining things a little better I will use the following
> abbreviations:
> ID - introspected domain (the domain for which the vm_event requests
> are generated)
> MD - monitor domain (the domain which handles the requests and posts
> the responses)
> 
> The legacy approach (ring) is to have a dedicated gfn in ID (the ring
> page), which is mapped by XEN using __map_domain_page_global; MD then
> uses xc_map_foreign_pages to create its own mapping and
> xc_domain_decrease_reservation_exact to remove the page from ID's
> physmap.
> There are a number of problems with this approach, the most impactful
> being that guests with a high number of vcpus fill up the ring quite
> quickly. This, together with the fact that the vm_event_request size
> grows as monitor applications become more complex, incurs idle time
> for vcpus waiting to post a request.
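
(For reference, a rough sketch of the MD-side legacy sequence described
above, assuming the ring gfn has already been retrieved via
HVM_PARAM_MONITOR_RING_PFN; the real code lives in libxc's
xc_vm_event_enable(), and error handling is omitted:)

    #include <sys/mman.h>
    #include <xenctrl.h>

    static void *map_legacy_ring(xc_interface *xch, uint32_t id_domid,
                                 xen_pfn_t ring_gfn)
    {
        /* Map the ID's ring gfn into the monitor application. */
        void *ring_page = xc_map_foreign_pages(xch, id_domid,
                                               PROT_READ | PROT_WRITE,
                                               &ring_gfn, 1);

        /* Remove the page from the ID's physmap so the guest itself can
         * no longer reach it. */
        xc_domain_decrease_reservation_exact(xch, id_domid, 1, 0, &ring_gfn);

        return ring_page;
    }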
> 
> To alleviate this problem I need a number of frames shared between the
> hypervisor and MD. The ID doesn't need to know about those frames
> because it will never access this memory area (unlike ioreq, which
> intercepts accesses to certain addresses).
> 
> Before using xenforeignmemory_map_resource I investigated several
> different approaches:
> - Allocate the memory in the hypervisor and use xc_map_foreign_pages
> (doesn't work because you cannot "foreignmap" pages of your own domain).
> - Allocate the memory in the guest and map the allocated pages in XEN.
> To my knowledge there is no Linux API to do this, and the monitor
> application, being a userspace program, is not aware of the actual gfns
> of an allocated memory area.
> 
> So, at this point the most promising solution is allocating the memory
> in XEN, sharing it with the ID using share_xen_page_with_guest, and
> mapping it with xenforeignmemory_map_resource (with the
> XENMEM_rsrc_acq_caller_owned flag set).
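
(A rough sketch of what the MD-side mapping would look like with the
resource interface; the resource type XENMEM_resource_vm_event and the
meaning of the 'id' argument follow this patch series rather than an
established API, and error handling is omitted:)

    #include <sys/mman.h>
    #include <xenforeignmemory.h>

    static void *map_vm_event_frames(xenforeignmemory_handle *fmem,
                                     uint32_t id_domid,
                                     unsigned int nr_frames,
                                     xenforeignmemory_resource_handle **res)
    {
        void *addr = NULL;

        /* XENMEM_resource_vm_event is introduced by this series, not an
         * existing resource type. */
        *res = xenforeignmemory_map_resource(fmem, id_domid,
                                             XENMEM_resource_vm_event,
                                             0 /* id, per this series */,
                                             0 /* first frame */, nr_frames,
                                             &addr,
                                             PROT_READ | PROT_WRITE, 0);
        return *res ? addr : NULL;
    }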

If that page is shared with the ID then XENMEM_rsrc_acq_caller_owned should 
*not* be set. Also, that flag is an 'out' flag... the caller doesn't decide who 
owns the resource. TBH I regret ever introducing the flag; it caused a lot of 
problems, which is why it is no longer used.
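
For reference, this is roughly how the flags field is declared in
xen/include/public/memory.h (paraphrased, with comments abridged); note
that it is documented as an OUT field, reported by Xen rather than
requested by the caller:

    struct xen_mem_acquire_resource {
        domid_t domid;       /* IN - domain whose resource is to be mapped */
        uint16_t type;       /* IN - resource type */
        uint32_t id;         /* IN - type-specific resource identifier */
        uint32_t nr_frames;  /* IN/OUT - number of frames to be mapped */
        /*
         * OUT - must be zero on entry; on return Xen may set
         *       XENMEM_rsrc_acq_caller_owned here. The caller never sets it.
         */
        uint32_t flags;
    #define _XENMEM_rsrc_acq_caller_owned 0
    #define XENMEM_rsrc_acq_caller_owned (1u << _XENMEM_rsrc_acq_caller_owned)
        uint64_aligned_t frame;                 /* IN - initial frame index */
        XEN_GUEST_HANDLE(xen_pfn_t) frame_list; /* IN/OUT */
    };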

  Paul
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

