
Re: [Xen-devel] [PATCH 4/4] vm_event: Add support for multi-page ring buffer



On Mon, 2018-09-17 at 15:41 +0100, Andrew Cooper wrote:
> On 13/09/18 16:02, Petre Pircalabu wrote:
> > In high throughput introspection scenarios where lots of monitor
> > vm_events are generated, the ring buffer can fill up before the monitor
> > application gets a chance to handle all the requests, thus blocking
> > other vcpus which will have to wait for a slot to become available.
> > 
> > This patch adds support for extending the ring buffer by allocating a
> > number of pages from domheap and mapping them to the monitor
> > application's domain using the foreignmemory_map_resource interface.
> > Unlike the current implementation, the ring buffer pages are not part
> > of the introspected DomU, so they will not be reclaimed when the
> > monitor is disabled.
> > 
> > Signed-off-by: Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>
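
(For context on the consumer side: the monitor application would map the
extended ring through the resource interface, roughly along the lines of
the sketch below. XENMEM_resource_vm_event_ring and the id/frame values
are assumptions based on this series, not an established ABI.)

#include <sys/mman.h>
#include <xenforeignmemory.h>

/* Hypothetical resource type from this series (not in mainline headers). */
#define XENMEM_resource_vm_event_ring 3

static void *map_monitor_ring(xenforeignmemory_handle *fmem, domid_t domid,
                              unsigned int nr_frames,
                              xenforeignmemory_resource_handle **res)
{
    void *ring = NULL;

    /* Map all nr_frames ring pages into the monitor's address space. */
    *res = xenforeignmemory_map_resource(fmem, domid,
                                         XENMEM_resource_vm_event_ring,
                                         0 /* id */, 0 /* frame */, nr_frames,
                                         &ring, PROT_READ | PROT_WRITE, 0);
    return *res ? ring : NULL;
}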
> 
> What about the slotted format for the synchronous events?  While this is
> fine for the async bits, I don't think we want to end up changing the
> mapping API twice.
> 
> Simply increasing the size of the ring puts more pressure on the
I'm currently investigating this approach and I will send an
implementation proposal with the next version of the patch.
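
To make the term concrete, a "slotted" channel would give each vCPU its
own fixed-size slot instead of a shared ring, so one vCPU can never be
blocked by another's unconsumed events. A purely illustrative sketch
(names and layout are placeholders, not the actual proposal):

struct vm_event_slot {
    uint32_t state;                /* idle / request pending / response ready */
    uint32_t pad;
    union {
        vm_event_request_t  req;
        vm_event_response_t rsp;
    } u;
};

struct vm_event_slotted_channel {
    struct vm_event_slot slot[];   /* one slot per vCPU, indexed by vcpu_id */
};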
> 
> > diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> > index 0d23e52..2a9cbf3 100644
> > --- a/xen/arch/x86/domain_page.c
> > +++ b/xen/arch/x86/domain_page.c
> > @@ -331,10 +331,9 @@ void *__map_domain_pages_global(const struct page_info *pg, unsigned int nr)
> >  {
> >      mfn_t mfn[nr];
> >      int i;
> > -    struct page_info *cur_pg = (struct page_info *)&pg[0];
> >  
> >      for (i = 0; i < nr; i++)
> > -        mfn[i] = page_to_mfn(cur_pg++);
> > +        mfn[i] = page_to_mfn(pg++);
> 
> This hunk looks like it should be in the previous patch?  That
> said...
Yep, I completely missed it. This piece of code will be removed along
with the map_domain_pages_global patch.
> 
> >  
> >      return map_domain_pages_global(mfn, nr);
> >  }
> > diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> > index 4793aac..faece3c 100644
> > --- a/xen/common/vm_event.c
> > +++ b/xen/common/vm_event.c
> > @@ -39,16 +39,66 @@
> >  #define vm_event_ring_lock(_ved)       spin_lock(&(_ved)->ring_lock)
> >  #define vm_event_ring_unlock(_ved)     spin_unlock(&(_ved)->ring_lock)
> >  
> > +#define XEN_VM_EVENT_ALLOC_FROM_DOMHEAP 0xFFFFFFFF
> > +
> > +static int vm_event_alloc_ring(struct domain *d, struct vm_event_domain *ved)
> > +{
> > +    struct page_info *page;
> > +    void *va = NULL;
> > +    int i, rc = -ENOMEM;
> > +
> > +    page = alloc_domheap_pages(d, ved->ring_order, MEMF_no_refcount);
> > +    if ( !page )
> > +        return -ENOMEM;
> 
> ... what is wrong with vzalloc()?
> 
> You don't want to be making a ring_order allocation, especially as the
> order grows.  All you need are some mappings which are virtually
> contiguous, not physically contiguous.
Unfortunately, vzalloc doesn't work here: the acquire_resource call
succeeds, but the subsequent mapping fails, presumably because the
vzalloc'd pages are not assigned to any domain (the "real_pg_owner d-1"
below):
(XEN) mm.c:1024:d0v5 pg_owner d0 l1e_owner d0, but real_pg_owner d-1
(XEN) mm.c:1100:d0v5 Error getting mfn dd24d (pfn ffffffffffffffff)
from L1 entry 80000000dd24d227 for l1e_owner d0, pg_owner d0
(XEN) mm.c:1024:d0v5 pg_owner d0 l1e_owner d0, but real_pg_owner d-1
(XEN) mm.c:1100:d0v5 Error getting mfn dd24c (pfn ffffffffffffffff)
from L1 entry 80000000dd24c227 for l1e_owner d0, pg_owner d0

However, allocating each page with alloc_domheap_page and then mapping
them using vmap does the trick.
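
Roughly (a sketch only, not the exact code in the branch below; the
function name and the error handling are illustrative):

static void *vm_event_alloc_ring_va(struct domain *d, unsigned int nr_pages,
                                    struct page_info **pages)
{
    mfn_t mfn[nr_pages];
    unsigned int i;
    void *va;

    for ( i = 0; i < nr_pages; i++ )
    {
        /* Per-page allocation: no physical contiguity required. */
        pages[i] = alloc_domheap_page(d, MEMF_no_refcount);
        if ( !pages[i] )
            goto err;
        mfn[i] = page_to_mfn(pages[i]);
    }

    /* Stitch the pages into a single virtually contiguous mapping. */
    va = vmap(mfn, nr_pages);
    if ( !va )
        goto err;

    memset(va, 0, nr_pages * PAGE_SIZE);
    return va;

 err:
    while ( i-- )
        free_domheap_page(pages[i]);
    return NULL;
}
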
Until the next version is ready (slotted format for synchronous events)
I have pushed an intermediate version, which addresses the issues raised
by you and Jan, to my github fork of the xen repository:
https://github.com/petrepircalabu/xen/tree/multi_page_ring_buffer/devel_new

> ~Andrew

Many thanks for your support,
Petre

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

