
Re: [Xen-devel] [PATCH 4/4] vm_event: Add support for multi-page ring buffer



On Thu, 2018-09-13 at 10:42 -0600, Tamas K Lengyel wrote:
> On Thu, Sep 13, 2018 at 9:02 AM Petre Pircalabu
> <ppircalabu@xxxxxxxxxxxxxxx> wrote:
> > 
> > 
> > In high throughput introspection scenarios where lots of monitor
> > vm_events are generated, the ring buffer can fill up before the
> > monitor
> > application gets a chance to handle all the requests thus blocking
> > other vcpus which will have to wait for a slot to become available.
> > 
> > This patch adds support for extending the ring buffer by allocating
> > a
> > number of pages from domheap and mapping them to the monitor
> > application's domain using the foreignmemory_map_resource
> > interface.
> > Unlike the current implementation, the ring buffer pages are not
> > part of
> > the introspected DomU, so they will not be reclaimed when the
> > monitor is
> > disabled.
> > 
> > Signed-off-by: Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>
> Thanks for this addition, it has been on the TODO for a long while
> now. Could you also please push the patches as a git branch
> somewhere?
I've pushed it to my github repository (branch:
multi_page_ring_buffer/devel_new):
https://github.com/petrepircalabu/xen/tree/multi_page_ring_buffer/devel_new
> 
> > 
> > ---
> >  tools/libxc/include/xenctrl.h       |   2 +
> >  tools/libxc/xc_monitor.c            |   7 +
> >  tools/libxc/xc_private.h            |   3 +
> >  tools/libxc/xc_vm_event.c           |  49 +++++++
>
> > +        xenaccess->vm_event.domain_id,
> > +        xenaccess->vm_event.ring_page_count,
> > +        &xenaccess->vm_event.evtchn_port);
> > +
> > +    if (xenaccess->vm_event.ring_buffer == NULL && errno ==
> > EOPNOTSUPP)
> How would this situation ever arise? If there is a chance that you
> can't setup multi-page rings, perhaps adding a hypercall that would
> tell the user how many pages are max available for the ring is the
> better route. This just seems like guessing right now.
> 
The multi-page ring buffer is mapped using
xenforeignmemory_map_resource(), which relies on
IOCTL_PRIVCMD_MMAP_RESOURCE. This ioctl was only added in kernel
4.18.1, which is relatively new. If the monitor domain's kernel doesn't
recognize the ioctl, the call fails and errno is set to EOPNOTSUPP, in
which case we fall back to the single-page ring.
> > 
> > +    {
> > +        xenaccess->vm_event.ring_page_count = 1;
> > +        xenaccess->vm_event.ring_buffer =
> >              xc_monitor_enable(xenaccess->xc_handle,
> >                                xenaccess->vm_event.domain_id,
> >                                &xenaccess->vm_event.evtchn_port);

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

