
Re: [Xen-devel] [PATCH RFC 0/6] Slotted channels for sync vm_events



On Thu, 2019-02-07 at 11:46 +0000, George Dunlap wrote:
> On 2/6/19 2:26 PM, Petre Ovidiu PIRCALABU wrote:
> > On Wed, 2018-12-19 at 20:52 +0200, Petre Pircalabu wrote:
> > > This patchset is a rework of the "multi-page ring buffer" for
> > > vm_events patch based on Andrew Cooper's comments.
> > > For synchronous vm_events the ring waitqueue logic was unnecessary
> > > as the vcpu sending the request was blocked until a response was
> > > received.
> > > To simplify the request/response mechanism, an array of slotted
> > > channels was created, one per vcpu. Each vcpu puts the request in
> > > the corresponding slot and blocks until the response is received.
> > > 
> > > I'm sending this patch as an RFC because, while I'm still working
> > > on a way to measure the overall performance improvement, your
> > > feedback would be of great assistance.
> > > 
> > 
> > Is anyone still using asynchronous vm_event requests? (the vcpu is
> > not blocked and no response is expected).
> > If not, I suggest that the feature should be removed as it
> > (significantly) increases the complexity of the current
> > implementation.
> 
> Could you describe in a bit more detail what the situation is?  What's
> the current state of affairs with vm_events, what you're trying to
> change, why async vm_events is more difficult?
> 
The main reason for the vm_events modification was to improve the
overall performance in high-throughput introspection scenarios. For
domUs with a higher vcpu count, a vcpu could sleep for a certain period
of time while waiting for a slot in the shared ring to become available
(__vm_event_claim_slot); roughly, the current synchronous path looks
like the sketch below.
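
A simplified sketch of that path (the helper names here are invented for
illustration and do not match the actual code in xen/common/vm_event.c):

/*
 * All vcpus of a domain share a single 4K ring, so a vcpu can block in
 * the claim step until the monitor application frees a slot.
 * Helper names below are invented; only __vm_event_claim_slot() is a
 * real function.
 */
static void vm_event_send_sync(struct domain *d, vm_event_request_t *req)
{
    /* May put the current vcpu to sleep on the ring's waitqueue until
     * a free slot shows up (cf. __vm_event_claim_slot()). */
    claim_ring_slot(d);

    copy_request_to_ring(d, req);       /* single shared ring per domain */
    notify_monitor_evtchn(d);           /* wake the introspection agent  */
    pause_vcpu_until_response(current); /* sync: wait for the response   */
}
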
The first patchset only increased the ring size. The second iteration,
based on Andrew Cooper's comments, proposed a separate path to handle
synchronous events (a slotted buffer for each vcpu) so that the events
are handled independently of one another; for asynchronous events, a
dynamically allocated vm_event ring is used. While the implementation is
not exactly an exercise in simplicity, it preserves all the needed
functionality and offers a fallback if the Linux domain running the
monitor application doesn't support IOCTL_PRIVCMD_MMAP_RESOURCE.
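
To make the per-vcpu channel idea concrete, it amounts to something like
the following (the structure and state names are made up for this sketch,
not taken from the patches, and the payload size is arbitrary):

#include <stdint.h>

enum vm_event_slot_state {
    SLOT_IDLE,        /* slot free, owned by the vcpu                  */
    SLOT_SUBMITTED,   /* request written, monitor notified via evtchn  */
    SLOT_FINISHED,    /* response written back by the monitor          */
};

struct vm_event_slot {
    uint32_t state;              /* enum vm_event_slot_state            */
    uint8_t payload[256];        /* request on submit, response on done */
};

struct vm_event_channels {
    uint32_t nr_vcpus;
    struct vm_event_slot slot[]; /* slot[v] belongs exclusively to vcpu v */
};

The vcpu-side flow stays "write, notify, pause", but the claim step
disappears because slot[v] is always available to vcpu v.
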
However, the problem got a little more complicated when I tried
implementing the vm_events using an IOREQ server (based on Paul
Durrant's comments). For synchronous vm_events it simplified things a
little, eliminating both the need for a special structure to hold the
processing state and the per-vcpu evtchns. The asynchronous events were
trickier to handle: buffered ioreqs were a good candidate, but the only
usable part is the corresponding evtchn in conjunction with an existing
ring. To use them, a mock buffered ioreq would have to be created and
transmitted, with the only meaningful field being the ioreq type, as
sketched below.
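
Something along these lines (IOREQ_TYPE_VM_EVENT and the queueing helper
are invented names for the sake of the example; buf_ioreq_t is the public
buffered-ioreq structure):

#include <public/hvm/ioreq.h>       /* buf_ioreq_t */

#define IOREQ_TYPE_VM_EVENT 0x7f    /* hypothetical type value */

static void vm_event_notify_async(struct domain *d)
{
    /* Everything except .type is irrelevant to the consumer. */
    buf_ioreq_t bp = { .type = IOREQ_TYPE_VM_EVENT };

    /* Put bp on the buffered ioreq ring and kick the associated event
     * channel so the monitor wakes up (the helper is hypothetical). */
    queue_buffered_ioreq(d, &bp);
}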

> I certainly think it would be better if you could write the new
> vm_event interface without having to spend a lot of effort supporting
> modes that you think nobody uses.
> 
> On the other hand, getting into the habit of breaking stuff, even for
> people we don't know about, will be a hindrance to community growth; a
> commitment to keeping it working will be a benefit to growth.
> 
> But of course, we haven't declared the vm_event interface 'supported'
> (it's not even mentioned in the SUPPORT.md document yet).
> 
> Just for the sake of discussion, would it be possible / reasonable, for
> instance, to create a new interface, vm_events2, instead?  Then you
> could write it to share the ioreq interface without having legacy
> baggage you're not using; we could deprecate and eventually remove
> vm_events1, and if anyone shouts, we can put it back.
> 
> Thoughts?
> 
>  -George
Yes, it's possible and it will GREATLY simplify the implementation. I
just have to make sure the interfaces are mutually exclusive.

Many thanks for your support,
Petre



 

