
Re: [Xen-devel] [PATCH 1/2] vm_event: sync domctl





On Wed, Dec 23, 2015 at 8:14 PM, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
On 23/12/2015 18:11, Tamas K Lengyel wrote:


On Wed, Dec 23, 2015 at 6:17 PM, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
On 23/12/2015 15:41, Razvan Cojocaru wrote:
> On 12/23/2015 04:53 PM, Tamas K Lengyel wrote:
>> Introduce new vm_event domctl option which allows an event subscriber
>> to request all vCPUs not currently pending a vm_event request to be paused,
>> thus allowing the subscriber to sync up on the state of the domain. This
>> is especially useful when the subscriber wants to disable certain events
>> from being delivered and wants to ensure no more requests are pending on the
>> ring before doing so.
>>
>> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
>> Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
>> Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
>> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
>> Cc: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
>> Signed-off-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
> This certainly looks very interesting. Would xc_domain_pause() not be
> enough for your use case then?

I second this query. I would have thought xc_domain_pause() does
exactly what you want in this case.

The problem is the order in which the responses are processed. I may be wrong about the logic, but here is my impression: xc_domain_unpause resumes all vCPUs even if there is still a vm_event response that has not been processed. So if the subscriber set response flags (altp2m switch, singlestep toggle, etc.), those actions would not be performed on the vCPU before it is resumed. Even if the subscriber processes all requests and signals via the event channel that the responses are on the ring before calling xc_domain_unpause, we can still race between Xen processing the responses from the ring and the vCPU being unpaused.
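
Roughly, the ordering I'm worried about looks like this (a sketch, not the actual code; put_response() is a hypothetical helper wrapping the usual RING_* macros):

    #include <xenctrl.h>
    #include <xen/vm_event.h>

    /* Sketch of the racy ordering described above (error handling omitted);
     * put_response() is a hypothetical helper that copies the response onto
     * the shared ring via the RING_* macros. */
    static void resume_after_event(xc_interface *xch, xc_evtchn *xce,
                                   domid_t domid, evtchn_port_t port,
                                   vm_event_back_ring_t *back_ring,
                                   const vm_event_request_t *req)
    {
        vm_event_response_t rsp = {
            .version = VM_EVENT_INTERFACE_VERSION,
            .vcpu_id = req->vcpu_id,
            .flags   = req->flags | VM_EVENT_FLAG_TOGGLE_SINGLESTEP,
        };

        put_response(back_ring, &rsp); /* response goes on the ring          */
        xc_evtchn_notify(xce, port);   /* only kicks Xen, returns right away */
        xc_domain_unpause(xch, domid); /* can race with Xen consuming the
                                        * response: the vCPU may resume
                                        * before the toggle is applied       */
    }
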
The code provided is racy, as it is liable to alter which pause
references it takes/releases depending on what other pause/unpause
actions are being made.

It's understood that the user would not use xc_domain_pause/unpause while using vm_event responses with response flags specified. Even so, it was already racy IMHO if the user called xc_domain_unpause before processing the requests from the vm_event ring that originally paused the vCPU, so this patch doesn't change that situation.

Pausing is strictly reference counted (or rather, it has been since c/s 3eb1c70 "properly reference count DOMCTL_{,un}pausedomain hypercalls"; before that, it was definitely buggy).

There is the domain pause count, and pause counts per vcpu. All domain pause operations take both a domain pause reference, and a vcpu pause reference on each vcpu. A vcpu is only eligible to be scheduled if its pause reference count is zero. If two independent tasks call vcpu_pause() on the same vcpu, it will remain paused until both independent tasks have called vcpu_unpause().
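
In code terms, the semantics are approximately the following (a simplified model of the logic in xen/common/domain.c and the scheduler, not the literal source):

    #include <xen/sched.h>

    /* Simplified model of Xen's pause reference counting. */
    void vcpu_pause(struct vcpu *v)
    {
        atomic_inc(&v->pause_count);   /* take a reference...             */
        vcpu_sleep_sync(v);            /* ...and wait until descheduled   */
    }

    void vcpu_unpause(struct vcpu *v)
    {
        if ( atomic_dec_and_test(&v->pause_count) )
            vcpu_wake(v);              /* wake only on the last reference */
    }

    /* A vcpu is eligible to run only with no outstanding references on
     * either itself or its domain (simplified). */
    bool_t vcpu_runnable(struct vcpu *v)
    {
        return !atomic_read(&v->pause_count) &&
               !atomic_read(&v->domain->pause_count);
    }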

Having said this, I can well believe that there might be issues with the current uses of pausing.

The vital factor is that the entity which pauses a vcpu is also responsible for unpausing it, and it must be resistant to accidentally leaking its reference.

In this case, I believe that what you want to do is:

1) Identify condition requiring a sync
2) xc_domain_pause()
3) Process all of the pending vm_events
4) Synchronise the state
5) xc_domain_unpause()

All vcpus of the domain should stay descheduled between points 2 and 5. If this doesn't have the intended effect, then I suspect there is a bug in the pause reference handling of the vm_event subsystem.
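
In terms of the public interfaces, that flow would look roughly like this (drain_ring() standing in for your usual request-processing loop; a sketch, not a drop-in implementation):

    #include <xenctrl.h>
    #include <xen/vm_event.h>

    /* Sketch of steps 1-5 above; error handling omitted, and drain_ring()
     * is a hypothetical helper that consumes every pending request from
     * the ring and queues its response. */
    static void sync_with_domain(xc_interface *xch, xc_evtchn *xce,
                                 domid_t domid, evtchn_port_t port,
                                 vm_event_back_ring_t *back_ring)
    {
        xc_domain_pause(xch, domid);              /* 2) deschedule vCPUs  */

        while ( RING_HAS_UNCONSUMED_REQUESTS(back_ring) )
            drain_ring(back_ring, xce, port);     /* 3) pending vm_events */

        /* 4) synchronise state, e.g. disable further event delivery */

        xc_domain_unpause(xch, domid);            /* 5) drop references   */
    }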

Is this clearer, or have I misunderstood the problem?

The problem is with steps 4 & 5 IMHO. The event channel notification is AFAIK asynchronous, in that it just kicks off the processing of the pending vm_event responses and returns; it doesn't wait for all the responses to be processed. So if we progress to step 5, there may still be responses on the ring that have not been processed yet, hence the race condition. There is currently no way to be notified once all responses have been processed, so the best we can do is make sure we only pause/unpause the vCPUs that have no pending requests/responses, as those are safe to resume, while restricting the other vCPUs to be unpaused only through their pending vm_event response. I hope this makes sense.
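
For reference, the hypervisor side of the proposed domctl boils down to roughly this (simplified sketch; the names approximate what is in the patch):

    #include <xen/sched.h>

    /* Simplified sketch of the proposed sync domctl: pause only the vCPUs
     * that do not already hold a vm_event pause reference, so the others
     * can only be resumed via their pending vm_event response. */
    static void vm_event_sync_domain(struct domain *d)
    {
        struct vcpu *v;

        for_each_vcpu ( d, v )
            if ( !atomic_read(&v->vm_event_pause_count) )
                vcpu_pause(v);   /* no request/response in flight: safe */
    }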

That being said, I haven't yet encountered an instance where this race was actually lost, so this is just a precaution.

Tamas
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


