
Re: [Xen-devel] [PATCH] common/vm_event: Prevent guest locking with large max_vcpus

On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx> wrote:
It is currently possible for the guest to lock up when subscribing
to synchronous vm_events if max_vcpus is larger than the
number of available ring buffer slots. This patch no longer
blocks already-paused vCPUs, fixing the issue for this use case.

Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
 xen/common/vm_event.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 82ce8f1..2005a64 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
      * See the comments above wake_blocked() for more information
      * on how this mechanism works to avoid waiting. */
     avail_req = vm_event_ring_available(ved);
-    if( current->domain == d && avail_req < d->max_vcpus )
+    if( current->domain == d && avail_req < d->max_vcpus &&
+        !atomic_read( &current->vm_event_pause_count ) )
         vm_event_mark_and_pause(current, ved);
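The guard in the hunk above can be modeled outside of Xen. The following is a minimal standalone sketch, not Xen code: the struct layout and the helper name should_block are made up for illustration, and the atomic pause counter is reduced to a plain int. It shows the fixed condition, i.e. a vCPU is marked-and-paused only when ring space is scarce and it is not already paused for a pending vm_event.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the check in vm_event_put_request(); all names and
 * field types here are illustrative, not Xen's actual definitions. */
struct vcpu {
    int vm_event_pause_count;   /* > 0 means the vCPU is already paused */
};

/* The fixed condition: block the current vCPU only if available ring
 * slots cannot cover every vCPU AND this vCPU is not already paused. */
static bool should_block(const struct vcpu *v, int avail_req, int max_vcpus)
{
    return avail_req < max_vcpus && v->vm_event_pause_count == 0;
}
```

Before the patch, the pause-count term was missing, so an already-paused vCPU could be blocked again, which is what led to the lockup.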

Hi Razvan,

I would also like this patch to unblock the vCPUs as soon as a slot opens up on the ring. The change as it stands will not solve the problem when asynchronous events are also in use.
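The suggested unblock-on-free behavior can be sketched in the same toy style. This is an assumption about the intended mechanism, not Xen's implementation: the struct, the fixed NVCPUS bound, and the helper name release_slot are all hypothetical. The idea is that consuming a response frees a slot, and a blocked vCPU is woken immediately rather than waiting for the next event.

```c
#include <assert.h>
#include <stdbool.h>

#define NVCPUS 4   /* illustrative bound, not a Xen constant */

struct toy_vcpu {
    bool blocked;   /* marked-and-paused waiting for ring space */
};

struct toy_ring {
    int avail;                      /* free request slots */
    struct toy_vcpu vcpus[NVCPUS];
};

/* Illustrative: when a response is consumed and a slot frees up,
 * immediately unblock one waiting vCPU (ordering is not modeled). */
static void release_slot(struct toy_ring *r)
{
    r->avail++;
    for (int i = 0; i < NVCPUS; i++)
    {
        if (r->vcpus[i].blocked)
        {
            r->vcpus[i].blocked = false;
            break;
        }
    }
}
```

With asynchronous events filling the ring, only this kind of eager wake-up guarantees blocked vCPUs make progress once space exists, which is the reviewer's point.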


Xen-devel mailing list