
[RFC PATCH 10/10] [HACK] alloc pages: enable preemption early


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Tue, 23 Feb 2021 02:34:58 +0000
  • Accept-language: en-US
  • Cc: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Tue, 23 Feb 2021 02:35:26 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [RFC PATCH 10/10] [HACK] alloc pages: enable preemption early

This patch moves spin_unlock() and rcu_unlock_domain() earlier in the
code, just to decrease the time spent with preemption disabled. The
proper fix is to replace the spinlocks with mutexes, but mutexes are
not implemented yet.

With this patch applied, allocating a huge number of pages (e.g. 1GB of
RAM) does not lead to latency problems in time-critical domains.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>
---
 xen/common/memory.c     |  4 ++--
 xen/common/page_alloc.c | 21 ++-------------------
 2 files changed, 4 insertions(+), 21 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 76b9f58478..73c175f64e 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1390,6 +1390,8 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             pv_shim_online_memory(args.nr_extents, args.extent_order);
 #endif
 
+        rcu_unlock_domain(d);
+
         switch ( op )
         {
         case XENMEM_increase_reservation:
@@ -1403,8 +1405,6 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
         }
 
-        rcu_unlock_domain(d);
-
         rc = args.nr_done;
 
         if ( args.preempted )
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 1744e6faa5..43c2f5d6e0 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -996,6 +996,8 @@ static struct page_info *alloc_heap_pages(
     if ( d != NULL )
         d->last_alloc_node = node;
 
+    spin_unlock(&heap_lock);
+
     for ( i = 0; i < (1 << order); i++ )
     {
         /* Reference count must continuously be zero for free pages. */
@@ -1025,8 +1027,6 @@ static struct page_info *alloc_heap_pages(
 
     }
 
-    spin_unlock(&heap_lock);
-
     if ( first_dirty != INVALID_DIRTY_IDX ||
          (scrub_debug && !(memflags & MEMF_no_scrub)) )
     {
@@ -2274,23 +2274,6 @@ int assign_pages(
         goto out;
     }
 
-#ifndef NDEBUG
-    {
-        unsigned int extra_pages = 0;
-
-        for ( i = 0; i < (1ul << order); i++ )
-        {
-            ASSERT(!(pg[i].count_info & ~PGC_extra));
-            if ( pg[i].count_info & PGC_extra )
-                extra_pages++;
-        }
-
-        ASSERT(!extra_pages ||
-               ((memflags & MEMF_no_refcount) &&
-                extra_pages == 1u << order));
-    }
-#endif
-
     if ( pg[0].count_info & PGC_extra )
     {
         d->extra_pages += 1u << order;
-- 
2.29.2



 

