[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-changelog] [xen master] call sched_destroy_domain before cpupool_rm_domain



commit 117f67350fd18b11ab09d628b4edea3364b09441
Author:     Nathan Studer <nate.studer@xxxxxxxxxxxxxxx>
AuthorDate: Wed Nov 6 10:21:09 2013 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Wed Nov 6 10:21:09 2013 +0100

    call sched_destroy_domain before cpupool_rm_domain
    
    The domain destruction code removes a domain from its cpupool
    before attempting to destroy its scheduler information.  Since
    the scheduler framework uses the domain's cpupool information
    to decide which scheduler ops to use, this results in the
    wrong scheduler's destroy domain function being called when
    the cpupool scheduler and the initial scheduler are
    different.
    
    Correct this by destroying the domain's scheduling information
    before removing it from the pool.
    
    Signed-off-by: Nathan Studer <nate.studer@xxxxxxxxxxxxxxx>
    Reviewed-by: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
---
 xen/common/domain.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index ce20323..8c9b813 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -727,10 +727,10 @@ static void complete_domain_destroy(struct rcu_head *head)
 
     rangeset_domain_destroy(d);
 
-    cpupool_rm_domain(d);
-
     sched_destroy_domain(d);
 
+    cpupool_rm_domain(d);
+
     /* Free page used by xen oprofile buffer. */
 #ifdef CONFIG_XENOPROF
     free_xenoprof_pages(d);
--
generated by git-patchbot for /home/xen/git/xen.git#master

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog


 

