[Xen-devel] Re: [Xen-changelog] Simpler domid allocation.
Hi Keir,

This change worries me a bit because I believe it increases the likelihood of subtle race conditions in the tools. It's now likely that if one destroys a domain and immediately creates a new domain, when the tools finally see the VIRQ they'll be very confused, since the domid appears to be valid.

The situation is worse for a destroyed domain. A destroy does not generate a VIRQ, so if the destroy and the create happen within whatever the tools' polling interval is, it will appear to the tools that the destroy simply didn't work.

The solution would be a completely serialized tool chain, although that rules out running multiple tools simultaneously, or a distributed tool chain in which most operations don't require a central daemon.

The old domid allocation was odd, but it kept life easier by making race conditions difficult to create.

Regards,

Anthony Liguori

Xen patchbot -unstable wrote:

# HG changeset patch
# User kaf24@xxxxxxxxxxxxxxxxxxxx
# Node ID 8d04aa7b42805d6d15c651abe13249c4d2eefaf7
# Parent  1d026c7023d28aad1a409638d4f1db518a80bdfd
Simpler domid allocation.
Signed-off-by: Keir Fraser <keir@xxxxxxxxxxxxx>

diff -r 1d026c7023d2 -r 8d04aa7b4280 xen/common/dom0_ops.c
--- a/xen/common/dom0_ops.c     Thu Jul 14 23:48:06 2005
+++ b/xen/common/dom0_ops.c     Fri Jul 15 07:53:46 2005
@@ -37,55 +37,6 @@
     put_domain(d);
     return 0;
-}
-
-/*
- * Allocate a free domain id. We try to reuse domain ids in a fairly low range,
- * only expanding the range when there are no free domain ids. This is to keep
- * domain ids in a range depending on the number that exist simultaneously,
- * rather than incrementing domain ids in the full 32-bit range.
- */
-static int allocate_domid(domid_t *pdom)
-{
-    static spinlock_t domid_lock = SPIN_LOCK_UNLOCKED;
-    static domid_t curdom = 0;
-    static domid_t topdom = 101;
-    int err = 0;
-    domid_t dom;
-
-    spin_lock(&domid_lock);
-
-    /* Try to use a domain id in the range 0..topdom, starting at curdom. */
-    for ( dom = curdom + 1; dom != curdom; dom++ )
-    {
-        if ( dom == topdom )
-            dom = 1;
-        if ( is_free_domid(dom) )
-            goto exit;
-    }
-
-    /* Couldn't find a free domain id in 0..topdom, try higher. */
-    for ( dom = topdom; dom < DOMID_FIRST_RESERVED; dom++ )
-    {
-        if ( is_free_domid(dom) )
-        {
-            topdom = dom + 1;
-            goto exit;
-        }
-    }
-
-    /* No free domain ids. */
-    err = -ENOMEM;
-
- exit:
-    if ( err == 0 )
-    {
-        curdom = dom;
-        *pdom = dom;
-    }
-
-    spin_unlock(&domid_lock);
-    return err;
 }
 
 static void getdomaininfo(struct domain *d, dom0_getdomaininfo_t *info)
@@ -217,18 +168,33 @@
     domid_t      dom;
     struct vcpu *v;
     unsigned int i, cnt[NR_CPUS] = { 0 };
-
+    static spinlock_t alloc_lock = SPIN_LOCK_UNLOCKED;
+    static domid_t rover = 0;
+
+    spin_lock(&alloc_lock);
 
     dom = op->u.createdomain.domain;
     if ( (dom > 0) && (dom < DOMID_FIRST_RESERVED) )
     {
         ret = -EINVAL;
         if ( !is_free_domid(dom) )
-            break;
-    }
-    else if ( (ret = allocate_domid(&dom)) != 0 )
-    {
-        break;
+            goto alloc_out;
+    }
+    else
+    {
+        for ( dom = rover + 1; dom != rover; dom++ )
+        {
+            if ( dom == DOMID_FIRST_RESERVED )
+                dom = 0;
+            if ( is_free_domid(dom) )
+                break;
+        }
+
+        ret = -ENOMEM;
+        if ( dom == rover )
+            goto alloc_out;
+
+        rover = dom;
     }
 
     /* Do an initial CPU placement. Pick the least-populated CPU. */
@@ -249,11 +215,12 @@
             pro = i;
 
         ret = -ENOMEM;
-        if ( (d = do_createdomain(dom, pro)) == NULL )
-            break;
-
-        ret = 0;
-
+        if ( (d = do_createdomain(dom, pro)) != NULL )
+            ret = 0;
+
+    alloc_out:
+        spin_unlock(&alloc_lock);
+
         op->u.createdomain.domain = d->domain_id;
         copy_to_user(u_dom0_op, op, sizeof(*op));
     }

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
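To see the reuse behaviour being discussed, the rover-style allocation loop from the patch can be exercised as a minimal standalone sketch. This is not the hypervisor code: `is_free_domid()` here is a toy bitmap, `free_domid()` is a hypothetical helper standing in for domain destruction, `DOMID_FIRST_RESERVED` is shrunk to 16 so the wrap-around is easy to trigger, and the spinlock is omitted since the sketch is single-threaded.

```c
#include <stdbool.h>
#include <stdint.h>

#define DOMID_FIRST_RESERVED 16   /* shrunk for the sketch */

typedef uint16_t domid_t;

/* Toy occupancy bitmap; domid 0 is dom0 and is never free. */
static bool domid_in_use[DOMID_FIRST_RESERVED] = { true };

static bool is_free_domid(domid_t dom)
{
    return !domid_in_use[dom];
}

/* Mirrors the allocation loop in the new createdomain path. */
int allocate_domid_rover(domid_t *pdom)
{
    static domid_t rover = 0;
    domid_t dom;

    for ( dom = rover + 1; dom != rover; dom++ )
    {
        if ( dom == DOMID_FIRST_RESERVED )
            dom = 0;
        if ( is_free_domid(dom) )
            break;
    }

    if ( dom == rover )
        return -1;                /* stands in for -ENOMEM */

    rover = dom;
    domid_in_use[dom] = true;     /* caller "creates" the domain */
    *pdom = dom;
    return 0;
}

/* Hypothetical stand-in for domain destruction. */
void free_domid(domid_t dom)
{
    domid_in_use[dom] = false;
}
```

Two properties fall out of the loop shape: the id currently equal to `rover` is never considered (the loop exits before testing it), and once the rover wraps, a freed id is handed out again on the very next allocation — which is the rapid-reuse window the mail above is worried about.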