
Re: [Xen-devel] [PATCH v2 12/12] xen/domain: Allocate d->vcpu[] in domain_create()



>>> On 15.08.18 at 16:03, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 15/08/18 14:11, Jan Beulich wrote:
>>>>> On 13.08.18 at 12:01, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> @@ -423,6 +436,11 @@ struct domain *domain_create(domid_t domid,
>>>  
>>>      sched_destroy_domain(d);
>>>  
>>> +    if ( d->max_vcpus )
>>> +    {
>>> +        d->max_vcpus = 0;
>>> +        XFREE(d->vcpu);
>>> +    }
>>>      if ( init_status & INIT_arch )
>>>          arch_domain_destroy(d);
>> I'm not sure it is a good idea to free the vcpus this early, in particular
>> before arch_domain_destroy().
> 
> Actually, this positioning is deliberate, so as not to change the
> current behaviour of arch_domain_destroy().
> 
> Before this patch, d->vcpu[] was guaranteed to be NULL in the
> arch_domain_destroy() call, and I don't currently trust it to work
> properly if changed.  All of this cleanup logic needs further improvement.

Oh, good point.
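
(For reference, XFREE() both frees the allocation and NULLs the
pointer, which is what preserves the d->vcpu == NULL state that
arch_domain_destroy() previously saw. Roughly speaking, it is a
shorthand for the following idiom; this is a sketch, not the exact Xen
macro:)

    /* Approximately what XFREE() does; the real definition may differ. */
    #define XFREE(p) do { xfree(p); (p) = NULL; } while ( 0 )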

>>> --- a/xen/common/domctl.c
>>> +++ b/xen/common/domctl.c
>>> @@ -554,16 +554,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>  
>>>          ret = -EINVAL;
>>>          if ( (d == current->domain) || /* no domain_pause() */
>>> -             (max > domain_max_vcpus(d)) )
>>> +             (max != d->max_vcpus) )   /* max_vcpus set up in createdomain */
>>>              break;
>>>  
>>> -        /* Until Xenoprof can dynamically grow its vcpu-s array... */
>>> -        if ( d->xenoprof )
>>> -        {
>>> -            ret = -EAGAIN;
>>> -            break;
>>> -        }
>>> -
>>>          /* Needed, for example, to ensure writable p.t. state is synced. */
>>>          domain_pause(d);
>>>  
>>> @@ -581,38 +574,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>>              }
>>>          }
>>>  
>>> -        /* We cannot reduce maximum VCPUs. */
>>> -        ret = -EINVAL;
>>> -        if ( (max < d->max_vcpus) && (d->vcpu[max] != NULL) )
>>> -            goto maxvcpu_out;
>>> -
>>> -        /*
>>> -         * For now don't allow increasing the vcpu count from a non-zero
>>> -         * value: This code and all readers of d->vcpu would otherwise need
>>> -         * to be converted to use RCU, but at present there's no tools side
>>> -         * code path that would issue such a request.
>>> -         */
>>> -        ret = -EBUSY;
>>> -        if ( (d->max_vcpus > 0) && (max > d->max_vcpus) )
>>> -            goto maxvcpu_out;
>>> -
>>>          ret = -ENOMEM;
>>>          online = cpupool_domain_cpumask(d);
>>> -        if ( max > d->max_vcpus )
>>> -        {
>>> -            struct vcpu **vcpus;
>>> -
>>> -            BUG_ON(d->vcpu != NULL);
>>> -            BUG_ON(d->max_vcpus != 0);
>>> -
>>> -            if ( (vcpus = xzalloc_array(struct vcpu *, max)) == NULL )
>>> -                goto maxvcpu_out;
>>> -
>>> -            /* Install vcpu array /then/ update max_vcpus. */
>>> -            d->vcpu = vcpus;
>>> -            smp_wmb();
>>> -            d->max_vcpus = max;
>>> -        }
>>>  
>>>          for ( i = 0; i < max; i++ )
>>>          {
>> With all of this dropped, I think the domctl should be renamed. If its
>> "max" input were dropped at the same time, there would then also no
>> longer be any need to check that the value matches what was stored
>> during domain creation.
> 
> I'm still looking to eventually delete the hypercall, but we need to be
> able to clean up all domain/vcpu allocations without calling
> complete_domain_destroy(), or to rearrange the entry logic so that
> complete_domain_destroy() can be reused for a domain which isn't
> currently in the domlist.
> 
> Unfortunately, I think this is going to be fairly complicated.

Especially since we expect this to take some time, I think it would be
quite helpful for the domctl to actually say what it does until then,
rather than retaining its current (by then misleading) name.
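
As an aside on the hunk being dropped above: the "install vcpu array
/then/ update max_vcpus" sequence is the usual lock-free publish idiom,
i.e. make the storage visible before advertising its size, so that a
consumer observing d->max_vcpus == n can safely index d->vcpu[0..n-1].
A sketch of both halves (the reader side is illustrative, not quoted
from actual Xen code):

    /* Writer (the code being removed): publish the array, then the count. */
    d->vcpu = vcpus;
    smp_wmb();
    d->max_vcpus = max;

    /* Reader (illustrative): consume the count, then the array. */
    unsigned int n = d->max_vcpus;
    smp_rmb();
    if ( id < n )
        v = d->vcpu[id];

Presumably no such barrier is needed when the allocation instead happens
in domain_create(), before the domain becomes visible to anyone else.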

Jan


