Re: [PATCH] xen/sched: fix sched_move_domain() for domain without vcpus
On 06.09.21 13:23, Jan Beulich wrote:
> On 06.09.2021 13:18, Andrew Cooper wrote:
>> On 06/09/2021 12:14, Andrew Cooper wrote:
>>> On 06/09/2021 12:00, Juergen Gross wrote:
>>>> In case a domain is created with a cpupool other than Pool-0
>>>> specified, it will be moved to that cpupool before any vcpus are
>>>> allocated. This will lead to a NULL pointer dereference in
>>>> sched_move_domain().
>>>>
>>>> Fix that by tolerating vcpus not being allocated yet.
>>>>
>>>> Fixes: 70fadc41635b9b6 ("xen/cpupool: support moving domain between cpupools with different granularity")
>>>> Reported-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>>>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>>>> ---
>>>>  xen/common/sched/core.c | 4 ++++
>>>>  1 file changed, 4 insertions(+)
>>>>
>>>> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
>>>> index 8d178baf3d..79c9100680 100644
>>>> --- a/xen/common/sched/core.c
>>>> +++ b/xen/common/sched/core.c
>>>> @@ -671,6 +671,10 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
>>>>      for ( unit_idx = 0; unit_idx < n_units; unit_idx++ )
>>>>      {
>>>> +        /* Special case for move at domain creation time. */
>>>> +        if ( !d->vcpu[unit_idx * gran] )
>>>> +            break;
>>>> +
>>>>          unit = sched_alloc_unit_mem();
>>>>          if ( unit )
>>>>          {
>>>
>>> I think the logic would be clearer if you wrap the entire for loop
>>> in if ( d->max_vcpus ).
>>
>> And of course, this is wrong. Turns out the domain_has_vcpus()
>> predicate still hasn't been committed, but d->vcpu[0] is the correct
>> internal check.
>
> Which in turn might want to be done by setting n_units to zero when
> d->vcpu[0] is NULL?

Yes, this would be possible.

OTOH my variant is more robust in case not all vcpus are allocated,
but I guess this will explode somewhere else anyway.

In case I don't get any other comment today I'll change the patch to
set n_units to 0 if d->vcpu[0] is NULL.


Juergen
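
For reference, a minimal sketch of the n_units variant discussed above,
as it might look near the top of sched_move_domain(). Only the
d->vcpu[0] test is taken from the thread; the DIV_ROUND_UP(d->max_vcpus,
gran) expression is an assumption about how n_units is computed, used
here purely for illustration:

    /*
     * Sketch of the suggested alternative: compute n_units as zero when
     * no vcpus have been allocated yet, so the unit allocation loop
     * below is never entered for a move at domain creation time.
     * The DIV_ROUND_UP() term is assumed, not taken from the patch.
     */
    unsigned int n_units = d->vcpu[0] ? DIV_ROUND_UP(d->max_vcpus, gran)
                                      : 0;

    for ( unit_idx = 0; unit_idx < n_units; unit_idx++ )
    {
        unit = sched_alloc_unit_mem();
        ...
    }

This keeps the loop body free of the creation-time special case at the
cost of relying on d->vcpu[0] as a proxy for "any vcpus allocated", the
trade-off Juergen notes above.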