
Re: [PATCH 3/3] xen/sched: fix cpu hotplug


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 9 Aug 2022 08:15:27 +0200
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, Gao Ruifeng <ruifeng.gao@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 09 Aug 2022 06:15:41 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 08.08.2022 12:21, Juergen Gross wrote:
> On 03.08.22 11:53, Jan Beulich wrote:
>> On 02.08.2022 15:36, Juergen Gross wrote:
>>>       switch ( action )
>>>       {
>>>       case CPU_DOWN_FAILED:
>>> +        if ( system_state <= SYS_STATE_active )
>>> +        {
>>> +            if ( mem )
>>> +            {
>>> +                if ( memchr_inv(&mem->affinity, 0, sizeof(mem->affinity)) )
>>> +                    cpupool_free_affin_masks(&mem->affinity);
>>
>> I don't think the conditional is really needed - it merely avoids two
>> xfree(NULL) invocations at the expense of readability here. Plus -
> 
> Okay.
> 
>> wouldn't this better be part of ...
>>
>>> +                schedule_cpu_rm_free(mem, cpu);
>>
>> ... this anyway?
> 
> This would add a layering violation IMHO.
> 
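A minimal sketch of how the CPU_DOWN_FAILED path reads with the agreed
simplification (dropping the memchr_inv() guard while keeping the free in the
cpupool code, as Juergen prefers), assuming xfree(NULL), and hence
cpupool_free_affin_masks() on never-populated masks, remains a no-op; the
"..." marks unchanged surrounding code:

    case CPU_DOWN_FAILED:
        if ( system_state <= SYS_STATE_active )
        {
            if ( mem )
            {
                /* Freeing not-yet-allocated masks is harmless. */
                cpupool_free_affin_masks(&mem->affinity);
                schedule_cpu_rm_free(mem, cpu);
                mem = NULL;
            }
            ...
        }
        break;
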
>>
>>> @@ -1042,12 +1065,32 @@ static int cf_check cpu_callback(
>>>       case CPU_DOWN_PREPARE:
>>>           /* Suspend/Resume don't change assignments of cpus to cpupools. */
>>>           if ( system_state <= SYS_STATE_active )
>>> +        {
>>>               rc = cpupool_cpu_remove_prologue(cpu);
>>> +            if ( !rc )
>>> +            {
>>> +                ASSERT(!mem);
>>> +                mem = schedule_cpu_rm_alloc(cpu);
>>> +                rc = mem ? cpupool_alloc_affin_masks(&mem->affinity) : -ENOMEM;
>>
>> Ah - here you actually want a non-boolean return value. No need to
>> change that then in the earlier patch (albeit of course a change
>> there could be easily accommodated here).
>>
>> Along the lines of the earlier comment this 2nd allocation may also
>> want to move into schedule_cpu_rm_alloc(). If other users of the
>> function don't need the extra allocations, perhaps by adding a bool
>> parameter.
> 
> I could do that, but I still think this would pull cpupool-specific needs
> into sched/core.c.

But the struct isn't cpupool-specific, and hence controlling the setting up
of the field via a function parameter doesn't really look like a layering
violation to me. While IMO the end result would be cleaner (as in: all
allocations / freeing in one place), I'm not going to insist (not least
because I'm not a maintainer of that code anyway).
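For illustration, a rough sketch of the shape being argued for here, with the
second allocation folded into schedule_cpu_rm_alloc() behind a bool parameter;
the struct name, the generic mask helpers (assumed to return 0 on success) and
the error handling are assumptions for the sketch, not necessarily what gets
committed:

    /* Sketch only: names and fields are illustrative. */
    struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu, bool aff_alloc)
    {
        struct cpu_rm_data *data = xzalloc(struct cpu_rm_data);

        if ( !data )
            return NULL;

        /* Optional extra allocation, only for callers which need it. */
        if ( aff_alloc && alloc_affinity_masks(&data->affinity) )
        {
            xfree(data);
            return NULL;
        }

        /* ... existing per-CPU allocations ... */

        return data;
    }

    void schedule_cpu_rm_free(struct cpu_rm_data *data, unsigned int cpu)
    {
        /* Safe even if the masks were never allocated. */
        free_affinity_masks(&data->affinity);
        /* ... existing freeing ... */
        xfree(data);
    }
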

Jan



 

