
Re: [PATCH v2 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 15 Aug 2022 14:00:52 +0200
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxx>, Dario Faggioli <dfaggioli@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 15 Aug 2022 12:01:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 15.08.2022 13:55, Juergen Gross wrote:
> On 15.08.22 13:52, Jan Beulich wrote:
>> On 15.08.2022 13:04, Juergen Gross wrote:
>>> --- a/xen/common/sched/core.c
>>> +++ b/xen/common/sched/core.c
>>> @@ -3237,6 +3237,65 @@ out:
>>>       return ret;
>>>   }
>>>   
>>> +static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
>>> +{
>>> +    struct cpu_rm_data *data;
>>> +    const struct sched_resource *sr;
>>> +    unsigned int idx;
>>> +
>>> +    rcu_read_lock(&sched_res_rculock);
>>> +
>>> +    sr = get_sched_res(cpu);
>>> +    data = xmalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
>>> +    if ( !data )
>>> +        goto out;
>>> +
>>> +    data->old_ops = sr->scheduler;
>>> +    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
>>> +    data->ppriv_old = sr->sched_priv;
>>
>> Repeating a v1 comment:
>>
>> "At least from an abstract perspective, doesn't reading fields from
>>   sr require the RCU lock to be held continuously (i.e. not dropping
>>   it at the end of this function and re-acquiring it in the caller)?"
>>
>> Initially I thought you did respond to this in some way, but when
>> looking for a matching reply I couldn't find one.
> 
> Oh, sorry.
> 
> The RCU lock is protecting only the sr, not any data the pointers in
> the sr are referencing. So it is fine to drop the RCU lock after
> reading some of the fields from the sr and storing them in the
> cpu_rm_data memory.

Hmm, interesting. "Protecting only the sr" then means what exactly?
Just its allocation, but not its contents?
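
If I'm following, the claim boils down to this pattern (a minimal
sketch with stand-in types; only sched_res_rculock, get_sched_res()
and the copied field names come from the patch above, the rest is
illustrative):

    /* Stand-in type; the real structure is larger. */
    struct sched_resource {
        const struct scheduler *scheduler;
        void *sched_priv;
        unsigned int granularity;
    };

    /*
     * The RCU read lock pins only the sr allocation: the structure
     * cannot be freed while the lock is held.  Values copied out of
     * it stay usable after the unlock because they are plain copies;
     * the lifetime of whatever they point at is a separate, non-RCU
     * question.
     */
    rcu_read_lock(&sched_res_rculock);
    sr = get_sched_res(cpu);
    data->old_ops = sr->scheduler;      /* copied while sr is pinned */
    data->ppriv_old = sr->sched_priv;
    rcu_read_unlock(&sched_res_rculock);

    /* From here on only the copies in data are used, never sr. */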

Plus it's not just the pointers - sr->granularity had also better not
increase in the meantime ... Quite likely there's a reason why that also
cannot happen, yet even then I think a brief code comment might be
helpful here.
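
To make the sizing concern concrete (a hypothetical illustration, not
code from the patch; use() is just a placeholder):

    /* Sized for the granularity observed at allocation time ... */
    data = xmalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);

    /* ... so a later re-read must not observe a larger value: */
    sr = get_sched_res(cpu);
    for ( idx = 0; idx < sr->granularity; idx++ )
        use(&data->sr[idx]);   /* walks past the allocation if it grew */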

Jan