
Re: [PATCH v2 3/3] arinc653: avoid array overrun


  • To: Jürgen Groß <jgross@xxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Wed, 25 Mar 2026 16:43:26 -0400
  • Cc: Nathan Studer <nathan.studer@xxxxxxxxxxxxxxx>, Stewart Hildebrand <stewart@xxxxxxx>, Kamil Frankowicz <kamil.frankowicz@xxxxxxx>
  • Delivery-date: Wed, 25 Mar 2026 20:43:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 3/25/26 09:22, Jürgen Groß wrote:
> On 25.03.26 13:55, Jan Beulich wrote:
>> Incrementing ->sched_index between bounds check and array access may
>> result in accessing one past the array when that is fully filled
>> (->num_schedule_entries == ARINC653_MAX_DOMAINS_PER_SCHEDULE).
>>
>> Fixes: 22787f2e107c ("ARINC 653 scheduler")
>> Reported-by: Kamil Frankowicz <kamil.frankowicz@xxxxxxx>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> Reviewed-by: Juergen Gross <jgross@xxxxxxxx>

Reviewed-by: Stewart Hildebrand <stewart@xxxxxxx>

Thanks for this.

> 
> with ...
> 
>> ---
>> Jürgen, provided I understood him correctly, suggests that something like
>>
>>      while ( now >= sched_priv->next_switch_time )
>>      {
>>          sched_priv->sched_index++;
>>          ASSERT(sched_priv->sched_index < sched_priv->num_schedule_entries);
>>          sched_priv->next_switch_time +=
>>              sched_priv->schedule[sched_priv->sched_index].runtime;
>>      }
>>
>> should also be valid to move to, due to constraints applied by
>> arinc653_sched_set().

Not quite, because major_frame is allowed to be larger than the sum of the
runtimes, and in that case the ASSERT would trigger during the idle period.

>> I'm hesitant to make such a change though, not
>> really knowing the scheduler; the change here looks more obviously correct
>> to me. Albeit the Fixes: tag may thus want dropping.
> 
> the Fixes: tag dropped, as the constraints mentioned are IMO really enough
> to avoid an issue.

No, the constraints aren't enough: the out-of-bounds access would still occur
during an idle period of a fully filled schedule. I suggest keeping the Fixes: tag.

> 
> I agree that the current code is far from obviously correct, so your patch
> is definitely an improvement.
> 
> 
> Juergen
