
Discussion on the delayed start of major frame with ARINC653 scheduler


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: "Choi, Anderson" <Anderson.Choi@xxxxxxxxxx>
  • Date: Thu, 26 Jun 2025 03:50:30 +0000
  • Accept-language: en-US
  • Cc: "nathan.studer@xxxxxxxxxxxxxxx" <nathan.studer@xxxxxxxxxxxxxxx>, "stewart@xxxxxxx" <stewart@xxxxxxx>, "Weber (US), Matthew L" <matthew.l.weber3@xxxxxxxxxx>, "Whitehead (US), Joshua C" <joshua.c.whitehead@xxxxxxxxxx>
  • Delivery-date: Thu, 26 Jun 2025 03:51:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AdvmTXSjjVtbwyT/QSCLI/dKN0kF9Q==
  • Thread-topic: Discussion on the delayed start of major frame with ARINC653 scheduler

We are observing a slight delay in the start of each major frame with the
current implementation of the ARINC653 scheduler, which breaks the
determinism of the periodic execution of domains.

This seems to result from the logic in a653sched_do_schedule() where the
variable "next_major_frame" is calculated based on the current timestamp
"now".

static void cf_check
a653sched_do_schedule(
<snip>
    else if ( now >= sched_priv->next_major_frame )
    {
        /* time to enter a new major frame
         * the first time this function is called, this will be true */
        /* start with the first domain in the schedule */
        sched_priv->sched_index = 0;
        sched_priv->next_major_frame = now + sched_priv->major_frame;
        sched_priv->next_switch_time = now + sched_priv->schedule[0].runtime;
    }

Therefore, the inherent delta between "now" and the previous
"next_major_frame" is added to the start of the next major frame, so the
error accumulates from one major frame to the next.

I think the issue can be fixed with the following change, which uses the
previous "next_major_frame" as the base of the calculation instead.

diff --git a/xen/common/sched/arinc653.c b/xen/common/sched/arinc653.c
index 930361fa5c..15affad3a3 100644
--- a/xen/common/sched/arinc653.c
+++ b/xen/common/sched/arinc653.c
@@ -534,8 +534,11 @@ a653sched_do_schedule(
          * the first time this function is called, this will be true */
         /* start with the first domain in the schedule */
         sched_priv->sched_index = 0;
-        sched_priv->next_major_frame = now + sched_priv->major_frame;
-        sched_priv->next_switch_time = now + sched_priv->schedule[0].runtime;
+
+        do {
+            sched_priv->next_switch_time = sched_priv->next_major_frame + sched_priv->schedule[0].runtime;
+            sched_priv->next_major_frame += sched_priv->major_frame;
+        } while ((now >= sched_priv->next_major_frame) || (now >= sched_priv->next_switch_time));
     }
     else
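
The intent of the do/while loop is to keep the major frame boundaries on the
original time base and, if do_schedule() happens to run more than one major
frame late, simply skip the stale frames. As a sanity check of that catch-up
behavior, here is a small stand-alone sketch (again not Xen code; the 45 msec
delay is a made-up value larger than one major frame):

#include <stdio.h>
#include <stdint.h>

#define MAJOR_FRAME  20000000ULL   /* 20 msec major frame */
#define RUNTIME      10000000ULL   /* 10 msec runtime of the first slot */

int main(void)
{
    uint64_t next_major_frame = 0;
    uint64_t next_switch_time = 0;
    uint64_t now = 45000000ULL;    /* pretend do_schedule() runs 45 msec late */

    /* proposed logic: advance whole major frames until both deadlines lie in
     * the future, so the frame grid itself never shifts */
    do {
        next_switch_time = next_major_frame + RUNTIME;
        next_major_frame += MAJOR_FRAME;
    } while ( (now >= next_major_frame) || (now >= next_switch_time) );

    /* prints next_major_frame = 60000000 ns, next_switch_time = 50000000 ns,
     * both still on the boundaries implied by the original schedule */
    printf("next_major_frame = %llu ns, next_switch_time = %llu ns\n",
           (unsigned long long)next_major_frame,
           (unsigned long long)next_switch_time);
    return 0;
}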

Can I get your advice on this subject?

Should you have any questions about the description, please let me know.

Here are the details to reproduce the issue on QEMUARM64.

[Xen version]
- 4.19 (43aeacff8695850ee26ee038159b1f885e69fdf)

[ARINC653 pool configuration]
- name="Pool-arinc"
- sched="arinc653"
- cpus=["3"]

[Dom1 configuration]
- name = "dom1"
- kernel = "/etc/xen/dom1/Image"
- ramdisk = "/etc/xen/dom1/guest.cpio.gz"
- extra = "root=/dev/loop0 rw nohlt"
- memory = 256
- vcpus = 1
- pool = "Pool-arinc"

[Major frame configuration]
$ a653_sched -p Pool-arinc dom1:10 :10   # 20 msec major frame (dom1 10 msec : idle 10 msec)

[Collecting xentrace dump]
$ xentrace -D -T 5 -e 0x2f000 /tmp/xentrace.bin

Parsed xentrace output shows that dom1's runstate change from 'runnable' to
'running', which marks the start of a major frame, is shifted slightly later
every period.
Below are the first 21 such traces after dom1 started running. With the given
major frame of 20 msec, the 21st major frame should have started at
0.414553536 sec (0.014553536 + 20 msec * 20).
However, it started at 0.418066096 sec, a shift of about 3.5 msec over 20
periods (roughly 0.18 msec per period), which will eventually grow long
enough to wrap around a whole major frame (roughly after 120 periods). A
quick cross-check of this estimate follows the trace below.
 
0.014553536 ---x d?v? runstate_change d1v0 runnable->running
0.034629712 ---x d?v? runstate_change d1v0 runnable->running
0.054771216 ---x d?v? runstate_change d1v0 runnable->running
0.075080608 -|-x d?v? runstate_change d1v0 runnable->running
0.095236544 ---x d?v? runstate_change d1v0 runnable->running
0.115390144 ---x d?v? runstate_change d1v0 runnable->running
0.135499040 ---x d?v? runstate_change d1v0 runnable->running
0.155614784 ---x d?v? runstate_change d1v0 runnable->running
0.175833744 ---x d?v? runstate_change d1v0 runnable->running
0.195887488 ---x d?v? runstate_change d1v0 runnable->running
0.216028656 ---x d?v? runstate_change d1v0 runnable->running
0.236182032 ---x d?v? runstate_change d1v0 runnable->running
0.256302368 ---x d?v? runstate_change d1v0 runnable->running
0.276457472 ---x d?v? runstate_change d1v0 runnable->running
0.296649296 ---x d?v? runstate_change d1v0 runnable->running
0.316753856 ---x d?v? runstate_change d1v0 runnable->running
0.336909120 ---x d?v? runstate_change d1v0 runnable->running
0.357329936 ---x d?v? runstate_change d1v0 runnable->running
0.377691744 |||x d?v? runstate_change d1v0 runnable->running
0.397747008 |||x d?v? runstate_change d1v0 runnable->running
0.418066096 -||x d?v? runstate_change d1v0 runnable->running
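
As a rough cross-check of the drift rate and of the "roughly 120 periods"
estimate, using only the first and the 21st timestamps above (a
back-of-the-envelope calculation, nothing more):

#include <stdio.h>

int main(void)
{
    double first = 0.014553536;   /* first observed frame start (sec) */
    double last  = 0.418066096;   /* 21st observed frame start (sec) */
    double major_frame = 0.020;   /* 20 msec */
    int periods = 20;             /* frames between the two samples */

    double drift_total = last - (first + periods * major_frame);
    double drift_per_period = drift_total / periods;

    printf("total drift      : %.6f sec\n", drift_total);       /* ~0.003513 */
    printf("drift per period : %.6f sec\n", drift_per_period);  /* ~0.000176 */
    printf("periods to wrap  : %.0f\n", major_frame / drift_per_period); /* ~114 */
    return 0;
}

The result (about 114 periods) is consistent with the rough estimate above.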

However, with the suggested change applied, the arinc653 scheduler behaves
deterministically: each major frame starts approximately 20 msec after the
previous one, with no accumulating drift.
 
0.022110320 ---x d?v? runstate_change d1v0 runnable->running
0.041985952 ---x d?v? runstate_change d1v0 runnable->running
0.062345824 ---x d?v? runstate_change d1v0 runnable->running
0.082145808 ---x d?v? runstate_change d1v0 runnable->running
0.101957360 ---x d?v? runstate_change d1v0 runnable->running
0.122223776 ---x d?v? runstate_change d1v0 runnable->running
0.142334352 ---x d?v? runstate_change d1v0 runnable->running
0.162126256 ---x d?v? runstate_change d1v0 runnable->running
0.182261984 ---x d?v? runstate_change d1v0 runnable->running
0.202001840 |--x d?v? runstate_change d1v0 runnable->running
0.222070800 ---x d?v? runstate_change d1v0 runnable->running
0.242137680 ---x d?v? runstate_change d1v0 runnable->running
0.262313040 ---x d?v? runstate_change d1v0 runnable->running
0.282178128 ---x d?v? runstate_change d1v0 runnable->running
0.302071328 ---x d?v? runstate_change d1v0 runnable->running
0.321969216 ---x d?v? runstate_change d1v0 runnable->running
0.341958464 ---x d?v? runstate_change d1v0 runnable->running
0.362147136 ---x d?v? runstate_change d1v0 runnable->running
0.382085296 ---x d?v? runstate_change d1v0 runnable->running
0.402076560 ---x d?v? runstate_change d1v0 runnable->running
0.421985456 ---x d?v? runstate_change d1v0 runnable->running

Thanks,
Anderson



 

