
Re: [Xen-users] Call schedule set on arinc653 scheduler?



On Tue, Jun 9, 2015 at 1:48 PM, Nathan Studer <nate.studer@xxxxxxxxx> wrote:
On Thu, Jun 4, 2015 at 11:08 AM, Mr Idris <mr@xxxxxxxxxxxx> wrote:
> On Thu, Jun 4, 2015 at 3:14 PM, Nathan Studer <nate.studer@xxxxxxxxx> wrote:
>>
>> On Wed, Jun 3, 2015 at 10:34 AM, Mr Idris <mr@xxxxxxxxxxxx> wrote:
>> > On Wed, Jun 3, 2015 at 4:28 PM, Mr Idris <mr@xxxxxxxxxxxx> wrote:
>> >>
>> >> Hi all,
>> >>
>> >> I have managed to call arinc653_scheduler_set.c without error. The
>> >> output when I run it looks like this:
>> >>
>> >> not error
>> >> not error
>> >> hypercall bounce and schedule set finish *
>> >> true
>> >>
>> >> * I print this message after xc_sched_arinc653_schedule_set() returns.
>> >>
>> >>
>> >> But when I try 'xl list -v', the VM is still not running.
>> >
>> >
>> > I'm sorry, I accidentally pressed send before I had finished.
>> >
>> > To continue: when I try 'xl list -v', the VM is still not running.
>> > The output looks like this:
>> > Name          ID   Mem  VCPUs  State  Time(s)  UUID                                  Reason-Code  Security Label
>> > Domain-0       0  6771      1  r-----    10.0  00000000-0000-0000-0000-000000000000            -               -
>> > Debian         1   512      1  ------     0.0  938b9c5b-8d9d-402a-9be0-0e0cc4cf67dc            -               -
>> >
>> >
>> > Something weird: after the small program runs, the machine becomes
>> > really slow. Is that related to the runtime values?
>>
>> That's how you know it's working! The arinc653 scheduler is neither
>> work-conserving nor pre-emptive, so you should expect some performance
>> degradation. It probably should not be that bad, though, so I think it
>> is a symptom of the problem below.
>>
>> > Does anyone have any idea what change I need to make to get the
>> > scheduler to run the VM? I appreciate the help.
>>
>> From the attached program, which is similar to your previous program:
>>
>> sched.sched_entries[0].vcpu_id = 0;
>> sched.sched_entries[0].runtime = 30;
>> sched.major_frame += sched.sched_entries[0].runtime;
>>
>> The runtime field is in units of nanoseconds. 30 nanoseconds is
>> orders of magnitude shorter than the context switch time. I'm not
>> sure what the scheduler would do with a runtime this small, but it
>> would not be pretty. For most configurations, the slice runtimes
>> should be in the milliseconds range, so multiply your runtimes by
>> 1000000 and see if that fixes your issue.
>>
>> sched.sched_entries[*].runtime = 10000000;  /* 10 ms */
>>
>>     Nate
>>
>
> After I changed the runtime value to 1000000 or greater and ran it again,
> the machine suddenly hung with a panic on CPU 0 and this error message:

What are the exact runtimes you are using for Dom-0 and the VM? The
default timeslice is 10ms (10000000), so that's usually a good value
to use for each.

>
> (XEN) Assertion 'local_irq_is_enabled()' failed at smp.c:55
> (XEN) WARNING WARNING WARNING: Avoiding recursive gdb.
> (XEN) ----[ Xen-4.4.1 x86_64 debug=y Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82d080129707>] on_selected_cpus+0x7/0xd6
> (XEN) RFLAGS: 0000000000010046   CONTEXT: hypervisor
> (XEN) rax: 0000000000000046   rbx: ffff82d08013a9c8   rcx: 0000000000000000
> (XEN) rdx: 0000000000000000   rsi: ffff82d08013a9c8   rdi: ffff82d0802d7c18
> (XEN) rbp: ffff82d0802d7c58   rsp: ffff82d0802d7c10   r8:  0000000000000004
> (XEN) r9:  000000000000003f   r10: 0000000000000000   r11: 0000000000000246
> (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: ffff82d0802d7d38
> (XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 00000000000426f0
> (XEN) cr3: 00000000df888000   cr2: 0000000000989740
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff82d0802d7c10:
> (XEN)    ffff82d08012984e 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 ffff82d0802735f0 0000000000000001 ffff82d0802f9200
> (XEN)    0000000000989740 ffff82d0802d7cc8 ffff82d08013af14 0000000000200e8c
> (XEN)    0000000000000000 00000002030bc067 000000000000000e 0000000000000092
> (XEN)    0000000000989740 ffff82d0802d7ce8 000000000000000e 0000000000000000
> (XEN)    0000000000989740 ffff8302154ff000 0000000000000000 ffff82d0802d7ce8
> (XEN)    ffff82d0801892b7 ffff8302154ff000 ffff82d0802d7d38 ffff82d0802d7d28
> (XEN)    ffff82d080190631 0000000000000086 ffff8300dfb98000 0000000000989680
> (XEN)    0000003222af7456 ffff82d0802d7e68 0000000000000000 00007d2f7fd282a7
> (XEN)    ffff82d08022a33d 0000000000000000 ffff82d0802d7e68 0000003222af7456
> (XEN)    0000000000989680 ffff82d0802d7e20 ffff8302154fd010 0000000000000246
> (XEN)    0000003222b7f318 ffff8300df6fe060 0000000000000002 0000000000000086
> (XEN)    0000003226424461 0000000000000005 ffff82d080274620 0000000000000005
> (XEN)    0000000e00000000 ffff82d0801254ae 000000000000e008 0000000000010002
> (XEN)    ffff82d0802d7de0 000000000000e010 0000000000000003 00ff82d080319728
> (XEN)    80000000802fa2a0 ffff8300dfb98000 0000003222af7456 ffff82d0803196e0
> (XEN)    ffff82d0803196e8 0000000000000000 ffff82d0802d7eb0 ffff82d08012616c
> (XEN)    ffff82d0802d7e60 ffff82d080319700 00000000002d7e60 ffff82d0803196e0
> (XEN)    ffff8302154d3f70 ffff82d080319880 ffff82d0802d7eb0 ffff82d08012c7b6
> (XEN)    ffff82d0802d0000 0000000000000246 0000003222aebd61 ffff82d0802eff00
> (XEN) Xen call trace:
> (XEN)    [<ffff82d080129707>] on_selected_cpus+0x7/0xd6
> (XEN)    [<ffff82d08013af14>] __trap_to_gdb+0x130/0x9fc
> (XEN)    [<ffff82d0801892b7>] debugger_trap_fatal+0x15/0x2c
> (XEN)    [<ffff82d080190631>] do_page_fault+0x456/0x536
> (XEN)    [<ffff82d08022a33d>] handle_exception_saved+0x2e/0x6c
> (XEN)    [<ffff82d0801254ae>] a653sched_do_schedule+0x10a/0x1de
> (XEN)    [<ffff82d08012616c>] schedule+0x116/0x5df
> (XEN)    [<ffff82d080129359>] __do_softirq+0x81/0x8c
> (XEN)    [<ffff82d0801293b2>] do_softirq+0x13/0x15
> (XEN)    [<ffff82d08015f355>] idle_loop+0x64/0x74
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Assertion 'local_irq_is_enabled()' failed at smp.c:55
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> (XEN) WARNING WARNING WARNING: Avoiding recursive gdb.
>
> This is the output from 'xl info':
>
> host                 : boaman
> release              : 3.2.0-4-amd64
> version              : #1 SMP Debian 3.2.65-1+deb7u2
> machine              : x86_64
> nr_cpus              : 1
> max_cpu_id           : 0
> nr_nodes             : 1
> cores_per_socket     : 1
> threads_per_core     : 1
> cpu_mhz              : 2826
> hw_caps              :
> bfebfbff:20100800:00000000:00000900:0408e3fd:00000000:00000001:00000000
> virt_caps            : hvm
> total_memory         : 8123
> free_memory          : 745
> sharing_freed_memory : 0
> sharing_used_memory  : 0
> outstanding_claims   : 0
> free_cpus            : 0
> xen_major            : 4
> xen_minor            : 4
> xen_extra            : .1
> xen_version          : 4.4.1
> xen_caps             : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
> hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler        : arinc653
> xen_pagesize         : 4096
> platform_params      : virt_start=0xffff800000000000
> xen_changeset        :
> xen_commandline      : placeholder loglvl=all guest_loglvl=all
> com1=115200,8n1,0x3f8,5 console=com1,vga gdb=com1 kgdboc=com1,115200
> sched=arinc653 maxcpus=1
> cc_compiler          : gcc (Debian 4.7.2-5) 4.7.2
> cc_compile_by        : manam
> cc_compile_domain    :
> cc_compile_date      : Wed Jun 3 11:55:42 CEST 2015
> xend_config_format   : 4

Are you using the arinc653 scheduler as is? (I saw your earlier
e-mail thread about writing a scheduler based on the arinc653 one.)

    Nate

>
> Regards,
> Idris

Hi Nathan,

Thank you. I can now run the arinc653 scheduler on a fresh install of Xen.

I am using the script that I wrote.

Regards,
Idris
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

