
Re: [Xen-users] Call schedule set on arinc653 scheduler?



On Thu, Jun 4, 2015 at 3:14 PM, Nathan Studer <nate.studer@xxxxxxxxx> wrote:
On Wed, Jun 3, 2015 at 10:34 AM, Mr Idris <mr@xxxxxxxxxxxx> wrote:
> On Wed, Jun 3, 2015 at 4:28 PM, Mr Idris <mr@xxxxxxxxxxxx> wrote:
>>
>> Hi all,
>>
>> I have managed to call arinc653_scheduler_set.c without error. When
>> I run it, the output looks like this:
>>
>> not error
>> not error
>> hypercall bounce and schedule set finish *
>> true
>>
>> * this message is printed because I added it after the call to xc_sched_arinc653_schedule_set().
>>
>>
>> but when I try 'xl list -v', the VM is still not running
>
>
> I'm sorry, I accidentally pressed send before I had finished.
>
> To continue: when I try 'xl list -v', the VM is still not running, like this:
> Name                                ID   Mem  VCPUs  State   Time(s)  UUID                                  Reason-Code  Security Label
> Domain-0                             0  6771      1  r-----     10.0  00000000-0000-0000-0000-000000000000  -            -
> Debian                               1   512      1  ------      0.0  938b9c5b-8d9d-402a-9be0-0e0cc4cf67dc  -            -
>
>
> Something weird: after the small program runs, the computer becomes really
> slow. Is this related to the runtime?

That's how you know it's working! The arinc653 scheduler is neither
work-conserving nor preemptive, so you should expect some performance
degradation. It probably should not be that bad, though, so I think it is
a symptom of the problem below.

> Does anyone have any idea what change I need to make to get the scheduler
> to run the VM? I appreciate the help.

From the attached program, which is similar to your previous program:

sched.sched_entries[0].vcpu_id = 0;
sched.sched_entries[0].runtime = 30;
sched.major_frame += sched.sched_entries[0].runtime;

The runtime field is in units of nanoseconds. 30 nanoseconds is
orders of magnitude shorter than the context-switch time. I'm not
sure what the scheduler would do with a runtime this small, but it
would not be pretty. For most configurations, the slice runtimes
should be in the millisecond range, so multiply your runtimes by
1000000 and see if that fixes your issue.

sched.sched_entries[*].runtime = 10000000;  /* 10 ms */

    Nate

After I changed the runtime value to 1000000 or greater and ran the program again, the machine suddenly hung with a panic on CPU 0 and this error message:

(XEN) Assertion 'local_irq_is_enabled()' failed at smp.c:55
(XEN) WARNING WARNING WARNING: Avoiding recursive gdb.
(XEN) ----[ Xen-4.4.1 x86_64 debug=y Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d080129707>] on_selected_cpus+0x7/0xd6
(XEN) RFLAGS: 0000000000010046   CONTEXT: hypervisor
(XEN) rax: 0000000000000046   rbx: ffff82d08013a9c8   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: ffff82d08013a9c8   rdi: ffff82d0802d7c18
(XEN) rbp: ffff82d0802d7c58   rsp: ffff82d0802d7c10   r8:  0000000000000004
(XEN) r9:  000000000000003f   r10: 0000000000000000   r11: 0000000000000246
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: ffff82d0802d7d38
(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 00000000000426f0
(XEN) cr3: 00000000df888000   cr2: 0000000000989740
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802d7c10:
(XEN)    ffff82d08012984e 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff82d0802735f0 0000000000000001 ffff82d0802f9200
(XEN)    0000000000989740 ffff82d0802d7cc8 ffff82d08013af14 0000000000200e8c
(XEN)    0000000000000000 00000002030bc067 000000000000000e 0000000000000092
(XEN)    0000000000989740 ffff82d0802d7ce8 000000000000000e 0000000000000000
(XEN)    0000000000989740 ffff8302154ff000 0000000000000000 ffff82d0802d7ce8
(XEN)    ffff82d0801892b7 ffff8302154ff000 ffff82d0802d7d38 ffff82d0802d7d28
(XEN)    ffff82d080190631 0000000000000086 ffff8300dfb98000 0000000000989680
(XEN)    0000003222af7456 ffff82d0802d7e68 0000000000000000 00007d2f7fd282a7
(XEN)    ffff82d08022a33d 0000000000000000 ffff82d0802d7e68 0000003222af7456
(XEN)    0000000000989680 ffff82d0802d7e20 ffff8302154fd010 0000000000000246
(XEN)    0000003222b7f318 ffff8300df6fe060 0000000000000002 0000000000000086
(XEN)    0000003226424461 0000000000000005 ffff82d080274620 0000000000000005
(XEN)    0000000e00000000 ffff82d0801254ae 000000000000e008 0000000000010002
(XEN)    ffff82d0802d7de0 000000000000e010 0000000000000003 00ff82d080319728
(XEN)    80000000802fa2a0 ffff8300dfb98000 0000003222af7456 ffff82d0803196e0
(XEN)    ffff82d0803196e8 0000000000000000 ffff82d0802d7eb0 ffff82d08012616c
(XEN)    ffff82d0802d7e60 ffff82d080319700 00000000002d7e60 ffff82d0803196e0
(XEN)    ffff8302154d3f70 ffff82d080319880 ffff82d0802d7eb0 ffff82d08012c7b6
(XEN)    ffff82d0802d0000 0000000000000246 0000003222aebd61 ffff82d0802eff00
(XEN) Xen call trace:
(XEN)    [<ffff82d080129707>] on_selected_cpus+0x7/0xd6
(XEN)    [<ffff82d08013af14>] __trap_to_gdb+0x130/0x9fc
(XEN)    [<ffff82d0801892b7>] debugger_trap_fatal+0x15/0x2c
(XEN)    [<ffff82d080190631>] do_page_fault+0x456/0x536
(XEN)    [<ffff82d08022a33d>] handle_exception_saved+0x2e/0x6c
(XEN)    [<ffff82d0801254ae>] a653sched_do_schedule+0x10a/0x1de
(XEN)    [<ffff82d08012616c>] schedule+0x116/0x5df
(XEN)    [<ffff82d080129359>] __do_softirq+0x81/0x8c
(XEN)    [<ffff82d0801293b2>] do_softirq+0x13/0x15
(XEN)    [<ffff82d08015f355>] idle_loop+0x64/0x74
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'local_irq_is_enabled()' failed at smp.c:55
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
(XEN) WARNING WARNING WARNING: Avoiding recursive gdb.
This is the output from 'xl info':

host                   : boaman
release                : 3.2.0-4-amd64
version                : #1 SMP Debian 3.2.65-1+deb7u2
machine                : x86_64
nr_cpus                : 1
max_cpu_id             : 0
nr_nodes               : 1
cores_per_socket       : 1
threads_per_core       : 1
cpu_mhz                : 2826
hw_caps                : bfebfbff:20100800:00000000:00000900:0408e3fd:00000000:00000001:00000000
virt_caps              : hvm
total_memory           : 8123
free_memory            : 745
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 4
xen_extra              : .1
xen_version            : 4.4.1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : arinc653
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder loglvl=all guest_loglvl=all com1=115200,8n1,0x3f8,5 console=com1,vga gdb=com1 kgdboc=com1,115200 sched=arinc653 maxcpus=1
cc_compiler            : gcc (Debian 4.7.2-5) 4.7.2
cc_compile_by          : manam
cc_compile_domain      :
cc_compile_date        : Wed Jun 3 11:55:42 CEST 2015
xend_config_format     : 4

Regards,
Idris
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


