
Re: [Xen-users] PCI Passthrough, Radeon 7950 and Windows 7 64-bit



Well, then, as I said before: I think xm is a lot more stable right now
than xl, and I don't really see any benefit to using xl right now.

Actually, I tried using xl instead of xm last week but ran into
issues with creating vifs when using NAT. Interesting to see that
there are also issues with VGA passthrough.
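
For reference, the kind of vif line I mean looks roughly like this in
an xm config (a sketch with placeholder addresses; vif-nat is the
stock NAT script shipped under /etc/xen/scripts):

vif = [ 'ip=10.0.0.2, mac=00:16:3e:00:00:01, script=vif-nat' ]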

Are there - right now - any benefits to using xl instead of xm? I
thought that it's easier to run stubdoms with xl, but after reading
that stubdoms don't bring any significant performance increase
when using PVOPS HVMs, I gave up on trying to get stubdoms running.
So it would be nice to hear whether migrating to xl is actually
something to look into in the (near) future.
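
From what I've read, at least the day-to-day commands map almost one
to one, so trying xl alongside xm is cheap (a crib sheet, assuming a
domU config at /etc/xen/windomu.cfg):

xm create /etc/xen/windomu.cfg   ->  xl create /etc/xen/windomu.cfg
xm list                          ->  xl list
xm destroy WINDOMU               ->  xl destroy WINDOMU
xm dmesg                         ->  xl dmesg
xm pci-attach WINDOMU 01:00.0    ->  xl pci-attach WINDOMU 01:00.0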

2012/6/27 Casey DeLorme <cdelorme@xxxxxxxxx>:
> I am using the xl toolstack with Xen 4.2. That's probably the difference;
> I tried the pci flags (both the inline and stand-alone versions of them)
> without any changes.
>
> On Tue, Jun 26, 2012 at 7:19 AM, Matthias
> <matthias.kannenberg@xxxxxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> did some testing in my lunch break:
>>
>> # dmesg -c && xm dmesg -c
>> # xm destroy WINDOMU
>> # dmesg
>> [257342.161103] xen1: port 1(work) entered disabled state
>> [257342.162777] xen1: port 1(work) entered disabled state
>> [257345.003372] irq 18: nobody cared (try booting with the "irqpoll" option)
>> [257345.004847] Pid: 11048, comm: qemu-dm Not tainted 3.4.2-xen #2
>> [257345.006208] Call Trace:
>> [257345.006208] [<ffffffff800085d5>] dump_trace+0x85/0x1c0
>> [257345.006208] [<ffffffff8088c036>] dump_stack+0x69/0x6f
>> [257345.006208] [<ffffffff800c9936>] __report_bad_irq+0x36/0xe0
>> [257345.006208] [<ffffffff800c9c8b>] note_interrupt+0x1fb/0x240
>> [257345.006208] [<ffffffff800c72c4>] handle_irq_event_percpu+0x94/0x1d0
>> [257345.006208] [<ffffffff800c7464>] handle_irq_event+0x64/0x90
>> [257345.006208] [<ffffffff800ca7b4>] handle_fasteoi_irq+0x64/0x120
>> [257345.006208] [<ffffffff80008468>] handle_irq+0x18/0x30
>> [257345.006208] [<ffffffff805c1cc4>] evtchn_do_upcall+0x1c4/0x2e0
>> [257345.006208] [<ffffffff808a308e>] do_hypervisor_callback+0x1e/0x30
>> [257345.006208] [<ffffffff8014802a>] fget_light+0x3a/0xc0
>> [257345.006208] [<ffffffff80159c65>] do_select+0x315/0x660
>> [257345.006208] [<ffffffff8015a15a>] core_sys_select+0x1aa/0x2e0
>> [257345.006208] [<ffffffff8015a347>] sys_select+0xb7/0x110
>> [257345.006208] [<ffffffff808a29bb>] system_call_fastpath+0x1a/0x1f
>> [257345.006208] [<00007f1a6d0081d3>] 0x7f1a6d0081d2
>> [257345.006208] handlers:
>> [257345.006208] [<ffffffff80658a80>] usb_hcd_irq
>> [257345.006208] [<ffffffff80658a80>] usb_hcd_irq
>> [257345.006208] [<ffffffff80658a80>] usb_hcd_irq
>> [257345.006208] Disabling IRQ #18
>> # xm dmesg
>> <nothing here>
>> # xm create WINDOMU
>> booted without problems, and I played some Diablo 3 for testing
>> purposes: no performance change, everything normal.
>>
>> Then I analysed the kernel output in dmesg from the kill and found
>> out that IRQ 18 is in fact not my graphics card, as I had always
>> thought, but my USB host controller, which I have forwarded to the
>> domU as well (a quick way to double-check IRQ ownership is sketched
>> below the config excerpt). So apparently there is nothing VGA-related
>> in the dmesg. Then I thought: okay, if the dom0 in fact doesn't care
>> about the VGA state, it might actually be the domU, and the only
>> thing I can guess is responsible for that is this in my domU config:
>>
>> #######################
>> #   PCI Power Management:
>> #
>> #   If it's set, the guest OS will be able to program D0-D3hot states of
>> #   the PCI device for the purpose of low power consumption.
>> #
>> pci_power_mgmt=1
>> #######################
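>>
>> As an aside, a quick generic way to double-check which device owns an
>> IRQ in dom0 (not necessarily how I found it, but it gives the same
>> answer; output trimmed):
>>
>> # grep "^ *18:" /proc/interrupts
>> # lspci -v | grep -B 8 "IRQ 18"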
>>
>> My new theory is that this allows the domU (and therefore my Windows)
>> to reset the VGA device on boot, so I don't have the problems you have.
>>
>> The question is: are you using this option in your domU, or might
>> that be the difference between our configs?
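>>
>> For comparing notes: the card's current power state is visible from
>> dom0 with lspci (assuming the GPU sits at 01:00.0; adjust the address
>> to match yours):
>>
>> # lspci -vv -s 01:00.0 | grep -A2 "Power Management"
>>
>> The "Status: D0 ..." line at the end of the output is the current
>> D-state.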
>>
>>
>>
>>
>> 2012/6/26 Radoslaw Szkodzinski <astralstorm@xxxxxxxxx>:
>> > On Tue, Jun 26, 2012 at 2:51 AM, Casey DeLorme <cdelorme@xxxxxxxxx>
>> > wrote:
>> >> I agree with you that Xen has an awareness, but what I read suggested
>> >> that
>> >> the DomU is supposed to be responsible for the reset.
>> >
>> > Quite silly; many OSes and drivers don't care about device shutdown on
>> > "poweroff". Why should they? The power will usually be off very soon
>> > anyway. I think currently only Linux cares enough, due to the need to
>> > support kexec.
>> >
>> > Suspend to RAM is a different matter. Perhaps that avenue would be
>> > useful to explore.
>> > (Attempting to S2R the VM instead of shutting it down...)
>> >
>> > Still, it'd be nice for Xen to force-reset the devices (FLR, then a
>> > D0->D3hot->D0 power-state cycle, then finally a PCI bus reset as a
>> > last resort) when the VM stops using them.
>> > Better safe than sorry.
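>> >
>> > In the meantime, something along these lines approximates that by hand
>> > from dom0 (a sketch; assumes the device sits at 0000:01:00.0, that the
>> > kernel exposes the sysfs reset node, and that the device supports FLR
>> > or another reset method the kernel can fall back to):
>> >
>> > # echo 1 > /sys/bus/pci/devices/0000:01:00.0/reset   (function reset)
>> > # setpci -s 01:00.0 CAP_PM+4.b=3                     (D0 -> D3hot)
>> > # setpci -s 01:00.0 CAP_PM+4.b=0                     (D3hot -> D0)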
>> >
>> >> In any event, please do post your results. If you don't have the same
>> >> performance degradation and you can help identify where our
>> >> configurations differ, it could help fix the problem, which would be
>> >> awesome.
>> >
>> > --
>> > Radosław Szkodziński
>
>
