
[Xen-users] Xen 4.5, SeaBIOS, Radeon 5770 VGA Passthrough and Windows 10... Works.



Initially, I wasn't going to pay any attention to the Windows 10 release, but 
coincidentally, about three weeks ago I started to have some really annoying 
issues with the GPLPV NIC, which crashes near constantly on my main WXP x64 VM 
since I installed a game (Path of Exile). I have to disable the NIC from 
Network Connections and re-enable it for it to work again, a disruption which 
makes online gaming a pain in the butt. To add insult to injury, the driver 
crashes seem to be predictable, since I usually get a succession of crashes 
that consistently happen after some elapsed time (usually 45 minutes). 
I recall having had a near identical issue around a year ago, but it got 
solved by upgrading the GPLPV Drivers (from GPLPV 0.11.0.372 to EJBPV 1.0.1105), 
which I'm currently using. As the webpage of the GPLPV Drivers disappeared some 
months ago and I haven't heard anything else from the author, I don't think I can 
get any support on this one, even if I wanted to run the debug Drivers to try to 
track down what the hell is making it crash like this.
Worse yet, I also got stuck on the uninstall procedure: even after the GPLPV 
Drivers are removed and there is no /GPLPV switch on the WXP line in boot.ini, 
Windows still insists on detecting unknown devices like the NIC attached to 
XenBus instead of the standard QEMU emulated devices, so I could not try the 
Realtek RTL8139 or Intel e1000 to see if they behave the same. Setting 
xen_platform_pci = 0 in the DomU config file makes WXP x64 BSOD on boot. Yet 
on a fresh Windows install with xen_platform_pci = 1, Windows sees the QEMU 
emulated devices before the GPLPV Drivers are installed, so it seems the 
installation permanently changes something that leaves the related PCI devices 
attached to XenBus, and I couldn't figure out how to revert that without doing 
a clean install.
So, since I didn't have a clear idea of how to solve this issue, I decided to 
give the W10 x64 RTM build a go...
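
For reference, the option I'm talking about is a single line in the DomU 
config file (it defaults to 1; 0 is what BSODs my existing WXP x64 install):

```
# DomU config fragment: with 0, the Xen platform PCI device is not exposed,
# so Windows should only see the standard QEMU emulated hardware.
xen_platform_pci = 0
```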


My Dom0 is still sitting on a 3- or 4-month-old Arch Linux install with Xen 4.5 
using qemu-xen and Linux Kernel 4.0.1, passing through the integrated Realtek 
Sound Card of my Supermicro X10SAT and a rather old Radeon 5770.

The first try was creating a VM with OVMF Firmware, using an LVM volume for 
storage and installing from the mounted Windows 10 ISO, without doing 
passthrough. Success on the first try, no issues. However, after I added the 
pci = line with any device (I tried Video Card + Sound Card, then merely the 
Sound Card), OVMF didn't even POST. It doesn't get to the TianoCore splash 
screen, but it doesn't seem to freeze or crash either, since in xl list it 
appears in r----- state and the time occasionally increases. Still, after 
waiting several minutes, it doesn't POST.
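
For clarity, the pci = line I mean looks like this (0000:01:00.0 is my Radeon, 
as seen in the libxl error later in this mail; the audio BDF here is just an 
example, check your own with lspci on Dom0):

```
# Devices to pass through to the DomU, by their Dom0 PCI BDF addresses
pci = [ '01:00.0', '00:1b.0' ]
```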
The second try was using SeaBIOS. I installed once without passthrough and 
again, everything worked. Passing through the Realtek and the Radeon produced 
unexpected results: the Realtek worked fine, and the Radeon was detected, but 
as soon as I installed the Drivers (either Catalyst manually or letting 
Windows Update download them), the SDL window went black, yet the Monitor 
attached to the Radeon didn't turn on. The VM didn't seem to have crashed, 
since I could still hear sound from a Youtube video I had running. After 
rebooting a few times I still got a black screen after the Windows splash 
screen, so I decided to format and try again from scratch.
The third try was like the second, but with the devices already attached from 
the very first POST, so they were detected during installation. Again, 
installing Catalyst killed the screen. It worked in Safe Mode, with the SDL 
window displaying Windows 10 instead of merely a black screen, but getting 
into Safe Mode itself was not an easy feat.
Since I was bored already, I decided to try my WXP x64 VM again. It output 
video to the SDL window instead of turning on the Monitor attached to the 
Radeon, and the Radeon itself showed a yellow exclamation mark in Device 
Manager (Code 43, if I recall correctly). At that point I figured out that I 
actually needed to restart the computer. I did so, and tried creating the 
Windows 10 VM as the very first thing. SUCCESS. The Monitor turned on, and 
everything worked nearly as intended. A curious thing is that by default, 
Windows wants to use BOTH the Radeon and the emulated Standard VGA that 
outputs to an SDL window on Dom0 as an Extended Desktop. Disabling the 
Standard VGA from Device Manager doesn't actually seem to disable it at all, 
since the Extended Desktop still works, so you also have to tell Windows to 
use just the Monitor attached to the Radeon.
The reason it worked on the fourth try was that until then I had been in the 
same Dom0 session for days, across several restarts of the WXP x64 VM. It 
seems that the Windows 10 Catalyst takes the Radeon ONLY if I boot with it 
first, not if WXP x64 grabbed it first, and vice versa also applies according 
to my testing. Basically, if during a Dom0 session I start either Windows 10 
or Windows XP x64, I have to stick with that unless I want to restart the 
entire computer.


There is also the VM reboot behavior. WXP x64 with Catalyst 12.1 is the same as 
described here:
http://www.gossamer-threads.com/lists/xen/users/348174

Summary: first boot perfect, second boot the Radeon is stuck at its max Power 
State, third boot perfect, fourth boot stuck at max Power State again, and so 
on. It means that I have to effectively reboot the VM twice.

Windows 10 with Catalyst 15.7.1 behaves slightly differently. On the very 
first boot it works with the proper Power States; EVERY reboot after that uses 
the max one. So in this regard, it means that I have to restart the computer 
to get everything working as intended, while WXP SP3 and WXP x64 needed just 
two consecutive VM reboots.


Also, since either Kernel 3.19 or 4.0, using xl create to start a VM with the 
Radeon attached produces this:

# xl create myvmwithradeon.cfg
Parsing config from myvmwithradeon.cfg
libxl: error: libxl_pci.c:1034:libxl__device_pci_reset: The kernel doesn't 
support reset from sysfs for PCI device 0000:01:00.0

The VM is created and works anyway. I suppose it should say warning instead 
of error.
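
If anyone wants to check whether their kernel actually exposes a reset method 
for the device (which is what libxl is complaining about), the sysfs attribute 
can be probed like this (a sketch; 0000:01:00.0 is my Radeon, substitute your 
own BDF):

```shell
# Probe for the sysfs reset attribute that libxl tries to use; if the
# file is missing, libxl prints the "doesn't support reset from sysfs"
# message, but the VM is still created anyway.
dev=0000:01:00.0
if [ -e "/sys/bus/pci/devices/$dev/reset" ]; then
    echo "sysfs reset available for $dev"
else
    echo "no sysfs reset for $dev"
fi
```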



So basically, the good news is that you can get VGA Passthrough working on 
Windows 10 RTM. It also solved my original issue: so far, no NIC crashes with 
the emulated Intel e1000. If I don't send a mail ranting in 7 days, you can 
assume that it is still working fine.
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

