
Re: [Xen-devel] X11 problem with dom0 pvops kernel on Xen 4.0.1



On Tue, Sep 21, 2010 at 08:22:19AM -0400, John McDermott (U.S. Navy Employee) 
wrote:
> 
> On Sep 20, 2010, at 5:24 PM, Konrad Rzeszutek Wilk wrote:
> 
> >> The kernels boot fine, including video, as dom0. In fact, running as dom0, 
> >> they forward X11 no problem. When I install an F13 image as a pvops domU, 
> >> the domU cannot start X. I agree that sounds like an attempt to 
> >> pass-through is somehow screwing things up, but I have not configured the 
> >> guest for pass through.
> > 
> > Ok, so your problems have nothing to do with DRM/KMS nor X if you see X 
> > working
> > as Dom0 on your machine. You don't need those nopat, nomodeset options - 
> > those are only
> > needed if you can't get X working under Dom0 and we need to troubleshoot 
> > what is happening.
> 
> @Konrad, thanks, I will remove those options.
> 
> > 
> >> 
> >> Is there a known-good F13 image I could try to install, in case the one I 
> >> am trying is somehow wrong for this?
> > 
> > Well, the normal F13 works for me.. but maybe I am installing it 
> > differently than you.
> > 
> > What does 'lspci -vvv' and 'dmesg' show you under your DomU? And also can
> > you attach the 'Xorg.0.log' from the DomU?
> 
> @Konrad, I posted a tarball earlier that has that stuff in it. Perhaps our 
> uber-paranoid firewall stripped it. I am resending it here:
> 

Got it.

A couple of things that glare at me:
 (XEN) CPU0: VMX disabled by BIOS.
 (XEN) VMX: failed to initialise.
 (XEN) Intel machine check reporting enabled
 (XEN) I/O virtualisation disabled

So, no VMX == no HVM, and no VT-d enabled either. That means when you
installed F13 you were doing a PV install (which is OK). If you want to
run HVM guests you will need to enable the VMX ("Virtualization") option
in the BIOS.
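
Once you have flipped that BIOS option, a quick sanity check from the Dom0
shell would look something like this (just a sketch; the exact message text
can vary between Xen versions):

 # what the hypervisor detected at boot
 xm dmesg | grep -iE 'vmx|virtualisation'
 # with VMX on you should see "VMX enabled" instead of "VMX disabled by BIOS",
 # and an "I/O virtualisation enabled" line if VT-d got switched on as well

 # capabilities the toolstack reports; hvm-3.0-x86_32/x86_64 entries mean
 # HVM guests will work
 xm info | grep xen_caps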

Next glaring thing: your Dom0 Xorg log shows this:
 [  1836.298] drmOpenByBusid: drmOpenMinor returns 7
 [  1836.298] drmOpenByBusid: drmGetBusid reports pci:0000:02:00.0
 [  1836.298] (EE) [drm] failed to open device
 [  1836.298] (WW) Falling back to old probe method for fbde

The Xorg nouveau driver tried to use the DRM device but failed. This is
because the in-kernel nouveau DRM API is out of sync with the Xorg driver.
The in-kernel driver shows:
pci 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[drm] nouveau 0000:02:00.0: Detected an NV50 generation card (0x092980a2)
[drm] Initialized nouveau 0.0.15 20090420 for 0000:02:00.0 on minor 0

while the Xorg driver shows:
[  1836.294] (II) Loading /usr/lib64/xorg/modules/drivers/nouveau_drv.so
[  1836.294] (II) Module nouveau: vendor="X.Org Foundation"
[  1836.294]    compiled for 1.8.2, module version = 0.0.16
[  1836.294]    Module class: X.Org Video Driver
[  1836.294]    ABI class: X.Org Video Driver, version 7.0

See the 0.0.15 and 0.0.16? They need to be in sync - and if you just change
the version number in either file to see how that goes, it won't work (I tried).
You can get the DRM backport kernel drivers I did, which I believe are at 0.0.16,
but that tree is a bit crusty (look up the details on the PVOPS DRM wiki).
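
If you want to double-check which versions you have on a given box, something
like this is enough (a sketch, assuming the stock Fedora log locations):

 # kernel side: the version the nouveau DRM module registered with
 dmesg | grep 'Initialized nouveau'
 # Xorg side: the version the userspace driver was built as
 grep -A 2 'Module nouveau' /var/log/Xorg.0.log
 # the two "0.0.x" numbers have to match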

Either way, this is not a big problem - you are running X under Dom0, so
things are peachy.

The Xorg log from the guest side shows this:
 [    49.161] (EE) FBDEV(0): FBIOBLANK: Invalid argument

which is harmless. It is basically trying to see whether it can suspend the
screen, similarly to DPMS. It does not really matter anyway, as the screen it
has is a VNC window, so...

The way you start your X by doing 'startx' is not kosher anymore. Kernel
ModeSetting has turned things on their head, so you need to be more careful.
The proper way is to do 'exec /sbin/init 5' or 'telinit 5' in the VNC window
for the guest. That should work.
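
If you want the guest to go straight to runlevel 5 on boot instead of typing
that every time, the usual way on F13 is the initdefault entry (a sketch;
double-check your image's /etc/inittab before editing it):

 # one-off switch from the guest's VNC console
 telinit 5

 # make runlevel 5 the default for the next boot
 sed -i 's/^id:[0-9]:initdefault:/id:5:initdefault:/' /etc/inittab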


The "xenpro-guest-alpha.lspci.out" you attached is definitly not from your 
guest.
Can you re-run it on the guest?
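
Something along these lines from inside the domU should regenerate the files
(the output file names here are just examples; use whatever matches your
earlier tarball):

 lspci -vvv > xenpro-guest-alpha.lspci.out
 dmesg > xenpro-guest-alpha.dmesg.out
 cp /var/log/Xorg.0.log xenpro-guest-alpha.Xorg.0.log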

I am curious to see screenshots of your VNC window and of the xterm from your
guest.
> 
> 
> ----
> What is the formal meaning of the one-line program
> #include "/dev/tty"
> 
> J.P. McDermott                                building 12
> Code 5542                                     mcdermott@xxxxxxxxxxxxxxxx
> Naval Research Laboratory     voice: +1 202.404.8301
> Washington, DC 20375, US      fax:   +1 202.404.7942


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

