
Re: [Xen-devel] Xen 4.2.0-rc4 bugs with GigaByte H77M-D3H + Core i7 3770


  • To: Javier Marcet <jmarcet@xxxxxxxxx>
  • From: Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx>
  • Date: Thu, 30 Aug 2012 12:33:42 -0400
  • Cc: Xen Devel Mailing list <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 30 Aug 2012 16:34:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

On Thu, Aug 30, 2012 at 12:43:29PM +0200, Javier Marcet wrote:
> Hi,
> 
> I've just upgraded a server of mine from a Core i3 2100T to an i7 3770 in
> order to do full virtualization with VT-d.
> 
> I'm using kernel 3.5.2 and Xen from git://xenbits.xen.org/xen.git @ commit
> 37d7ccdc2f50d659f1eb8ec11ee4bf8a8376926d (Fri Aug 24).
> 
> Since there are various issues, I'm going to comment on them all. I'd
> appreciate it if you could help me decide which bug reports to file, and
> where to file them.

It's easier if these are separate emails; then we can track them
step-by-step.
> 
> Upon booting under the xen virtualizer everything works fine but I cannot
> suspend the machine and I have reception problems on the DVB-T tuners

Right. The suspend (well, the resume part) is not yet working.
> installed on the system.

That sounds familiar - but without more details it's a bit unclear.
> 
> Besides that, xen can't read the cpu capabilities, or so reports virt-manager
> when creating a DomU. This results in being unable to boot any DomU due
> to ACPI errors.

Can you provide a dmesg or output of what you mean by that?
> 
> On the same kernel and machine, KVM can read the capabilities with no
> problems and guests work reliably.
> 
> On the other hand, booting without the xen virtualizer fixes the suspension
> and tuning problems but there are other issues.
> 
> I need to add the parameter intel_iommu=igfx_off to the kernel command line
> or I see half a second of these errors at the beginning of each boot:

Those... appearing where? On the Xen command line, I suppose, as the
Linux kernel should not see the Intel DMAR at all - or you have two OSes
trying to utilize it and both failing.
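For reference, a minimal sketch of where these options typically end up in a
GRUB entry for a Xen boot versus a bare-metal boot (paths and versions below
are illustrative assumptions, not taken from your config):

```shell
# Bare metal: intel_iommu=igfx_off is a Linux option, so it goes on the
# kernel line; it disables the IOMMU for the integrated graphics only.
linux /boot/vmlinuz-3.5.2 root=/dev/sda1 ro intel_iommu=igfx_off

# Under Xen: the hypervisor owns the IOMMU, so hypervisor options such as
# iommu=verbose go on the xen.gz line; an intel_iommu= option on the dom0
# kernel (module) line has no effect on the hypervisor's IOMMU handling.
multiboot /boot/xen.gz iommu=verbose
module /boot/vmlinuz-3.5.2 root=/dev/sda1 ro
```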
> 
> [    0.358278] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
> [    0.358278] DMAR:[fault reason 06] PTE Read access is not set
> [    0.358286] DRHD: handling fault status reg 2
> [    0.358288] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
> [    0.358288] DMAR:[fault reason 06] PTE Read access is not set
> [    0.358291] DMAR:[DMA Read] Request device [00:02.0] fault addr 9fac7000
> [    0.358291] DMAR:[fault reason 06] PTE Read access is not set
> [    0.358307] DRHD: handling fault status reg 3
> 
> Furthermore, later on, just after enabling the IOMMU, I get this:

How are you enabling the IOMMU? The logs you pointed to did not have any
of this in them. Can you also provide the 'xm dmesg' output, please?
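In case it helps, one way to collect everything in one go (assuming the xm
toolstack from your 4.2 build; 'xl dmesg' works as well):

```shell
# Capture the hypervisor log, the dom0 kernel log, and the PCI inventory
# into separate files to attach to the report.
xm dmesg  > xen-dmesg.log    # hypervisor console ring (or: xl dmesg)
dmesg     > dom0-dmesg.log   # dom0 kernel messages
lspci -vv > lspci.log        # PCI device details
```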

> 
> [    0.328564] DMAR: No ATSR found
> [    0.328580] IOMMU 1 0xfed91000: using Queued invalidation
> [    0.328582] IOMMU: Setting RMRR:
> [    0.328589] IOMMU: Setting identity map for device 0000:00:1d.0
> [0x9de36000 - 0x9de52fff]
> [    0.328606] IOMMU: Setting identity map for device 0000:00:1a.0
> [0x9de36000 - 0x9de52fff]
> [    0.328617] IOMMU: Setting identity map for device 0000:00:14.0
> [0x9de36000 - 0x9de52fff]
> [    0.328625] IOMMU: Prepare 0-16MiB unity mapping for LPC
> [    0.328630] IOMMU: Setting identity map for device 0000:00:1f.0
> [0x0 - 0xffffff]
> [    0.328705] PCI-DMA: Intel(R) Virtualization Technology for Directed I/O
> [    0.328714] ------------[ cut here ]------------
> [    0.328718] WARNING: at
> /home/storage/src/ubuntu-precise/drivers/pci/search.c:44
> pci_find_upstream_pcie_bridge+0x51/0x68()
> [    0.328719] Hardware name: To be filled by O.E.M.
> [    0.328720] Modules linked in:
> [    0.328722] Pid: 1, comm: swapper/0 Not tainted 3.5.0-12-i3 #12~precise1
> [    0.328723] Call Trace:
> [    0.328727]  [<ffffffff8106ab0d>] warn_slowpath_common+0x7e/0x96
> [    0.328729]  [<ffffffff8106ab3a>] warn_slowpath_null+0x15/0x17
> [    0.328731]  [<ffffffff812992d5>] pci_find_upstream_pcie_bridge+0x51/0x68
> [    0.328733]  [<ffffffff814bd02e>] intel_iommu_device_group+0x64/0xb7
> [    0.328735]  [<ffffffff814b8a2b>] ? bus_set_iommu+0x3f/0x3f
> [    0.328738]  [<ffffffff814b86f2>] iommu_device_group+0x24/0x26
> [    0.328740]  [<ffffffff814b8a40>] add_iommu_group+0x15/0x33
> [    0.328742]  [<ffffffff8137ba61>] bus_for_each_dev+0x54/0x80
> [    0.328745]  [<ffffffff81cdaf83>] ? memblock_find_dma_reserve+0x13f/0x13f
> [    0.328746]  [<ffffffff814b8a25>] bus_set_iommu+0x39/0x3f
> [    0.328749]  [<ffffffff81d0367c>] intel_iommu_init+0x1aa/0x1ce
> [    0.328751]  [<ffffffff81cdaf96>] pci_iommu_init+0x13/0x3e
> [    0.328754]  [<ffffffff81002094>] do_one_initcall+0x7a/0x132
> [    0.328756]  [<ffffffff81cd2bac>] do_basic_setup+0x96/0xb4
> [    0.328758]  [<ffffffff81cd2533>] ? obsolete_checksetup+0xab/0xab
> [    0.328759]  [<ffffffff81cd2c82>] kernel_init+0xb8/0x12e
> [    0.328762]  [<ffffffff81615b24>] kernel_thread_helper+0x4/0x10
> [    0.328764]  [<ffffffff81cd2bca>] ? do_basic_setup+0xb4/0xb4
> [    0.328766]  [<ffffffff81615b20>] ? gs_change+0x13/0x13
> [    0.328768] ---[ end trace 9bacf275b2da9216 ]---

> 
> You can see dmesg logs, lspci and dmidecode data here:
> 
> http://dl.dropbox.com/u/12579112/logs/dmesg-3.5.0-12-i3-bare.log
> http://dl.dropbox.com/u/12579112/logs/dmesg-3.5.0-12-i3-normal.log
> http://dl.dropbox.com/u/12579112/logs/dmesg-3.5.0-12-i3-xen.log
> http://dl.dropbox.com/u/12579112/logs/dmidecode.log
> http://dl.dropbox.com/u/12579112/logs/interrupts.log
> http://dl.dropbox.com/u/12579112/logs/lspci.log
> 
> I'm willing to help with whatever is needed.
> 
> 
> -- 
> Javier Marcet <jmarcet@xxxxxxxxx>
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
> 


