
Re: [Xen-devel] [PATCH v2 1/4] x86/dom0: prevent access to MMCFG areas for PVH Dom0



(Adding Chao again because my MUA seems to drop him each time)

On Mon, Sep 04, 2017 at 10:00:00AM +0100, Roger Pau Monné wrote:
> On Mon, Sep 04, 2017 at 02:25:10PM +0800, Chao Gao wrote:
> > On Thu, Aug 31, 2017 at 11:09:48AM +0100, Roger Pau Monne wrote:
> > >I tested Nehalem, Sandy Bridge and Haswell, but sadly not Ivy Bridge
> > >(in fact I didn't even know about Ivy Bridge, that's why I said all
> > >pre-Haswell).
> > >
> > >In fact I'm now trying with a Nehalem processor that seems to work, so
> > >whatever this issue is it certainly doesn't affect all models or
> > >chipsets.
> > 
> > Hi, Roger.
> > 
> > Last week, I borrowed a Sandy Bridge machine with an Intel(R) Xeon(R)
> > E5-2690 @ 2.7GHz and tested with 'dom0=pvh', but I didn't see the machine
> > hang.
> > 
> > I also tested on Haswell and found that the RMRRs in the DMAR are
> > incorrect on my Haswell box. The e820 on that machine is:
> > (XEN) [    0.000000] Xen-e820 RAM map:
> > (XEN) [    0.000000]  0000000000000000 - 000000000009a400 (usable)
> > (XEN) [    0.000000]  000000000009a400 - 00000000000a0000 (reserved)
> > (XEN) [    0.000000]  00000000000e0000 - 0000000000100000 (reserved)
> > (XEN) [    0.000000]  0000000000100000 - 000000006ff84000 (usable)
> > (XEN) [    0.000000]  000000006ff84000 - 000000007ac51000 (reserved)
> > (XEN) [    0.000000]  000000007ac51000 - 000000007b681000 (ACPI NVS)
> > (XEN) [    0.000000]  000000007b681000 - 000000007b7cf000 (ACPI data)
> > (XEN) [    0.000000]  000000007b7cf000 - 000000007b800000 (usable)
> > (XEN) [    0.000000]  000000007b800000 - 0000000090000000 (reserved)
> > (XEN) [    0.000000]  00000000fed1c000 - 00000000fed20000 (reserved)
> > (XEN) [    0.000000]  00000000ff400000 - 0000000100000000 (reserved)
> > (XEN) [    0.000000]  0000000100000000 - 0000002080000000 (usable)
> > 
> > And the RMRRs in DMAR are:
> > (XEN) [    0.000000] [VT-D]found ACPI_DMAR_RMRR:
> > (XEN) [    0.000000] [VT-D] endpoint: 0000:05:00.0
> > (XEN) [    0.000000] [VT-D]dmar.c:638:   RMRR region: base_addr 723b4000 end_addr 7a3f3fff
> > (XEN) [    0.000000] [VT-D]found ACPI_DMAR_RMRR:
> > (XEN) [    0.000000] [VT-D] endpoint: 0000:00:1d.0
> > (XEN) [    0.000000] [VT-D] endpoint: 0000:00:1a.0
> > (XEN) [    0.000000] [VT-D]dmar.c:638:   RMRR region: base_addr 723ac000 end_addr 723aefff
> > (Endpoint 05:00.0 is a RAID bus controller. Endpoints 00:1d.0 and 00:1a.0
> > are USB controllers.)
> > 
> > After DMA remapping is enabled, two DMA translation faults are reported
> > by VT-d:
> > (XEN) [    9.547924] [VT-D]iommu_enable_translation: iommu->reg = ffff82c00021b000
> > (XEN) [    9.550620] [VT-D]iommu_enable_translation: iommu->reg = ffff82c00021d000
> > (XEN) [    9.553327] [VT-D]iommu.c:921: iommu_fault_status: Primary Pending Fault
> > (XEN) [    9.555906] [VT-D]DMAR:[DMA Read] Request device [0000:00:1a.0] fault addr 7a3f5000, iommu reg = ffff82c00021d000
> > (XEN) [    9.558537] [VT-D]DMAR: reason 06 - PTE Read access is not set
> > (XEN) [    9.559860] print_vtd_entries: iommu #1 dev 0000:00:1a.0 gmfn 7a3f5
> > (XEN) [    9.561179]     root_entry[00] = 107277c001
> > (XEN) [    9.562447]     context[d0] = 2_1072c06001
> > (XEN) [    9.563776]     l4[000] = 9c0000202f171107
> > (XEN) [    9.565125]     l3[001] = 9c0000202f152107
> > (XEN) [    9.566483]     l2[1d1] = 9c000010727ce107
> > (XEN) [    9.567821]     l1[1f5] = 8000000000000000
> > (XEN) [    9.569168]     l1[1f5] not present
> > (XEN) [    9.570502] [VT-D]DMAR:[DMA Read] Request device [0000:00:1d.0] fault addr 7a3f4000, iommu reg = ffff82c00021d000
> > (XEN) [    9.573147] [VT-D]DMAR: reason 06 - PTE Read access is not set
> > (XEN) [    9.574488] print_vtd_entries: iommu #1 dev 0000:00:1d.0 gmfn 7a3f4
> > (XEN) [    9.575819]     root_entry[00] = 107277c001
> > (XEN) [    9.577129]     context[e8] = 2_1072c06001
> > (XEN) [    9.578439]     l4[000] = 9c0000202f171107
> > (XEN) [    9.579778]     l3[001] = 9c0000202f152107
> > (XEN) [    9.581111]     l2[1d1] = 9c000010727ce107
> > (XEN) [    9.582482]     l1[1f4] = 8000000000000000
> > (XEN) [    9.583812]     l1[1f4] not present
> > (XEN) [   10.520172] Unable to find XEN_ELFNOTE_PHYS32_ENTRY address
> > (XEN) [   10.521499] Failed to load Dom0 kernel
> > (XEN) [   10.532171] 
> > (XEN) [   10.535464] ****************************************
> > (XEN) [   10.542636] Panic on CPU 0:
> > (XEN) [   10.547394] Could not set up DOM0 guest OS
> > (XEN) [   10.553605] ****************************************
> > 
> > The fault addresses the devices failed to access are marked as reserved
> > in the e820 map, but they aren't covered by the RMRRs reported in the
> > DMAR. So I think we can conclude that some existing BIOSes don't expose
> > correct RMRRs to the OS via the DMAR table, and that we need a workaround
> > such as iommu_inclusive_mapping to deal with this kind of BIOS for both
> > PV Dom0 and PVH Dom0.
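
[Not part of the original mails, just a cross-check of the numbers quoted
above: both faulting addresses (7a3f4000 and 7a3f5000) lie just past the end
of the 05:00.0 RMRR (which ends at 7a3f3fff) but inside the e820 reserved
range 6ff84000-7ac51000. A minimal, self-contained C sketch with the
constants copied from the logs, reproducing that comparison:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Inclusive ranges copied from the RMRR output above. */
  struct range { uint64_t start, end; };

  static const struct range rmrr[] = {
      { 0x723b4000, 0x7a3f3fff },  /* RMRR for 0000:05:00.0 */
      { 0x723ac000, 0x723aefff },  /* RMRR for 0000:00:1d.0 / 00:1a.0 */
  };

  /* e820 reserved entry 6ff84000-7ac51000 (end is exclusive in the map). */
  static const struct range e820_rsvd = { 0x6ff84000, 0x7ac51000 - 1 };

  static int in_range(uint64_t a, const struct range *r)
  {
      return a >= r->start && a <= r->end;
  }

  int main(void)
  {
      /* Faulting addresses from the VT-d fault messages above. */
      const uint64_t fault[] = { 0x7a3f5000, 0x7a3f4000 };
      unsigned int i, j;

      for ( i = 0; i < 2; i++ )
      {
          int in_rmrr = 0;

          for ( j = 0; j < 2; j++ )
              in_rmrr |= in_range(fault[i], &rmrr[j]);

          printf("fault %" PRIx64 ": in RMRR: %c, in e820 reserved: %c\n",
                 fault[i], in_rmrr ? 'y' : 'n',
                 in_range(fault[i], &e820_rsvd) ? 'y' : 'n');
      }

      return 0;
  }

It prints "in RMRR: n, in e820 reserved: y" for both addresses, which is
exactly the mismatch described above.]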
> 
> So your box seems to be capable of generating faults. Missing RMRR
> regions are (sadly) expected, but at least you get faults and not a
> complete hang. Which chipset does this box have? Is it a C600/X79?
> 
> > 
> > As for the machine hang Roger observed, I have no idea about the cause.
> > Roger, have you ever seen the VT-d on that machine report a DMA
> > translation fault? If not, can you trigger one on native? That would tell
> > us whether the hardware's fault reporting works correctly or whether
> > there is a bug in the Xen code. What do you think of this approach?
> 
> Is there any simple way to create such a fault? Does the IOMMU have
> some kind of self-test mechanism that can be used to generate a synthetic
> fault?
> 
> Thanks, Roger.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
