Re: [RFC PATCH] iommu: make no-quarantine mean no-quarantine
On 4/30/21, 3:15 AM, Jan Beulich wrote:
> So far you didn't tell us what the actual crash was. I guess it's not
> even clear to me whether it's Xen or qemu that did crash for you. But
> I have to also admit that until now it wasn't really clear to me that
> you ran Xen _under_ qemu - instead I was assuming there was an
> interaction problem with a qemu serving a guest.

I explained this in my OP; sorry if it was not clear:

> Background: I am setting up a QEMU-based development and testing environment
> for the Crucible team at Star Lab that includes emulated PCIe devices for
> passthrough and hotplug. I encountered an issue with `xl pci-assignable-add`
> that causes the host QEMU to rapidly allocate memory until getting
> OOM-killed.

As soon as Xen writes the IQT register, the host QEMU process locks up,
starts allocating several hundred MB/sec, and is soon OOM-killed by the
host kernel.

On 4/30/21, 3:15 AM, Jan Beulich wrote:
> Interesting. This then leaves the question whether we submit a bogus
> command, or whether qemu can't deal (correctly) with a valid one here.

I did some extra debugging to inspect the index values being written to
IQT as well as the invalidation descriptors themselves, and everything
appeared fine to me on Xen's end. In fact, the descriptor written by
queue_invalidate_context_sync upon map into dom_io is entirely identical
to the one it writes upon unmap from dom0, which works without issue.
This points towards a QEMU bug to me:

(gdb) c
Thread 1 hit Breakpoint 4, queue_invalidate_context_sync (...) at qinval.c:101
(gdb) bt
#0  queue_invalidate_context_sync (...) at qinval.c:85
#1  flush_context_qi (...) at qinval.c:341
#2  iommu_flush_context_device (...) at iommu.c:400
#3  domain_context_unmap_one (...) at iommu.c:1606
#4  domain_context_unmap (...) at iommu.c:1671
#5  reassign_device_ownership (...) at iommu.c:2396
#6  intel_iommu_assign_device (...) at iommu.c:2476
#7  assign_device (...) at pci.c:1545
#8  iommu_do_pci_domctl (...) at pci.c:1732
#9  iommu_do_domctl (...) at iommu.c:539
...
(gdb) print index
$2 = 552
(gdb) print qinval_entry->q.cc_inv_dsc
$3 = {
  lo = {
    type = 1,
    granu = 3,
    res_1 = 0,
    did = 0,
    sid = 256,
    fm = 0,
    res_2 = 0
  },
  hi = {
    res = 0
  }
}
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#1  queue_invalidate_wait (...) at qinval.c:159
#2  invalidate_sync (...) at qinval.c:207
#3  queue_invalidate_context_sync (...) at qinval.c:106
...
(gdb) print tail
$4 = 553
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#3  queue_invalidate_iotlb_sync (...) at qinval.c:120
#4  flush_iotlb_qi (...) at qinval.c:376
#5  iommu_flush_iotlb_dsi (...) at iommu.c:499
#6  domain_context_unmap_one (...) at iommu.c:1611
#7  domain_context_unmap (...) at iommu.c:1671
...
(gdb) print tail
$5 = 554
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#1  queue_invalidate_wait (...) at qinval.c:159
#2  invalidate_sync (...) at qinval.c:207
#3  queue_invalidate_iotlb_sync (...) at qinval.c:143
...
(gdb) print tail
$6 = 555
(gdb) c
Thread 1 hit Breakpoint 5, qinval_next_index (...) at qinval.c:58
(gdb) bt
#0  qinval_next_index (...) at qinval.c:58
#1  queue_invalidate_context_sync (...) at qinval.c:86
#2  flush_context_qi (...) at qinval.c:341
#3  iommu_flush_context_device (...) at iommu.c:400
#4  domain_context_mapping_one (...) at iommu.c:1436
#5  domain_context_mapping (...) at iommu.c:1510
#6  reassign_device_ownership (...) at iommu.c:2412
...
(gdb) print tail
$7 = 556
(gdb) c
Thread 1 hit Breakpoint 4, queue_invalidate_context_sync (...) at qinval.c:101
(gdb) print index
$8 = 556
(gdb) print qinval_entry->q.cc_inv_dsc
$9 = {
  lo = {
    type = 1,
    granu = 3,
    res_1 = 0,
    did = 0,
    sid = 256,
    fm = 0,
    res_2 = 0
  },
  hi = {
    res = 0
  }
}
(gdb) c
Continuing.
Remote connection closed
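For reference, here is how I read the cc_inv_dsc dumped above. This is just
an illustrative standalone snippet of mine, not Xen or QEMU code; the field
layout follows the VT-d spec's context-cache invalidation descriptor and the
struct GDB printed:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Values copied from the $3/$9 dumps above. */
    unsigned type = 1;    /* 1 = context-cache invalidation descriptor */
    unsigned granu = 3;   /* 3 = device-selective invalidation */
    unsigned did = 0;     /* domain-id */
    uint16_t sid = 256;   /* source-id, i.e. the requester's BDF */

    /* A source-id is just bus[15:8], device[7:3], function[2:0]. */
    printf("type=%u granu=%u did=%u sid=0x%04x -> %02x:%02x.%x\n",
           type, granu, did, sid, sid >> 8, (sid >> 3) & 0x1f, sid & 0x7);
    return 0;
}

That prints "type=1 granu=3 did=0 sid=0x0100 -> 01:00.0", i.e. a
device-selective context-cache invalidation for 0000:01:00.0, the e1000e
being reassigned. Nothing about the descriptor looks malformed to me.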
The corresponding dom0 and Xen output looked like:

[ 31.002214] e1000e 0000:01:00.0 eth1: removed PHC
[ 31.694270] e1000e: eth1 NIC Link is Down
[ 31.717849] pciback 0000:01:00.0: seizing device
[ 31.719464] Already setup the GSI :20
(XEN) [ 83.572804] [VT-D]d0:PCIe: unmap 0000:01:00.0
(XEN) [ 808.092310] [VT-D]d32753:PCIe: map 0000:01:00.0

Good day,
Scott