
Re: [Xen-devel] an assertion triggered when running Xen on a HSW desktop



On Tue, Jan 15, 2019 at 03:49:07AM -0700, Jan Beulich wrote:
> >>> On 15.01.19 at 11:27, <roger.pau@xxxxxxxxxx> wrote:
> > On Tue, Jan 15, 2019 at 03:16:01AM -0700, Jan Beulich wrote:
> >> >>> On 15.01.19 at 10:44, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> >>  -----Original Message-----
> >> > [snip]
> >> >> >> (XEN) Xen call trace:
> >> >> >> (XEN)    [<ffff82d08025ccbc>] iommu_map+0xba/0x176
> >> >> >> (XEN)    [<ffff82d0804182d8>] iommu_hwdom_init+0xef/0x220
> >> >> >> (XEN)    [<ffff82d08043716c>] dom0_construct_pvh+0x189/0x129e
> >> >> >> (XEN)    [<ffff82d08043e53c>] construct_dom0+0xd4/0xb14
> >> >> >> (XEN)    [<ffff82d08042d8ef>] __start_xen+0x2710/0x2830
> >> >> >> (XEN)    [<ffff82d0802000f3>] __high_start+0x53/0x55
> >> >> >> (XEN)
> >> >> >> (XEN)
> >> >> >> (XEN) ****************************************
> >> >> >> (XEN) Panic on CPU 0:
> >> >> >> (XEN) Assertion 'IS_ALIGNED(dfn_x(dfn), (1ul << page_order))' failed at iommu.c:323
> >> >> >> (XEN) ****************************************
> >> >> >
> >> >> >Oh, this was added by Paul quite recently. You seem to be using a
> >> >> >rather old commit (a5b0eb3636); is there any reason for using such an
> >> >> >old baseline?
> >> >> 
> >> >> I was using the master branch. Your patch below did fix this issue.
> >> > 
> >> > Given this failure and the fact that valid orders differ between
> >> > different architectures, I propose we change the argument to the
> >> > iommu_map/unmap wrapper functions from an order to a count, thus
> >> > making it clear that there is no alignment restriction.
> >> 
> >> But the whole idea is for there to be an alignment restriction, such
> >> that it is easy to determine whether large page mappings can be
> >> used to satisfy the request. What's the exact case where a caller
> >> absolutely has to pass in a mis-aligned (dfn,size) tuple?
> > 
> > Taking the PVH Dom0 builder as an example, it's possible to have a RAM
> > region that starts at an address that is only 4K aligned. The natural
> > operation in that case would be to try to allocate a memory region as
> > big as possible, up to the next 2MB boundary. Hence it would be valid
> > to attempt to populate this address (aligned only to 4K) using an
> > order > 0 and < 9 (2MB order). The alternative, if the asserts are not
> > removed, would be to open-code a loop in the caller that iterates
> > creating a bunch of order 0 mappings up to the 2MB boundary. The
> > overhead in that case would be quite big, so I don't think we want to
> > go down that route (we would also end up with a bunch of loops in the
> > callers).
> 
> I'm afraid I'm now more confused than before: If there's a RAM
> region aligned to no better than 4k, how can this possibly be
> populated with an order-greater-than-zero allocation?

Why not? You can, for example, request a memory chunk of order 5 from
alloc_domheap_pages and pass that to guest_physmap_add_page. That
would be a perfectly fine operation to perform in order to reach a
memory address that's aligned to a 2MB boundary.
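
For reference, a minimal sketch of the kind of operation I mean (the
helper name is made up, and the real call sites in the Dom0 builder
are of course more involved):

    /* Illustrative only: place an order-5 (32 page, 128K) chunk at a
     * gfn that is only 4K aligned. */
    static int populate_order5_chunk(struct domain *d, gfn_t gfn)
    {
        struct page_info *pg = alloc_domheap_pages(d, 5, 0);

        if ( pg == NULL )
            return -ENOMEM;

        /* gfn (and hence the dfn that eventually reaches the IOMMU
         * mapping code) is not 32-page aligned, which is exactly the
         * kind of (dfn, order) tuple the ASSERT at iommu.c:323
         * rejects. */
        return guest_physmap_add_page(d, gfn, page_to_mfn(pg), 5);
    }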

The other option, as said above, is to force the caller to have a loop
that performs a bunch of order 0 guest_physmap_add_page calls until it
reaches a 2MB aligned address.
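
Something along these lines (again just a sketch, with a made up
helper name and simplified error handling):

    /* Open-coded alternative: one order 0 allocation and map per 4K
     * page until the next 2MB boundary is reached. */
    static int populate_up_to_2M(struct domain *d, gfn_t gfn)
    {
        while ( !IS_ALIGNED(gfn_x(gfn), 1ul << PAGE_ORDER_2M) )
        {
            struct page_info *pg = alloc_domheap_pages(d, 0, 0);
            int rc;

            if ( pg == NULL )
                return -ENOMEM;

            rc = guest_physmap_add_page(d, gfn, page_to_mfn(pg), 0);
            if ( rc )
                return rc;

            gfn = gfn_add(gfn, 1);
        }

        return 0;
    }

That's up to 511 allocations and map operations instead of a single
higher order one, hence the overhead concern.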

> And even
> if I re-phrased your reply to mean an arbitrary alignment / order
> less than 9, then populating this with such a smaller order is still
> fine, and requesting the IOMMU mapping with that smaller order
> is still not going to trip the ASSERT() in question.

But the caller is then forced to iterate over the region and populate
it with order 0 calls to guest_physmap_add_page, which introduces a
lot of overhead.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

