Re: [Xen-devel] [PATCH v2] x86/mm/p2m: don't needlessly limit MMIO mapping order to 4k
>>> On 17.10.18 at 16:24, <paul.durrant@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -2081,14 +2081,11 @@ static unsigned int mmio_order(const struct domain *d,
>                                unsigned long start_fn, unsigned long nr)
>  {
>      /*
> -     * Note that the !iommu_use_hap_pt() here has three effects:
> -     * - cover iommu_{,un}map_page() not having an "order" input yet,
> -     * - exclude shadow mode (which doesn't support large MMIO mappings),
> -     * - exclude PV guests, should execution reach this code for such.
> -     * So be careful when altering this.
> +     * PV guests or shadow-mode HVM guests must be restricted to 4k
> +     * mappings.

Since you've already posted a patch to add order parameters to IOMMU
map/unmap, I'd prefer the respective part of the comment to go away
only when the order value really can be passed through. This then
requires permitting non-zero values only when the IOMMUs also allow
for the respective page sizes.

I am in particular not convinced that the time needed to carry out the
hypercall is going to be low enough even for 512 4k pages: you need to
take into account flushes, including those potentially needed for ATS.
You don't provide any proof that flushes are always delayed and
batched, nor do I think this is uniformly the case.

> @@ -2096,8 +2093,12 @@ static unsigned int mmio_order(const struct domain *d,
>             * set_typed_p2m_entry() when it needs to zap M2P entries
>             * for a RAM range.
>             */ &&
> -         !(start_fn & ((1UL << PAGE_ORDER_1G) - 1)) && (nr >> PAGE_ORDER_1G) &&
> -         hap_has_1gb )
> +         !(start_fn & ((1UL << PAGE_ORDER_1G) - 1)) &&
> +         (nr >> PAGE_ORDER_1G) &&
> +         hap_has_1gb &&
> +         /* disable 1G mappings if we need to keep the IOMMU in sync */
> +         !need_iommu_pt_sync(d)

I have to admit that I don't understand this v2 addition. How is
keeping-in-sync relevant for 1G but not 2M pages?

> +         )

Misplaced paren.

Jan
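
To illustrate the gating asked for above, a minimal sketch of
mmio_order() that permits a non-zero order only when the IOMMU side can
also map at that size. This is a sketch only, not code from the patch
under review: iommu_supports_order() is a hypothetical helper - no such
helper exists at this point, and its real shape would be dictated by the
IOMMU map/unmap order-parameter series mentioned in the review.

    /*
     * Sketch only, under the assumptions stated above.
     * iommu_supports_order() is hypothetical, standing in for whatever
     * query the order-parameter series ends up providing.
     */
    static unsigned int mmio_order(const struct domain *d,
                                   unsigned long start_fn, unsigned long nr)
    {
        /* PV guests and shadow-mode HVM guests: 4k mappings only. */
        if ( !paging_mode_hap(d) )
            return PAGE_ORDER_4K;

        /* 1G only if aligned, large enough, supported by HAP and IOMMU. */
        if ( !(start_fn & ((1UL << PAGE_ORDER_1G) - 1)) &&
             (nr >> PAGE_ORDER_1G) && hap_has_1gb &&
             (!need_iommu_pt_sync(d) ||
              iommu_supports_order(d, PAGE_ORDER_1G)) )
            return PAGE_ORDER_1G;

        /* Likewise for 2M. */
        if ( !(start_fn & ((1UL << PAGE_ORDER_2M) - 1)) &&
             (nr >> PAGE_ORDER_2M) && hap_has_2mb &&
             (!need_iommu_pt_sync(d) ||
              iommu_supports_order(d, PAGE_ORDER_2M)) )
            return PAGE_ORDER_2M;

        return PAGE_ORDER_4K;
    }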
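
As to the misplaced parenthesis: per Xen coding style, the closing paren
of the if() would go at the end of the final operand line, separated by
a blank, rather than on a line of its own. Using the condition from the
hunk above, that would read:

    if ( 0 /* ... (comment as in the context above) ... */ &&
         !(start_fn & ((1UL << PAGE_ORDER_1G) - 1)) &&
         (nr >> PAGE_ORDER_1G) &&
         hap_has_1gb &&
         /* disable 1G mappings if we need to keep the IOMMU in sync */
         !need_iommu_pt_sync(d) )
        return PAGE_ORDER_1G;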