
Re: [Xen-devel] Shattering superpages impact on IOMMU in Xen





On 6 Apr 2017 at 22:22, "Julien Grall" <julien.grall@xxxxxxx> wrote:
Hi Oleksandr,
Hi Julien.


On 04/06/2017 07:59 PM, Oleksandr Tyshchenko wrote:
Hi, guys.

It seems it was just my fault. The issue wasn't really in the shattering
itself; the shattering just increased the probability of IOMMU page
faults occurring. I didn't do a clean_dcache for the page table entry
after updating it. With clean_dcache added, I no longer see page faults
when shattering superpages!
BTW, can I configure the domheap pages (which I am using for the IOMMU
page tables) to be uncached? What do you think?
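
For reference, a minimal sketch of the fixed update path, assuming a
non-coherent IOMMU walking the tables from memory. The ipmmu_pte_t type,
ipmmu_set_pte() name and the raw "dc cvac" are illustrative stand-ins
for what the real driver and Xen's clean_dcache helpers do, not the
actual IPMMU code:

#include <stdint.h>

typedef uint64_t ipmmu_pte_t;

/* Clean (write back) the data cache line containing 'addr' to the
 * point of coherency, then wait for completion. */
static inline void clean_dcache_entry(const void *addr)
{
    asm volatile("dc cvac, %0\n\t"
                 "dsb sy"
                 : : "r" (addr) : "memory");
}

/* Write a new entry and push it out of the cache so an IOMMU that
 * cannot snoop the CPU caches reads the updated value from memory. */
static inline void ipmmu_set_pte(ipmmu_pte_t *entry, ipmmu_pte_t val)
{
    *entry = val;               /* single 64-bit store: atomic update */
    clean_dcache_entry(entry);  /* make it visible to the IOMMU walker */
}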

I am not sure whether you are suggesting configuring all the domheap pages to be uncached or only a limited number.
I meant a limited number: only the pages the IOMMU page tables were built from.

In the case where you switch all of the domheap to uncached, you will have trouble when copying data to/from the guest in hypercalls because of mismatched memory attributes.

In the case where you only configure some of the domheap pages, you will lose the advantage of the 1GB mappings of the domheap in the hypervisor page tables and will increase Xen's memory usage. Also, you will have to be careful when switching the domheap memory attributes back and forth between cached and uncached.
Got it. For me this means that performing a cache flush after updating a page table entry is the safest and easiest way.

If the IOMMU is not able to snoop the cache, then the way forward is to use a clean_dcache operation after writing a page table entry. This is how we deal with it in the p2m code.
Agree.

As we update the page table in an atomic way (no break-before-make (BBM) sequence) and the cause of the page faults has been found, I think the IPMMU driver can declare superpage capability now?
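
For illustration, a minimal sketch of how such a no-BBM shattering could
look. All helpers (alloc_iommu_table_page(), clean_dcache_range(),
ipmmu_flush_iotlb()) and the descriptor encoding (PTE_* bits,
PTE_ATTR_MASK) are hypothetical stand-ins, not the real IPMMU driver;
the point is only the ordering: fully build and clean the new level-3
table first, then atomically overwrite the superpage entry:

#include <stdint.h>
#include <stddef.h>

typedef uint64_t ipmmu_pte_t;

#define PTES_PER_TABLE  512
#define PAGE_SIZE       4096UL
#define PTE_VALID       (1UL << 0)              /* hypothetical encoding */
#define PTE_TABLE       (1UL << 1)
#define PTE_ATTR_MASK   0x0000000000000FFCUL    /* lower attribute bits */

extern void clean_dcache_range(const void *p, size_t len);   /* hypothetical */
extern void ipmmu_flush_iotlb(void);                          /* hypothetical */
extern ipmmu_pte_t *alloc_iommu_table_page(uint64_t *pa_out); /* hypothetical */

static void ipmmu_shatter_superpage(ipmmu_pte_t *superpage_entry)
{
    uint64_t table_pa;
    ipmmu_pte_t *table = alloc_iommu_table_page(&table_pa);
    uint64_t base_pa = *superpage_entry & ~(PAGE_SIZE - 1);
    uint64_t attrs   = *superpage_entry & PTE_ATTR_MASK;

    /* 1. Build the replacement table, mapping the same range with the
     *    same attributes as the superpage it replaces. */
    for ( unsigned int i = 0; i < PTES_PER_TABLE; i++ )
        table[i] = (base_pa + (uint64_t)i * PAGE_SIZE) | attrs | PTE_VALID;

    /* 2. Clean the whole new table before making it reachable, since
     *    the IOMMU walker does not snoop the CPU caches. */
    clean_dcache_range(table, PTES_PER_TABLE * sizeof(ipmmu_pte_t));

    /* 3. Atomically replace the block entry with a table entry: one
     *    64-bit store, no intermediate invalid entry (no BBM). */
    *superpage_entry = table_pa | PTE_VALID | PTE_TABLE;
    clean_dcache_range(superpage_entry, sizeof(*superpage_entry));

    /* 4. Drop any IOTLB entry still caching the old block mapping. */
    ipmmu_flush_iotlb();
}

Because the new table maps exactly the same addresses with the same
attributes, the IOMMU never sees an invalid translation while the entry
is switched, which is why the intermediate invalid step of BBM can be
skipped here.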

Thank you.

Cheers,

--
Julien Grall

