
Re: [Xen-devel] [PATCH] xen/domctl: lower loglevel of XEN_DOMCTL_memory_mapping



>>> On 11.09.15 at 14:05, <malcolm.crossley@xxxxxxxxxx> wrote:
> The flush_all(FLUSH_CACHE) in mtrr.c will result in a flush_area_mask for
> all CPUs in the host.
> It will take more time to issue an IPI to all logical cores the more cores
> there are. I admit that x2apic_cluster mode may speed this up, but not all
> hosts will have that enabled.
> 
> The data flush will force all data out to the memory controllers, and it's
> possible that CPUs in different packages have cached data all corresponding
> to a particular memory controller, which will become a bottleneck.
> 
> In the worst case, with a large delay between XEN_DOMCTL_memory_mapping
> hypercalls and on an 8-socket system, you may end up writing out 45MB (L3
> cache) * 8 = 360MB to a single memory controller for every 64 pages (256KiB)
> of domU p2m updated.

True.
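
To put rough numbers on that worst case (a back-of-the-envelope sketch only,
assuming a hypothetical 1GiB BAR together with the 45MB-L3, 8-socket figures
quoted above):

  writeback per hypercall:    8 sockets * 45MB L3            = 360MB
  hypercalls for a 1GiB BAR:  1GiB / 256KiB (64 pages/call)  = 4096
  total cache writeback:      4096 * 360MB                   ~= 1.4TB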

Considering that BARs need to be properly aligned in both guest
and host address spaces, I wonder why we aren't using large
pages to map such huge BARs then. As it looks, this would require
redefining the semantics of the domctl once again, but that's not
a big problem, since it's a domctl. I'll see if I can cook up something
(assuming that hosts used for passing through devices with such
huge BARs will have support for at least 2MB pages in both EPT
[NPT always has] and the IOMMU).
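
Roughly what I have in mind - a sketch only, not the actual hypervisor code:
map_mmio_page()/map_mmio_superpage() below are hypothetical stand-ins for the
real p2m/IOMMU mapping primitives, and SUPERPAGE_PAGES assumes 2MB pages are
usable for the whole range on the host:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SUPERPAGE_ORDER  9
#define SUPERPAGE_PAGES  (1UL << SUPERPAGE_ORDER)  /* 512 x 4KiB = 2MiB */

/* Hypothetical stand-ins for the real p2m/IOMMU mapping primitives. */
static bool map_mmio_page(uint64_t gfn, uint64_t mfn)
{
    printf("4KiB map: gfn %#" PRIx64 " <- mfn %#" PRIx64 "\n", gfn, mfn);
    return true;
}

static bool map_mmio_superpage(uint64_t gfn, uint64_t mfn)
{
    printf("2MiB map: gfn %#" PRIx64 " <- mfn %#" PRIx64 "\n", gfn, mfn);
    return true;
}

/* Map nr_mfns MMIO frames starting at mfn into the guest at gfn. */
static int map_mmio_range(uint64_t gfn, uint64_t mfn, uint64_t nr_mfns)
{
    while ( nr_mfns )
    {
        /*
         * Use a 2MiB entry when both frame numbers are superpage aligned
         * and at least a whole superpage of the range is left.
         */
        if ( !((gfn | mfn) & (SUPERPAGE_PAGES - 1)) &&
             nr_mfns >= SUPERPAGE_PAGES )
        {
            if ( !map_mmio_superpage(gfn, mfn) )
                return -1;
            gfn += SUPERPAGE_PAGES;
            mfn += SUPERPAGE_PAGES;
            nr_mfns -= SUPERPAGE_PAGES;
        }
        else
        {
            if ( !map_mmio_page(gfn, mfn) )
                return -1;
            gfn++;
            mfn++;
            nr_mfns--;
        }
    }
    return 0;
}

int main(void)
{
    /* Example: one unaligned 4KiB head page, then three full 2MiB chunks. */
    return map_mmio_range(0xc0000 - 1, 0x1c0000 - 1, 3 * SUPERPAGE_PAGES + 1);
}

Falling back to 4KiB mappings for any unaligned head or tail keeps the domctl
semantics simple while letting a suitably aligned huge BAR be covered almost
entirely by 2MB entries, and hence far fewer mapping calls and cache flushes.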

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

