
RE: [PATCH] x86/hvm: don't treat MMIO pages as special ones regarding cache attributes



> -----Original Message-----
> From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> Sent: 10 September 2020 12:06
> To: paul@xxxxxxx
> Cc: 'Jan Beulich' <jbeulich@xxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx;
> 'Andrew Cooper' <andrew.cooper3@xxxxxxxxxx>; 'Wei Liu' <wl@xxxxxxx>
> Subject: Re: [PATCH] x86/hvm: don't treat MMIO pages as special ones
> regarding cache attributes
> 
> On Thu, Sep 10, 2020 at 11:53:10AM +0100, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > > Sent: 10 September 2020 11:35
> > > To: Jan Beulich <jbeulich@xxxxxxxx>
> > > Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Andrew Cooper
> > > <andrew.cooper3@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>;
> > > Paul Durrant <paul@xxxxxxx>
> > > Subject: Re: [PATCH] x86/hvm: don't treat MMIO pages as special ones
> > > regarding cache attributes
> > >
> > > On Thu, Sep 10, 2020 at 11:27:49AM +0200, Jan Beulich wrote:
> > > > On 09.09.2020 16:50, Roger Pau Monne wrote:
> > > > > MMIO regions below the maximum address on the memory map can have a
> > > > > backing page struct that's shared with dom_io (see x86
> > > > > arch_init_memory and its usage of share_xen_page_with_guest), and
> > > > > thus also satisfy the is_special_page check because the page has the
> > > > > Xen heap bit set.
> > > > >
> > > > > This is incorrect for MMIO regions when is_special_page is used by
> > > > > epte_get_entry_emt, as it will force direct MMIO regions mapped into
> > > > > the guest p2m to have the cache attributes set to write-back.
> > > > >
> > > > > Add an extra check to epte_get_entry_emt in order to detect pages
> > > > > shared with dom_io (ie: MMIO regions) and not force them to the
> > > > > write-back cache type in that case.
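A minimal sketch of the kind of check the commit message describes, assuming the existing Xen helpers is_special_page() and page_get_owner(), the dom_io system domain, and an mfn variable naming the frame epte_get_entry_emt is classifying; this is not the actual patch hunk:

    /*
     * Sketch only: a page that passes is_special_page() but is owned by
     * dom_io is really direct MMIO, so don't let it be forced to write-back.
     */
    const struct page_info *page = mfn_to_page(mfn);

    if ( is_special_page(page) && page_get_owner(page) != dom_io )
        return MTRR_TYPE_WRBACK;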
> > > >
> > > > Did you consider the alternative of not marking those pages as Xen
> > > > heap ones? In particular when looking at it from this angle I
> > > > consider it at least odd for non-RAM (or more precisely non-heap)
> > > > pages to get marked this way.
> > >
> > > I wasn't sure whether this could cause issues in other places in the
> > > code that might rely on this fact, and such a change seemed riskier
> > > IMO.
> > >
> > > > And I can't currently see anything
> > > > requiring them to be marked as such - them being owned by DomIO is
> > > > all that's needed, it seems.
> > >
> > > Should those pages then simply be assigned to dom_io, with the
> > > appropriate flags set (PGC_allocated | 1), or should
> > > share_xen_page_with_guest be modified to not set the PGC_xen_heap
> > > flag?
> > >
> > > I see that such an addition was done in a2b4b8c2041, but I'm afraid I
> > > don't fully understand why share_xen_page_with_guest needs to mark
> > > pages as Xen heap.
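A rough illustration of the first option mentioned above (assigning the page straight to dom_io so that PGC_xen_heap never gets set); the surrounding details and exact refcount handling are assumptions for illustration only, not a proposed hunk:

    struct page_info *page = mfn_to_page(mfn);

    /* Make dom_io the owner and take the usual allocation reference. */
    page_set_owner(page, dom_io);
    smp_wmb();                             /* owner visible before refcount */
    page->count_info |= PGC_allocated | 1; /* allocated + one general ref */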
> > >
> >
> > In general, if they are not marked Xen heap then they will be considered
> > domheap and will go through the normal freeing path on domain destruction.
> > Of course this doesn't apply to a system domain that never gets destroyed.
> 
> Hm, OK, the original commit (a2b4b8c2041) mentions that marking them
> as Xen heap is done to signal that the virtual address for those is
> not needed (and not available?).
> 
> For the MMIO regions I'm not sure it matters much, since those are not
> assigned to the domain, but just mapped into it. The MMIO regions are
> not shared with the domain accessing them, but rather with dom_io.
> 
> It's still not clear to me which option would be better: modify
> share_xen_page_with_guest to not mark pages as Xen heap, or implement
> something different to assign MMIO pages to dom_io without setting
> the Xen heap flag.

Would using map_mmio_regions() work?
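For reference, a sketch of what that might look like, assuming the map_mmio_regions() declaration from xen/include/xen/p2m-common.h and that d, gfn, nr and mfn describe the MMIO range in question:

    /*
     * map_mmio_regions() only inserts p2m entries for the range; it neither
     * assigns the pages to d nor marks them as Xen heap.
     */
    int rc = map_mmio_regions(d, gfn, nr, mfn);

    if ( rc != 0 )
        /* handle failure (or preemption) as the existing callers do */;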

  Paul

> 
> Thanks, Roger.