
Re: [Xen-devel] [PATCH v2 3/4] x86/HVM: implement memory read caching



On Thu, Sep 20, 2018 at 12:39:59AM -0600, Jan Beulich wrote:
> >>> On 19.09.18 at 17:57, <wei.liu2@xxxxxxxxxx> wrote:
> > On Tue, Sep 11, 2018 at 07:15:19AM -0600, Jan Beulich wrote:
> >> Emulation requiring device model assistance uses a form of instruction
> >> re-execution, assuming that the second (and any further) pass takes
> >> exactly the same path. This is a valid assumption as far as use of CPU
> >> registers goes (as those can't change without any other instruction
> >> executing in between), but is wrong for memory accesses. In particular
> >> it has been observed that Windows might page out buffers underneath an
> >> instruction currently under emulation (hitting between two passes). If
> >> the first pass translated a linear address successfully, any subsequent
> >> pass needs to do so too, yielding the exact same translation.
> > 
> > Not sure I follow. If the buffers are paged out between two passes, how
> > would caching the translation help?  Yes you get the same translation
> > result but the content of the address pointed to by the translation
> > result could be different, right?
> 
> If we accessed that memory, yes. But the whole point here is to avoid
> memory accesses during retry processing, when the same access has
> already occurred during an earlier round. As noted on another sub-
> thread, the term "cache" here may be a little misleading, as it's not
> there to improve performance (any gain there would just be a desirable
> side effect), but to guarantee correctness. I've chosen this naming for
> the lack of a better alternative.
> 
> So during replay/retry, since all previously performed accesses come
> from this cache, inductively the result is going to be the same as
> that of the previous run. It's just that, for now, we use _this_
> cache only for page table accesses. But don't forget that there is
> at least one other cache in place (struct hvm_vcpu_io's
> mmio_cache[]).
> 
> For the paged-out scenario this means that despite the leaf page
> table entry having changed to some non-present one between the
> original run through emulation code and the replay/retry after
> having received qemu's reply, since that PTE won't be read again
> the original translation will be (re)used.

Right. I got your idea up to this point.

I would appreciate it if you could put the following paragraphs into the
commit message.

> 
> For the actual data page in this scenario, while you're right that its
> contents may have changed, there are a couple of aspects to take
> into consideration:
> - We must be talking about an insn accessing two locations (two
>   memory ones, one of which is MMIO, or a memory and an I/O one).
> - If the non I/O / MMIO side is being read, the re-read (if it occurs
>   at all) is having its result discarded, by taking the shortcut through
>   the first switch()'s STATE_IORESP_READY case in hvmemul_do_io().
>   Note how, among all the re-issue sanity checks there, we avoid
>   comparing the actual data.
> - If the non I/O / MMIO side is being written, it is the OS's
>   responsibility to avoid actually moving page contents to disk while
>   there might still be a write access in flight - this is no different in
>   behavior from bare hardware.
> - Read-modify-write accesses are, as always, complicated, and
>   while we deal with them better nowadays than we did in the past,
>   we're still not quite there to guarantee hardware-like behavior in
>   all cases anyway. Nothing is getting worse by the changes made
>   here, afaict.
> 
> Jan
> 
> 

