
Re: [Xen-devel] XenGT is still regressed on master



>>> On 11.03.19 at 11:21, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 11 March 2019 09:35
>> 
>> >>> On 08.03.19 at 19:37, <igor.druzhinin@xxxxxxxxxx> wrote:
>> > On 08/03/2019 16:14, Jan Beulich wrote:
>> >> One caveat though: What do you suggest to do with page straddling
>> >> accesses? The commit introducing these functions was, after all,
>> >> meant to deal with this special case. The in-flight request we
>> >> observe there could be for the leading or trailing part of the
>> >> access that's being re-executed. While these two could perhaps be
>> >> told apart by looking at the low address bits, what do you do for
>> >> multi-part (but perhaps well aligned) accesses like cmps*,
>> >> vgather*, or vscatter*?
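>> >>
>> >> To illustrate the low-address-bits idea (purely a sketch;
>> >> is_trailing_part() is a made-up helper, PAGE_MASK as in Xen): the
>> >> trailing part of a straddling access necessarily starts at a page
>> >> boundary, while the leading part doesn't, so
>> >>
>> >>     static bool is_trailing_part(unsigned long gla)
>> >>     {
>> >>         return !(gla & ~PAGE_MASK);   /* offset-in-page == 0 */
>> >>     }
>> >>
>> >> would tell them apart - but exactly this breaks down for cmps* and
>> >> friends, where each of the independent accesses may itself happen
>> >> to be page aligned.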
>> >
>> > I don't think there is a problem here: IOREQs are issued sequentially
>> > for each part of the access. hvmemul_linear_mmio_access() makes sure
>> > no chunk of the access straddles a page boundary, and we finish the
>> > requested part immediately after an IOREQ for it has been issued. I
>> > don't see how we could enter linear_{read,write}() for the wrong part
>> > unless the emulation layer above is broken.
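>> >
>> > The splitting is roughly this (simplified sketch, not the actual
>> > emulate.c code; issue_chunk() stands in for the per-chunk RAM copy
>> > or IOREQ):
>> >
>> >     while ( bytes > 0 )
>> >     {
>> >         /* Largest piece that stays within the current page. */
>> >         unsigned int chunk = min(bytes, (unsigned int)
>> >                                  (PAGE_SIZE - (gla & ~PAGE_MASK)));
>> >
>> >         issue_chunk(gla, buffer, chunk);  /* never crosses a page */
>> >         gla += chunk;
>> >         buffer += chunk;
>> >         bytes -= chunk;
>> >     }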
>> 
>> First of all, there's no way to bypass linear_{read,write}(). The
>> question is which path gets taken inside those functions.
>> 
>> Of the multiple memory accesses (besides the insn fetch) that an insn
>> may perform, some may touch RAM and some may touch MMIO. So for some
>> of the accesses we may _need_ to call hvm_copy_{from,to}_guest_linear(),
>> while for others we may want (need) to bypass it, as you have outlined.
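>> 
>> In outline, the split inside linear_read() is (simplified; the write
>> side is symmetric, and the remaining error cases are elided):
>> 
>>     rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
>>     switch ( rc )
>>     {
>>     case HVMTRANS_okay:              /* RAM - all done */
>>         return X86EMUL_OKAY;
>> 
>>     case HVMTRANS_bad_gfn_to_mfn:    /* no RAM mapping - MMIO path */
>>         return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
>>                                         hvmemul_ctxt, 0);
>>     }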
>> 
>> Perhaps, besides the examples given before, PUSH/POP from/to MMIO is
>> another relevant case to consider: the memory operand would live in
>> MMIO, while in the common case the stack (and hence the insn's other
>> access) would be in RAM.
>> 
>> And the page straddling change in particular was to deal with cases
>> where an individual access would have part of it map to RAM and
>> another part go to MMIO. I.e. we'd again go both routes within
>> linear_{read,write}() in the course of emulating a single insn.
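>> 
>> Concretely (addresses made up): with 4k pages, a 4-byte write to
>> linear address 0xffe splits into
>> 
>>     [0xffe, 0x1000)  - 2 bytes on page 0, say RAM  -> plain copy
>>     [0x1000, 0x1002) - 2 bytes on page 1, say MMIO -> IOREQ
>> 
>> so already a single MOV may take both routes within linear_write().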
>> 
>> The fact that we're issuing IOREQs sequentially doesn't help the
>> situation: upon re-execution all prior memory accesses get re-played.
>> It's just that previously completed IOREQs won't themselves be
>> re-issued, their results having been recorded in the struct
>> hvm_mmio_cache instances hanging off of struct vcpu. See
>> hvmemul_phys_mmio_access(), hvmemul_find_mmio_cache(), and
>> hvmemul_linear_mmio_access().
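>> 
>> Conceptually the replay is (simplified from what
>> hvmemul_phys_mmio_access() does):
>> 
>>     cache = hvmemul_find_mmio_cache(vio, gla, dir);
>>     if ( dir == IOREQ_READ && offset < cache->size )
>>     {
>>         /* Completed on an earlier pass - replay the recorded result. */
>>         memcpy(buffer, &cache->buffer[offset], size);
>>         return X86EMUL_OKAY;
>>     }
>>     /* Otherwise actually issue (or complete) the IOREQ. */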
> 
> AIUI the problem is not one of re-issuing or caching here. The problem
> is that something has changed from MMIO to memory within the scope of a
> single emulation, meaning that we take the wrong path in linear_read()
> or linear_write(). The crucial thing is that if we started an MMIO for
> an access, then we need to make sure we treat re-issuing of the same
> access as MMIO, even if RAM has magically appeared in the P2M in the
> meantime.
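> 
> One possible shape for that (untested sketch; mmio_gla and mmio_access
> are the existing fields in struct hvm_vcpu_io): have
> linear_{read,write}() check, ahead of consulting the P2M, whether this
> very access already went the MMIO route, e.g. for the read side
> 
>     if ( vio->mmio_access.read_access &&
>          vio->mmio_gla == (addr & PAGE_MASK) )
>         return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
>                                         hvmemul_ctxt, 1 /* GLA known */);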

I agree. I've merely tried to explain to Igor what the overall picture
is, to make clear all the cases that need to continue to work with
whatever change might be considered.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

