
Re: [Xen-devel] [PATCH v4 07/17] x86/hvm: add length to mmio check op



> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> Sent: 25 June 2015 14:47
> To: Paul Durrant; Jan Beulich
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v4 07/17] x86/hvm: add length to mmio check op
> 
> On 25/06/15 14:38, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> >> Sent: 25 June 2015 14:38
> >> To: Paul Durrant; Jan Beulich
> >> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> >> Subject: Re: [PATCH v4 07/17] x86/hvm: add length to mmio check op
> >>
> >> On 25/06/15 14:36, Paul Durrant wrote:
> >>>> -----Original Message-----
> >>>> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> >>>> Sent: 25 June 2015 14:34
> >>>> To: Jan Beulich
> >>>> Cc: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> >>>> Subject: Re: [PATCH v4 07/17] x86/hvm: add length to mmio check op
> >>>>
> >>>> On 25/06/15 13:46, Jan Beulich wrote:
> >>>>>>>> On 25.06.15 at 14:21, <andrew.cooper3@xxxxxxxxxx> wrote:
> >>>>>> On 24/06/15 12:24, Paul Durrant wrote:
> >>>>>>> When memory mapped I/O is range checked by internal handlers,
> >>>>>>> the length of the access should be taken into account.
> >>>>>>>
> >>>>>>> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> >>>>>>> Cc: Keir Fraser <keir@xxxxxxx>
> >>>>>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> >>>>>>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> >>>>>>>
> >>>>>> For what purpose?  The length of the access doesn't affect which
> >>>>>> handler should accept the IO.
> >>>>>>
> >>>>>> This length check now causes an MMIO handler to not claim an
> >>>>>> access which straddles the upper boundary.
> >>>>>>
> >>>>>> It is probably fine to terminate such an access early, but it isn't
> >>>>>> fine to pass such a straddled access to the default ioreq server.
> >>>>> No, without involving the length in the check we can end up with
> >>>>> check() saying "Yes, mine" but read() or write() saying "Not me".
> >>>>> What I would agree with is for the generic handler to split the
> >>>>> access if the first byte fits, but the final byte doesn't.
> >>>> I discussed this with Paul over lunch.  I had not considered how IO gets
> >>>> forwarded to the device model for shared implementations.
> >>>>
> >>>> Is it reasonable to split a straddled access and direct the halves at
> >>>> different handlers? This is not in line with how other hardware behaves
> >>>> (PCIe will reject any straddled access).  Furthermore, given small MMIO
> >>>> regions and larger registers, there is no guarantee that a single split
> >>>> will suffice.
> >>>>
> >>>> I see in the other ongoing thread that a domain_crash() is deemed ok
> >>>> for now, which is fine by me.
> >>>>
> >>> I think that also allows me to simplify the patch since I don't have to
> >>> modify the mmio_check op any more. I simply call it once for the first
> >>> byte of the access and, if it accepts, verify that it also accepts the
> >>> last byte of the access.
> >>
> >> At that point, I would say it would be easier to modify the claim check
> >> to return "yes/straddled/no" rather than calling it twice.
> > That's excessive code churn, I think. The check functions are generally
> > cheap and the second call is only made if the first accepts.
> 
> You are already churning everything anyway by inserting an extra
> parameter.  I do think it would make the logic cleaner and easier to
> follow (which IMO takes precedence over churn).
> 

No, my point was that by making the second call I don't need to add the extra 
parameter. Wait for the revised patch... it's about 6 lines long now ;-)
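
In sketch form, the check looks something like the following (illustrative
only -- the type and function names below are stand-ins rather than the
real hypervisor structures; the actual patch hooks into the existing MMIO
handler plumbing and domain_crash()):

  #include <stdbool.h>
  #include <stdint.h>

  /* Stand-in types for illustration. */
  typedef uint64_t paddr_t;

  struct mmio_ops {
      bool (*check)(paddr_t addr);   /* "is this single byte mine?" */
  };

  struct mmio_access {
      paddr_t addr;
      unsigned int size;             /* access length in bytes */
  };

  static void crash_domain(void) { } /* stand-in for domain_crash() */

  bool mmio_accept(const struct mmio_ops *ops,
                   const struct mmio_access *p)
  {
      paddr_t first = p->addr;
      paddr_t last  = p->addr + p->size - 1;

      /* The check op is called for the first byte of the access... */
      if ( !ops->check(first) )
          return false;              /* not claimed by this handler */

      /*
       * ...and, if that accepts, verify it also accepts the last byte.
       * An access straddling the handler's upper boundary is treated as
       * fatal for now, per the other thread.
       */
      if ( p->size > 1 && !ops->check(last) )
          crash_domain();

      return true;
  }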

  Paul

> ~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

