
Re: [Xen-devel] [PATCH 13/25] argo: implement the register op



Adding the introspection guys.

On Fri, Jan 04, 2019 at 08:47:04AM -0700, Jan Beulich wrote:
> >>> On 04.01.19 at 16:35, <roger.pau@xxxxxxxxxx> wrote:
> > On Fri, Jan 04, 2019 at 06:22:19AM -0700, Jan Beulich wrote:
> >> >>> On 04.01.19 at 09:57, <roger.pau@xxxxxxxxxx> wrote:
> >> > On Fri, Dec 21, 2018 at 03:05:03PM -0800, Christopher Clark wrote:
> >> >> On Thu, Dec 20, 2018 at 4:52 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> 
> >> >> wrote:
> >> >> >
> >> >> > On Wed, Dec 19, 2018 at 09:41:59PM -0800, Christopher Clark wrote:
> >> >> > > On Wed, Dec 12, 2018 at 8:48 AM Roger Pau Monné 
> >> >> > > <roger.pau@xxxxxxxxxx> 
> > wrote:
> >> >> > > >
> >> >> > > > On Fri, Nov 30, 2018 at 05:32:52PM -0800, Christopher Clark wrote:
> >> >> > Then I wonder why you need such a check at all if the code
> >> >> > can handle such cases, especially since the check itself is
> >> >> > racy.
> >> >> 
> >> >> OK, so at the root of the question here is: does it matter what the p2m
> >> >> type of the memory is at these points:
> >> >> 
> >> >> 1) when the gfn is translated to mfn, at the time of ring registration
> >> > 
> >> > This is the important check, because that's where you should take a
> >> > reference to the page. In this case you should check that the page is
> >> > of ram_rw type.
> >> > 
> >> >> 2) when the hypervisor writes into guest memory:
> >> >>     - where the tx_ptr index is initialized in the register op
> >> >>     - where ringbuf data is written in sendv
> >> >>     - where ring description data is written in notify
> >> > 
> >> > As long as you keep a reference to the pages that are part of
> >> > the ring you don't need to do any checks when writing/reading
> >> > from them. If the guest messes up its p2m and changes the
> >> > gfn -> mfn mappings for pages that are part of the ring, that's
> >> > the guest's problem: the hypervisor still holds a reference to
> >> > those pages, so they won't be reused.
> >> 
> >> For use cases like introspection this may not be fully correct,
> >> but it may also be that my understanding there isn't fully
> >> correct. If introspection agents care about _any_ writes to
> >> a page, hypervisor ones (which in most cases are merely
> >> writes on behalf of the guest) might matter as well. I think
> >> to decide whether page accesses need to be accompanied
> >> by any checks (and if so, which ones) one needs to
> >> - establish what p2m type transitions are possible for a
> >>   given page,
> >> - verify what restrictions may occur "behind the back" of
> >>   the entity wanting to do the accesses,
> >> - explore whether doing the extra checking at p2m type
> >>   change time wouldn't be better than at the time of access.
> > 
> > Maybe this is use-case is different, but how does introspection handle
> > accesses to the shared info page or the runstate info for example?
> > 
> > I would consider argo to be the same in this regard.
> 
> Not exactly: The shared info page is special in any event. For
> runstate info (and alike - there's also struct vcpu_time_info)
> I'd question correctness of the current handling. If that's
> wrong already, I'd prefer if the issue wasn't spread.

There are also grants, which when used together with another guest on
the same host could allow bypassing introspection AFAICT? (unless
some policy is applied that limits grant sharing to trusted
domains)

TBH I'm not sure how to handle hypervisor accesses with
introspection.  My knowledge of introspection is fairly limited, but
AFAIU it pauses the guest and sends a notification to an in-guest
agent. I'm not sure this is applicable to hypervisor writes, since
it's not possible to pause hypervisor execution and wait for a
response from a guest agent.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
