
Re: [Xen-devel] [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM





On Mon, Sep 29, 2014 at 3:37 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
On Mon, 2014-09-29 at 14:17 +0200, Tamas K Lengyel wrote:
> On Mon, Sep 29, 2014 at 1:36 PM, Tamas K Lengyel
> <tklengyel@xxxxxxxxxxxxx> wrote:
>         The ARM virtualization extension provides 2-stage paging, a
>         similar mechanisms
>         to Intel's EPT, which can be used to trace the memory accesses
>         performed by
>         the guest systems. This series sets up the necessary
>         infrastructure in the ARM code
>         to deliver the event on R/W/X traps. Finally, we turn on the
>         compilation of
>         mem_access and mem_event on ARM and perform the necessary
>         changes to the tools side.
>
>
> While the series is marked for-4.5, I certainly don't mind having
> these remaining parts delayed till 4.6 as nothing depends on this
> feature being in 4.5. It would make some security researchers' lives a
> lot easier if they could install a stable Xen release with this
> feature already in place. Beyond that I don't think there is an
> audience for it (yet). IMHO it's close to being done, but if the
> general feeling is that it hasn't been reviewed enough and there is
> some hesitance, I'm OK with a couple more rounds of reviews.

I think we've got most (all?) of the generic/x86 code refactoring in and
we could consider taking some of the obvious ARM refactoring (e.g. Add
"p2m_set_permission and p2m_shatter_page helpers.")

That would be most welcome so I don't have to carry those patches.
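For reference, a stage-2 permission helper of the kind that patch title names might look roughly like the sketch below; the PTE layout, type names and field names here are assumptions for illustration, not the actual Xen definitions.

/*
 * Illustrative sketch only: apply a requested access type to a
 * stage-2 page table entry.  lpae_pte_t and its bit-fields are
 * assumed names, not Xen's real structures.
 */
typedef struct {
    unsigned long read  : 1;   /* stage-2 read permission */
    unsigned long write : 1;   /* stage-2 write permission */
    unsigned long xn    : 1;   /* execute-never */
} lpae_pte_t;

typedef enum {
    access_n,     /* no access */
    access_r,     /* read only */
    access_rw,    /* read/write */
    access_rwx,   /* read/write/execute */
} access_t;

void set_permission(lpae_pte_t *e, access_t a)
{
    switch ( a )
    {
    case access_rwx:
        e->read = 1; e->write = 1; e->xn = 0;
        break;
    case access_rw:
        e->read = 1; e->write = 1; e->xn = 1;
        break;
    case access_r:
        e->read = 1; e->write = 0; e->xn = 1;
        break;
    case access_n:
    default:
        e->read = 0; e->write = 0; e->xn = 1;
        break;
    }
}

A shatter helper would similarly just replace a superpage mapping with a table of identically-typed small-page entries, so both are fairly self-contained and easy to pick up independently of the rest of the series.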
 
but that the bulk of
the functionality is now 4.6 material.

I'm sorry to be reaching this conclusion after previously being fairly
confident, but the change to the copy to/from guest path in the previous
round made me take a step back and realise that I had gotten caught up
in all the excitement/rush to get things in. Looking at it with a more
level head now, it is touching some pretty core code, with potentially
unknown performance implications, and we are quite a way past the
feature freeze already.

The patch you reference was newly added in the previous round and has been refactored in this round to avoid adding overhead. If your feeling is that there might be other similar cases and you want to delay so you have more time to look at it, just to be sure, that's perfectly understandable; but IMHO in this version there is no indication that we are adding any unreasonable overhead.
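To illustrate the kind of structure that avoids adding overhead on the copy to/from guest path (a sketch under assumed names, not the actual patch): the mem_access handling only runs once the normal translation has already failed, so a guest without a listener attached never leaves the fast path.

/* Sketch only: all names below are assumptions for illustration. */
struct vcpu;
struct page_info;

struct page_info *stage2_lookup_and_get_page(struct vcpu *v,
                                             unsigned long gva,
                                             unsigned int flags);
struct page_info *mem_access_check_and_get_page(struct vcpu *v,
                                                unsigned long gva,
                                                unsigned int flags);

struct page_info *lookup_guest_page(struct vcpu *v, unsigned long gva,
                                    unsigned int flags)
{
    /* Fast path: the normal stage-2 translation, unchanged. */
    struct page_info *page = stage2_lookup_and_get_page(v, gva, flags);

    if ( page != NULL )
        return page;

    /*
     * Slow path only: the translation failed, possibly because a
     * mem_access listener removed the permission.  Let the listener
     * decide and retry.  A guest that never enables mem_access only
     * ever pays for the fast path above.
     */
    return mem_access_check_and_get_page(v, gva, flags);
}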
 

Having got the major bits of refactoring in should make this series far
easier to rebase once the 4.6 dev cycle opens, and/or for folks who want
to make use of this stuff to apply it locally.

Certainly and that is fair.
 

Sorry again for not reaching this conclusion sooner.

> It has also been proposed that a proper overhead analysis be
> performed on this series, to show it does not add too much overhead
> for non-mem_access users. What that entails is unclear to me, and
> IMHO it's not an easy task considering all the corner cases and
> use cases that would need to be covered to be comprehensive. It has
> been my goal during this series to minimize the overhead added and
> to be on par with the x86 side, but I'm afraid a more in-depth
> analysis is not something I can contribute. Of course, if specific
> instances of avoidable overhead are pointed out in the code, I'm
> happy to address them.

What we would need is some evidence that there is no regression when
xenaccess is not in use, for at least some common benchmarks, e.g.
hackbench and kernbench. Not asking for every corner case etc., just
some basic stuff. Stefano, do you have any thoughts on other
small/simple benchmarks?

Ian.

I don't see how those benchmarks would be meaningful for this series. During normal operation, the only overhead for the domain is in the trap handlers, which check a boolean flag indicating whether mem_access is in use when a permission fault happens in the second-stage translation, and I have never observed such a fault happening during my tests. So those benchmarks don't really exercise any paths that mem_access touches.
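For concreteness, the flag check described above has roughly this shape (a minimal sketch with assumed names, not the actual handler):

#include <stdbool.h>

/*
 * Minimal sketch, assumed names throughout: when mem_access is not in
 * use, the only extra work on the stage-2 permission fault path is a
 * single flag test.
 */
struct p2m_state {
    bool mem_access_enabled;   /* set only while a listener is attached */
};

/* Pause the vCPU and put a request on the mem_event ring (assumed). */
bool deliver_mem_access_event(struct p2m_state *p2m, unsigned long gpa,
                              bool write);

bool handle_stage2_permission_fault(struct p2m_state *p2m,
                                    unsigned long gpa, bool write)
{
    /* mem_access not in use: fall through to the existing fault
     * handling; the flag test is the only cost added here. */
    if ( !p2m->mem_access_enabled )
        return false;

    /* mem_access in use: look up the stored access rights and, if the
     * listener asked to be notified about this kind of access, send it
     * an event. */
    return deliver_mem_access_event(p2m, gpa, write);
}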

Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

