Re: [Xen-devel] [PATCH for-4.5 v11 0/9] Mem_event and mem_access for ARM
Please use plain text for emails.

On Mon, 29 Sep 2014, Tamas K Lengyel wrote:
> On Mon, Sep 29, 2014 at 3:37 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> > > It has also been proposed that a proper analysis of overhead be
> > > performed on this series to show it does not add too much overhead
> > > for non-mem_access users. What that entails is unclear to me, and
> > > IMHO it's not an easy task considering all the corner cases and
> > > use cases that would need to be covered to be comprehensive. It has
> > > been my goal during this series to minimize the overhead added and
> > > to be on par with the x86 side, but I'm afraid a more in-depth
> > > analysis is not something I can contribute. Of course, if specific
> > > instances of avoidable overhead are pointed out in the code, I'm
> > > happy to address them.
> >
> > What we would need is some evidence that there is no regression when
> > xenaccess is not in use, for at least some common benchmarks, e.g.
> > hackbench and kernbench. Not asking for every corner case etc, just
> > some basic stuff. Stefano, do you have any thoughts on other
> > small/simple benchmarks?
> >
> > Ian.
>
> I don't see how those benchmarks would be meaningful for this series.
> During normal operations, the only overhead for the domain would be in
> the trap handlers checking the boolean flag for whether mem_access is
> in use when a permission fault happens in the second-stage translation,
> which I have never observed happening during my tests. So those
> benchmarks don't really exercise any paths that mem_access touches.

That is why you should be pretty confident that the benchmarks won't be a
problem for you :-) FYI, it is pretty common to ask for benchmarks for
series that change code on the hot path.
I would suggest running kernbench
(http://ck.kolivas.org/apps/kernbench/kernbench-0.50/kernbench) three
times on a VM without your series, three times on a VM with your series
without using mem_access, and three times on a VM with your series while
using mem_access (the last run is not strictly required but would be
useful to know). Then send out the results to the list together with your
configuration (hardware, amount of memory and number of physical cpus,
amount of memory and number of vcpus assigned to the VM, software
versions, etc), maybe in reply to your 0/9 patch.

If you don't own any off-the-shelf hardware and you cannot disclose
benchmark figures for the hardware you have, then please send out the
results in terms of overhead: something like "my series makes kernbench
1% slower without using mem_access and 10% slower using mem_access". I
would strongly prefer to have the full results though.
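For the fallback reporting format described above (percentage overhead
rather than raw figures), the arithmetic is straightforward; a minimal
sketch, where the timing values are illustrative placeholders rather than
real measurements from this series:

```shell
# Compare the mean elapsed time of 3 kernbench runs on the unpatched VM
# against the patched VM. Both values below are made-up examples, not
# real measurements.
baseline=154.2   # mean elapsed seconds, VM without the series (example)
patched=155.8    # mean elapsed seconds, VM with the series, mem_access unused (example)

# awk does the floating-point arithmetic: ((patched - baseline) / baseline) * 100
overhead=$(awk -v b="$baseline" -v p="$patched" \
    'BEGIN { printf "%.1f", (p - b) / b * 100 }')
echo "kernbench overhead without mem_access: ${overhead}%"
```

Repeating the same calculation with the mem_access-enabled timings gives
the second figure for a "my series makes kernbench X% slower" summary.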