
Re: [PATCH v1 0/7] Implement support for external IPT monitoring



On Wed, Jun 17, 2020 at 11:23 AM Andrew Cooper
<andrew.cooper3@xxxxxxxxxx> wrote:
>
> On 17/06/2020 17:27, Tamas K Lengyel wrote:
> >>>>>> What semantics do you want for the buffer becoming full?  Given that
> >>>>>> debugging/tracing is the goal, I presume "pause vcpu on full" is the
> >>>>>> preferred behaviour, rather than drop packets on full?
> >>>>>>
> >>>>> Right now this is a ring-style buffer and when it would become full it
> >>>>> would simply wrap and overwrite the old data.
> >>>> How does the consumer spot that the data has wrapped?  What happens if
> >>>> data starts getting logged, but no one is listening?  What happens if the
> >>>> consumer exits/crashes/etc and stops listening as a consequence?
> >>>>
> >>>> It's fine to simply state what will happen, and possibly even "don't do
> >>>> that then", but the corner cases do at least need thinking about.
> >>> AFAIU the predominant use-case is in conjunction with VMI events,
> >>> where you want to be able to see the trace leading up to a
> >>> particular vmexit. So if the buffer wraps in-between events and
> >>> data is lost, that's not really a concern.
> >> That's all fine.  I imagine the output here is voluminous, and needs
> >> help being cut down as much as possible.
> >>
> >> On a tangent, I presume you'd like to include VM-fork eventually, which
> >> ought to include copying the trace buffer on fork?
> > I would eventually like to use it to reconstruct the branch history so
> > we can update AFL's coverage map with that instead of having to do the
> > current breakpoint-singlestep dance. But for that I would only care
> > about the trace starting after the fork, so copying the parent's PT
> > buffer is not needed. We'll also probably only use PT if the branch
> > history is larger than what LBR can hold. I asked Michal to name the
> > hypercall interface "vmtrace" for this reason so we can add other
> > stuff like LBR later using the same interface (which I already
> > implemented in https://github.com/tklengyel/xen/commits/lbr).
>
> I was wondering when someone was going to want LBR data like this.
> Can't you borrow the LBR-stitching tricks from Linux's perf to recover
> the call trace even when it's deeper than the LBR stack?

TBH I only spent about an hour putting it together, so I haven't
investigated the topic much. But thanks for the tip; this is the first
I've heard of this LBR-stitching trick ;)
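For the archive, the rough idea as I understand it (an untested sketch
in the style of Linux perf's --stitch-lbr; all names and types below
are made up for illustration, not perf's actual code): keep the
previous sample's LBR entries around, and when the current sample's
stack is full, recover the outer frames that fell off by matching the
overlap with the previous sample.

/* Hypothetical sketch of LBR stitching -- illustrative only. */
#include <stddef.h>
#include <stdint.h>

#define LBR_DEPTH 32   /* hardware LBR stack depth (assumed) */
#define MAX_STACK 256  /* arbitrary cap on the stitched stack */

struct lbr_entry { uint64_t from, to; };

/* 'prev' and 'cur' are LBR call stacks, newest entry first.  Returns
 * the stitched depth written to 'out'. */
static size_t stitch_lbr(const struct lbr_entry *prev, size_t prev_n,
                         const struct lbr_entry *cur, size_t cur_n,
                         struct lbr_entry *out)
{
    size_t n = 0, i, j;

    for ( i = 0; i < cur_n && n < MAX_STACK; i++ )
        out[n++] = cur[i];

    /* Only a full LBR stack can have lost its outermost callers. */
    if ( cur_n < LBR_DEPTH )
        return n;

    /* Find the oldest entry of 'cur' inside 'prev'; everything older
     * than it in 'prev' is the missing outer call context. */
    for ( j = 0; j < prev_n; j++ )
    {
        if ( prev[j].from == cur[cur_n - 1].from &&
             prev[j].to == cur[cur_n - 1].to )
        {
            for ( j++; j < prev_n && n < MAX_STACK; j++ )
                out[n++] = prev[j];
            break;
        }
    }

    return n;
}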

>
> What about PEBS?  ISTR there is a fairly complicated matrix of which
> features work in combination.

There is also BTS. I would assume it would take some experimentation
to figure out what works, when, and in what combination. Right now I
have no plans to do that experimentation or to add support for
additional tracers.
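(For reference, the presence bits at least are easy to probe -- a
quick sketch assuming GCC/Clang on x86; BTS/PEBS enumeration lives in
an MSR and needs ring 0, so it's only noted in a comment.  Which
combinations actually work together still needs the empirical testing
mentioned above.)

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 7, subleaf 0: Intel PT is EBX[25], arch LBR EDX[19]. */
    if ( __get_cpuid_count(0x7, 0, &eax, &ebx, &ecx, &edx) )
    {
        printf("Intel PT: %s\n", (ebx & (1u << 25)) ? "yes" : "no");
        printf("Arch LBR: %s\n", (edx & (1u << 19)) ? "yes" : "no");
    }

    /* BTS/PEBS are advertised the other way around: IA32_MISC_ENABLE
     * (MSR 0x1a0) bit 11 = "BTS unavailable", bit 12 = "PEBS
     * unavailable".  Reading MSRs needs ring 0 (e.g. the msr kernel
     * module), so it is omitted here. */
    return 0;
}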

>
>
> As for naming, we should definitely have something fairly generic.
> AFAICT, it would be applicable to ARM's CoreSight facilities as well.

IMHO XEN_DOMCTL_vmtrace would be a good name for controlling these features.
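Something along these lines, purely as a strawman (the field names are
made up for illustration; this is not what's in Michal's series):

/* Strawman: one domctl multiplexing different tracer backends. */
struct xen_domctl_vmtrace_op {
    uint32_t cmd;      /* enable / disable / output position / ... */
    uint32_t backend;  /* IPT today, LBR/BTS/CoreSight later */
    uint32_t vcpu;     /* vcpu whose tracer is being controlled */
    uint32_t pad;
    uint64_t size;     /* trace buffer size, backend-specific */
};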

Tamas



 

