
Re: [Xen-devel] LTTng Xen port : finally in a repository near you



* INAKOSHI Hiroya (inakoshi.hiroya@xxxxxxxxxxxxxx) wrote:
> Mathieu Desnoyers wrote:
> > * INAKOSHI Hiroya (inakoshi.hiroya@xxxxxxxxxxxxxx) wrote:
> >> Hi Mathieu,
> >>
> >> thanks for your reply.  I understand your opinion very well, but my
> >> concern is that cpu ids on a guest OS are different from those on Xen
> >> because they are virtualized.  The number of vcpus in a guest OS is also
> >> different from the number of pcpus, as you mentioned.  I wondered whether
> >> the two traces could be merged directly.  If vcpu ids were translated to
> >> pcpu ids when writing records into the trace buffer in Xen, this concern
> >> would be solved in a natural way.
> >>
> > 
> > When you are executing code in dom0 or domUs, how do you plan to get the
> > physical CPU number on which the tracing is done ?
> 
> I am considering an approach in which dom0 or domUs issue hypercalls to
> write records into Xen's trace buffer.  In this setting, the vcpu info is
> available on the Xen kernel stack and the pcpu is the one performing the
> hypercall, so I can resolve the mapping between vcpu id and pcpu id.
> 
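(Just to make sure we mean the same thing, here is a rough sketch of such a
hypercall handler inside Xen.  do_guest_trace_write() and xen_trace_write()
are hypothetical names, and copying the payload from guest memory is omitted;
smp_processor_id() and current are the usual Xen primitives.)

/* Rough sketch only: inside the hypercall, Xen knows both the calling
 * vcpu and the physical cpu it currently runs on, so the record can be
 * tagged with the vcpu<->pcpu mapping at trace time. */
long do_guest_trace_write(const void *guest_payload, unsigned long len)
{
    unsigned int pcpu = smp_processor_id();          /* physical cpu    */
    unsigned int vcpu = current->vcpu_id;            /* calling vcpu    */
    domid_t      dom  = current->domain->domain_id;  /* calling domain  */

    /* hypothetical helper: copy the record, tagged with (dom, vcpu, pcpu),
     * into Xen's per-cpu trace buffer */
    return xen_trace_write(pcpu, dom, vcpu, guest_payload, len);
}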

The performance hit of going through a hypercall for each traced event
would be too high. Typically, changing ring level involves executing an
interrupt routine, which takes a few thousand nanoseconds. My tracing
probes run within the traced ring in about 270ns (as tested on a
Pentium 4 3GHz).
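
To give an idea of the fast path, here is a very simplified sketch of what a
probe does while staying in the traced ring (illustrative names only, not the
actual LTTng code, which uses a lockless per-cpu reserve/commit scheme rather
than this naive offset update):

#include <stddef.h>
#include <string.h>

/* Simplified sketch of an in-ring probe: the event goes straight into a
 * per-cpu buffer that the consumer (lttd) reads later, so no ring
 * transition is paid per event. */
struct trace_buf {
    unsigned char *data;      /* per-cpu buffer, shared with the consumer */
    size_t         size;      /* buffer size in bytes                     */
    size_t         write_off; /* next write offset (atomic in real code)  */
};

static void trace_write(struct trace_buf *buf, const void *event, size_t len)
{
    if (len > buf->size)
        return;                        /* oversized event: drop it         */
    if (buf->write_off + len > buf->size)
        buf->write_off = 0;            /* wrap; subbuffer handling omitted */
    memcpy(buf->data + buf->write_off, event, len);
    buf->write_off += len;             /* real code: lockless commit       */
}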

Mathieu

> Regards,
> Hiroya
> 
> > 
> >> Mathieu Desnoyers wrote:
> >>> * INAKOSHI Hiroya (inakoshi.hiroya@xxxxxxxxxxxxxx) wrote:
> >>>> Hi Mathieu,
> >>>>
> >>>> I am interested in LTTng-xen because I thought it would be nice if I
> >>>> could get traces on both xen and guest linux at the same time.  I
> >>>> reviewed LTTng-xen and found that
> >>>>
> >>>> * LTTng and LTTng-xen have a quite similar structure,
> >>>> * a trace buffer resides in the hypervisor for LTTng-xen,
> >>>> * it is currently impossible to get traces from guest linux because
> >>>> there is no LTTng for the 2.6.18-xen kernel, as you mentioned.
> >>>>
> >>>> I had coarsely ported LTTng to 2.6.18-xen, though it is only for
> >>>> i386.  Now I can get traces on xen and guest linux simultaneously, even
> >>>> though they put records in different trace buffers.
> >>> Hi Inakoshi,
> >>>
> >>> We did the same kind of coarse 2.6.18 port at our lab internally to get
> >>> traces from both Linux and Xen. The fact that the traces are recorded in
> >>> different buffers does not matter: those trace files can be copied into
> >>> the same trace directory so they can be parsed together by LTTV (traces
> >>> coming from dom0, domUs and the hypervisor). They are synchronized by
> >>> using the TSCs (hopefully, you will configure your system to get a
> >>> reliable TSC on AMD and older Intels; see the ltt-test-tsc kernel module
> >>> in recent LTTng versions and the ltt.polymtl.ca website for info on that
> >>> matter).
> >>>
> >>>
> >>>> Then I thought that
> >>>> it would be more useful if they put records in xen's trace buffer and I
> >>>> could analyze events
> >>> LTTV merges the information from all the valid trace files that appear
> >>> within the trace directory, so the analysis can be done on data coming
> >>> from userspace, kernels and the hypervisor.
> >>>
> >>>> from xen and linux guests with a single lttd and
> >>>> lttctl running on Domain-0.  Do you have an opinion about that?
> >>>>
> >>> lttctl-xen and lttd-xen, although quite similar to lttd and lttctl,
> >>> use hypercalls to get the data. The standard lttctl/lttd uses debugfs
> >>> files as a hook to the trace buffers.
> >>>
> >>> As a distribution matter, I prefer to leave both separate for now,
> >>> because lttctl-xen and lttd-xen are highly tied to the Xen tree.
> >>>
> >>> Also, merging the information within the buffers between Xen and Dom0 is
> >>> not such a great idea: The Hypervisor and dom0 can have a different
> >>> number of CPUs (Xen : real CPUs, dom0: vcpus). Since I use per-cpu
> >>> buffers, it does not fit.
> >>>
> >>> Also, I don't want dom0 to overwrite data from the Xen buffers easily:
> >>> it's better if we keep some protection between dom0 and the Hypervisor.
> >>>
> >>> Thanks for looking into this, don't hesitate to ask further questions,
> >>>
> >>> Mathieu
> >>>
> >>>> Regards,
> >>>> Hiroya
> >>>>
> >>>>
> >>>> Mathieu Desnoyers wrote:
> >>>>> Hello,
> >>>>>
> >>>>> I made a working version of the LTTng tracer for xen-unstable for x86.
> >>>>> Here is the pointer to my repository so you can try it out :
> >>>>>
> >>>>> hg clone http://ltt.polymtl.ca/cgi-bin/hgweb.cgi xen-unstable-lttng.hg
> >>>>>
> >>>>> Basic usage :
> >>>>>
> >>>>> (see lttctl-xen -h)
> >>>>>
> >>>>> lttctl-xen -c
> >>>>>
> >>>>> (in a different console)
> >>>>> lttd-xen -t /tmp/xentrace1
> >>>>>
> >>>>> (in the 1st console)
> >>>>> lttctl-xen -s
> >>>>>
> >>>>> (tracing is active)
> >>>>>
> >>>>> lttctl-xen -q
> >>>>> lttctl-xen -r
> >>>>>
> >>>>> lttd-xen should automatically quit after writing the last buffers as
> >>>>> soon as lttctl-xen -r is issued.
> >>>>>
> >>>>> Then, you must copy the XML facilities :
> >>>>>
> >>>>> (see http://ltt.polymtl.ca > QUICKSTART for how to install on your
> >>>>> system the ltt-control package, which contains the XML facilities)
> >>>>>
> >>>>> lttctl-xen -e -t /tmp/xentrace1
> >>>>>
> >>>>> View in the visualiser : (see the QUICKSTART to see how to install it)
> >>>>>
> >>>>> lttv -m textDump -t /tmp/xentrace1
> >>>>>
> >>>>> (not tested yet) : to visualize a dom0 trace with the xen hypervisor
> >>>>> information, one would have to collect the dom0 kernel trace and the Xen
> >>>>> trace and open them together with :
> >>>>> lttv -m textDump -t /tmp/xentrace1 -t /tmp/dom0trace
> >>>>>
> >>>>> The current Linux kernel instrumentation is for 2.6.20. A backport to
> >>>>> 2.6.18 might be needed if there is no proper Xen support in 2.6.20 (I
> >>>>> have not followed the recent developments).
> >>>>>
> >>>>>
> >>>>> Currently broken/missing :
> >>>>>
> >>>>> - Resources are not freed when the trace channels are destroyed. So you
> >>>>>   basically have to reboot between taking different traces.
> >>>>> - My code in the hypervisor complains to the console that subbuffers
> >>>>>   have not been fully read when the trace channels are destroyed. The
> >>>>>   error printing is just done too fast : lttd-xen is still there and
> >>>>>   reading the buffers at that point. It will get fixed with proper
> >>>>>   resource usage tracking of both Xen and lttd-xen (same as the first
> >>>>>   point above).
> >>>>> - x86_64 not tested, powerpc local.h and ltt.h missing (should be ripped
> >>>>>   from my Linux kernel LTTng).
> >>>>>
> >>>>>
> >>>>> Cheers,
> >>>>>
> >>>>> Mathieu
> >>>>>
> >>>>>
> >>>>>
> >>>>> * Mathieu Desnoyers (compudj@xxxxxxxxxxxxxxxxxx) wrote:
> >>>>>> Hi,
> >>>>>>
> >>>>>> My name is Mathieu Desnoyers, I am the current maintainer of the
> >>>>>> Linux Trace Toolkit project, known as LTTng. This is a tracer for the
> >>>>>> 2.6 Linux kernels oriented towards high performance and real-time
> >>>>>> applications.
> >>>>>>
> >>>>>> I have read your tracing thread and I am surprised to see how many of
> >>>>>> the things you would like in a tracer are already implemented and
> >>>>>> tested in LTTng. I am currently porting my tracer to Xen, so I think
> >>>>>> it might be useful for you to know what it provides. My goal is to
> >>>>>> avoid duplicating the effort and save everyone some time.
> >>>>>>
> >>>>>> Here are some key features of LTTng :
> >>>>>>
> >>>>>> Architecture-independent data types
> >>>>>> Extensible event records
> >>>>>> Self-describing traces
> >>>>>> Variable size records
> >>>>>> Fast (200 ns per event record)
> >>>>>> Highly reentrant
> >>>>>> Does not disable interrupts
> >>>>>> Does not take lock on the critical path
> >>>>>> Supports NMI tracing
> >>>>>> Analysis/visualization tool (LTTV)
> >>>>>>
> >>>>>> Looking at the integration of the existing LTTng implementation into
> >>>>>> Xen, I came up with these two points for my Christmas wishlist :
> >>>>>>
> >>>>>> Additional functionalities that would be nice to have in Xen :
> >>>>>>
> >>>>>> - RCU-style updates : would allow freeing the buffers without impact
> >>>>>>   on tracing.
> >>>>>>     * I guess I could currently use :
> >>>>>>       for_each_domain( d )
> >>>>>>         for_each_vcpu( d, v )
> >>>>>>           vcpu_sleep_sync(v);
> >>>>>>       I think it will have a huge impact on the system, but it would
> >>>>>>       only be performed before freeing the trace buffers.
> >>>>>>
> >>>>>> - Polling for data in Xen from a dom0 process.
> >>>>>>   Xentrace currently polls the hypervisor every 100ms to see if there
> >>>>>>   is data that needs to be consumed. Instead of active polling, it
> >>>>>>   would be nice to use the dom0 OS capability to put a process to
> >>>>>>   sleep while it waits for a resource. It would imply creating a
> >>>>>>   module, loaded in dom0, that would wait for a specific virq coming
> >>>>>>   from the Hypervisor to wake up such processes. We could think of
> >>>>>>   exporting a complete poll() interface through sysfs or procfs that
> >>>>>>   would be a directory filled with the resources exported from the
> >>>>>>   Hypervisor to dom0 (which could include waiting for a resource to be
> >>>>>>   freed, useful when shutting down a domU instead of busy looping). It
> >>>>>>   would help dom0 schedule other processes while a process is waiting
> >>>>>>   for the Hypervisor.
> >>>>>>
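(As a rough illustration of that second point, here is what such a dom0
module could look like.  This is a sketch only: the names, the use of
VIRQ_TBUF as the notification virq, and the omitted file registration and
module cleanup are assumptions, not an existing interface;
bind_virq_to_irqhandler() is the usual helper from the Linux Xen
event-channel code.)

/* Sketch of a dom0 module whose poll() hook sleeps until the hypervisor
 * raises a virq saying trace data is ready, instead of the daemon
 * polling every 100ms.  Registering the file (procfs/char device) and
 * clearing the ready flag when the data is consumed are omitted. */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/interrupt.h>
#include <linux/wait.h>
#include <xen/events.h>            /* bind_virq_to_irqhandler()        */
#include <xen/interface/xen.h>     /* VIRQ_* numbers                   */

static DECLARE_WAIT_QUEUE_HEAD(trace_wq);
static int trace_data_ready;

/* Runs when the hypervisor sends the virq: wake up sleeping readers. */
static irqreturn_t trace_virq_handler(int irq, void *dev_id)
{
    trace_data_ready = 1;
    wake_up_interruptible(&trace_wq);
    return IRQ_HANDLED;
}

/* poll() hook: the dom0 daemon sleeps here until data is available. */
static unsigned int trace_poll(struct file *file, poll_table *wait)
{
    poll_wait(file, &trace_wq, wait);
    return trace_data_ready ? (POLLIN | POLLRDNORM) : 0;
}

static const struct file_operations trace_fops = {
    .owner = THIS_MODULE,
    .poll  = trace_poll,
};

static int __init trace_poll_init(void)
{
    /* VIRQ_TBUF is only a placeholder notification virq in this sketch. */
    int irq = bind_virq_to_irqhandler(VIRQ_TBUF, 0, trace_virq_handler,
                                      0, "ltt-xen-poll", NULL);
    return irq < 0 ? irq : 0;
}
module_init(trace_poll_init);
MODULE_LICENSE("GPL");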
> >>>>>>
> >>>>>> You might also be interested in looking at :
> >>>>>> - the website (http://ltt.polymtl.ca)
> >>>>>> - LTTng Xen port design document (this one is different from the one
> >>>>>>   posted by Jimi)
> >>>>>>   (http://ltt.polymtl.ca/svn/ltt/branches/poly/doc/developer/lttng-xen.txt)
> >>>>>> - OLS 2006 paper "The LTTng tracer : A Low Impact Performance and
> >>>>>>   Behavior Monitor for GNU/Linux"
> >>>>>>   (http://ltt.polymtl.ca/papers/desnoyers-ols2006.pdf)
> >>>>>>
> >>>>>>
> >>>>>> Questions and constructive comments are welcome.
> >>>>>>
> >>>>>> Mathieu
> >>>>>>
> >>>>>>
> >>>>>> OpenPGP public key: http://krystal.dyndns.org:8080/key/compudj.gpg
> >>>>>> Key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68
> >>>>>>
> >>>>>> _______________________________________________
> >>>>>> Xen-devel mailing list
> >>>>>> Xen-devel@xxxxxxxxxxxxxxxxxxx
> >>>>>> http://lists.xensource.com/xen-devel
> >>>>>>
> >>>>
> >>
> > 
> 
> 

-- 
Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

