
Re: [Xen-devel] Oprofile Report



Hi,

Yes, when I execute the 'opreport --symbols --debug-info' command after profiling a .c file (say, operf ./test),

I expect to see 'test.c:line_no' in the linenr info field, just as you get 'exact_counts.c:13' in the following:

samples  %        linenr info                 image name               symbol name
10       66.6667  (no location information)   no-vmlinux               /no-vmlinux
2        13.3333  exact_counts.c:13           exact_counts             main
1         6.6667  exact_counts.c:10           exact_counts             f_65535x
1         6.6667  exact_counts.c:9            exact_counts             f_997x
1         6.6667  (no location information)   ld-2.17.so               _dl_fini

I tried to install debuginfo using the command:
# debuginfo-install kernel

But when I execute 'opreport --symbols --debug-info', I still get only (no location information) in all of my samples.

What may be the reason?
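
For reference, my understanding is that opreport can only show source
lines for a binary that itself carries debug sections; a quick check
(assuming the binary is still ./test) would be something like:

  readelf -S ./test | grep debug

which should list .debug_* sections if the binary was built with '-g'.
Please correct me if that check is not sufficient.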

On Thu, Mar 23, 2017 at 8:08 PM, Michael Petlan <mpetlan@xxxxxxxxxx> wrote:
On Thu, 23 Mar 2017, dhara buch wrote:
Hello,
I am profiling with the command:

operf ./test --events=BR_INST_RETIRED
where test.c is a simple C language file.

Then I try to collect the information with the command
 
opreport --symbols --debug-info

From the OProfile documentation, I understand that the above command lists the profiling result per symbol, i.e. I can get a result showing samples, linenr info, image name and symbol name.

As per my command, I expect my file name (test.c) to appear in the linenr info and image name columns, but it shows (no location information) in linenr info. The file name does not get listed under image
name either.

What is lacking?

Hi, I think you need to rebuild the test with the '-g' switch.
If the test binary does not have debuginfo, opreport cannot
resolve that.
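
A minimal sequence, assuming your source file is test.c, would be
something like:

  gcc -g -o test test.c
  operf ./test
  opreport --symbols --debug-info

The '-g' switch makes the compiler emit the debug info (source file
and line numbers) that opreport needs for the linenr info column.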

Me trying with debuginfo:

samples  %        linenr info                 image name               symbol name
10       66.6667  (no location information)   no-vmlinux               /no-vmlinux
2        13.3333  exact_counts.c:13           exact_counts             main
1         6.6667  exact_counts.c:10           exact_counts             f_65535x
1         6.6667  exact_counts.c:9            exact_counts             f_997x
1         6.6667  (no location information)   ld-2.17.so               _dl_fini

And without:

samples  %        linenr info                 image name               symbol name
10       55.5556  (no location information)   no-vmlinux               /no-vmlinux
3        16.6667  (no location information)   exact_counts             f_65535x
2        11.1111  (no location information)   exact_counts             main
1         5.5556  (no location information)   ld-2.17.so               _dl_add_to_slotinfo
1         5.5556  (no location information)   ld-2.17.so               _dl_next_tls_modid
1         5.5556  (no location information)   ld-2.17.so               _dl_relocate_object

Is this the problem you are asking about?


I also tried the operf --vmlinux option, where the vmlinux file is in /usr/lib/debug/lib/4*/vmlinux, but the above commands still do not list the test file entries.

The "--vmlinux" option is there to enable this for samples
obtained in kernel space.

By default (assuming you have no kernel debuginfo available),
operf marks all the samples taken in the kernel (e.g. your program
called a syscall and the sample was taken while the syscall was
executing, or you profile system-wide) as "no-vmlinux".

This is sufficient if you care about userspace only and not about
the "time" spent in the kernel.

If you care about kernelspace, you need the '--vmlinux' option
with correct path specified.
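
For example (the debuginfo path below is just the typical Fedora/RHEL
location; adjust it to wherever your kernel debuginfo actually lives):

  operf --vmlinux=/usr/lib/debug/lib/modules/$(uname -r)/vmlinux ./test

With that, the kernel-space samples should be resolved against vmlinux
instead of being lumped under "no-vmlinux".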


What is lacking?

Thank you,

Dhara buch



Has this helped?

Cheers,
Michael


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

