
Re: [Xen-devel] swiotlb=force in Konrad's xen-pcifront-0.8.2 pvops domU kernel with PCI passthrough



On Fri, Nov 19, 2010 at 2:36 PM, Dan Magenheimer
<dan.magenheimer@xxxxxxxxxx> wrote:
>> From: Keir Fraser [mailto:keir@xxxxxxx]
>> Sent: Friday, November 19, 2010 10:58 AM
>> To: Dante Cinco; Jeremy Fitzhardinge
>> Cc: Xen-devel; mathieu.desnoyers@xxxxxxxxxx; Chris Mason; Andrew
>> Thomas; Konrad Rzeszutek Wilk
>> Subject: Re: [Xen-devel] swiotlb=force in Konrad's xen-pcifront-0.8.2
>> pvops domU kernel with PCI passthrough
>>
>> On 19/11/2010 17:52, "Dante Cinco" <dantecinco@xxxxxxxxx> wrote:
>>
>> > How do I check if rdtsc emulation is on? Does 'xm debug-keys s' do
>> it?
>> >
>> > (XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch
>> > input to DOM0)
>> > (XEN) TSC marked as reliable, warp = 0 (count=2)
>> > (XEN) dom1: mode=0,ofs=0xca6f68770,khz=2666017,inc=1
>> > (XEN) No domains have emulated TSC
>>
>> TSC emulation is not enabled.
>
> I *think* "No domains have emulated TSC" will be printed
> if there are no domains other than dom0 currently running,
> so this may not be definitive.

The pvops domU was running when I captured that Xen console output. I
also checked /var/log/xen/xend.log and saw 'tsc_mode 0', even though
tsc_mode is not explicitly set in the domain's cfg file.
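For reference, here is the sequence I use to re-check this from dom0.
This is only a sketch: it assumes the xm toolstack and that the
hypervisor log is readable via 'xm dmesg', and it is guarded so it
degrades gracefully on a box without xm.

```shell
#!/bin/sh
# Sketch: ask Xen to dump its time/TSC state (the 's' debug key used
# earlier in this thread), then look for the TSC lines in the
# hypervisor log. Guarded in case xm is not installed.
if command -v xm >/dev/null 2>&1; then
    xm debug-keys s               # trigger the TSC/time state dump
    xm dmesg | grep -i tsc        # e.g. "No domains have emulated TSC"
    status=checked
else
    status="xm not installed"
fi
echo "$status"
```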

>
> Also note that tsc_mode=0 means "do the right thing for
> this hardware platform" but, if the domain is saved/restored
> or live-migrated, TSC will start being emulated. See
> tscmode.txt in xen/Documentation for more detail.

We have not done any save/restore on domU.

>
> Lastly, I haven't tested this code in quite some time,
> the code for PV and HVM is different, and I've never
> tested it with xl, only with xm.  So bitrot is possible,
> though hopefully unlikely.
>
> Thanks,
> Dan
>

pvclock_clocksource_read is no longer the top symbol (it was 28% of
the CPU samples) in the latest xenoprofile report. I had mistakenly
attributed the huge I/O performance gain (from 119k IOPS to 209k IOPS)
to killing ntpd, but that was not the case: the gain came from turning
off lock stat. I had enabled lock stat in the kernel to try to track
down the lock-associated symbols in the profile report, and had
forgotten that I turned it off (echo 0 >/proc/sys/kernel/lock_stat)
just before killing ntpd. With lock stat disabled in the kernel, I get
209k IOPS without killing ntpd.
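For anyone else chasing the same overhead, the runtime toggle is just
the lock_stat sysctl shown above; it only exists when the kernel is
built with CONFIG_LOCK_STAT=y. A guarded sketch:

```shell
#!/bin/sh
# Disable lock statistics collection at run time. The /proc path is
# the standard lock_stat interface; writing 1 re-enables collection.
# Guarded, since the file is absent without CONFIG_LOCK_STAT=y.
LOCK_STAT=/proc/sys/kernel/lock_stat
if [ -w "$LOCK_STAT" ]; then
    echo 0 > "$LOCK_STAT"         # stop collecting
    status=disabled
else
    status="lock_stat not available"
fi
echo "$status"
```

The contention data already gathered stays readable in /proc/lock_stat
after collection is turned off, so the toggle itself does not discard
what you have collected so far.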

The latest xenoprofile report doesn't even have
pvclock_clocksource_read in the top 10. All the I/O processing in domU
(domID=1) is done in our kernel driver modules, so domain1-modules is
expected to be at the top of the list.
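For completeness, this is roughly the shape of the xenoprof session
that produces a report like the one below. Treat it as a sketch only:
the --xen and --active-domains flags come from the xenoprof-patched
oprofile, the image paths are the ones named in the report, and the
whole thing is guarded for machines without opcontrol.

```shell
#!/bin/sh
# Sketch of an active-domain xenoprof session (xenoprof-patched
# oprofile assumed; flags may differ on other builds).
if command -v opcontrol >/dev/null 2>&1; then
    opcontrol --reset
    opcontrol --start \
        --event=CPU_CLK_UNHALTED:100000 \
        --xen=/boot/xen-syms-4.1-unstable \
        --vmlinux=/boot/vmlinux-2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug \
        --active-domains=1
    # ... run the I/O workload in domU here ...
    opcontrol --stop
    opreport -l                   # symbol-level report, as below
    status=profiled
else
    status="opcontrol not installed"
fi
echo "$status"
```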

CPU: Intel Architectural Perfmon, speed 2665.97 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a
unit mask of 0x00 (No unit mask) count 100000
samples  %        image name               app name                 symbol name
542839   17.2427  domain1-modules          domain1-modules          /domain1-modules
378968   12.0375  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  xen_spin_unlock
250342    7.9518  vmlinux-2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug  vmlinux-2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug  mutex_spin_on_owner
206585    6.5620  xen-syms-4.1-unstable    domain1-xen              syscall_enter
123021    3.9076  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  lock_release
103703    3.2940  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  __lock_acquire
100973    3.2073  domain1-xen-unknown      domain1-xen-unknown      /domain1-xen-unknown
94449     3.0001  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  hypercall_page
67145     2.1328  xen-syms-4.1-unstable    domain1-xen              restore_all_guest
64460     2.0475  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  xen_spin_trylock
62415     1.9825  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  xen_restore_fl_direct
51822     1.6461  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  native_read_tsc
45901     1.4580  vmlinux-2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug  vmlinux-2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug  pvclock_clocksource_read
44398     1.4103  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  debug_locks_off
42191     1.3402  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  find_next_bit
41913     1.3313  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  do_raw_spin_lock
41424     1.3158  vmlinux-2.6.36-rc7-pvops-kpcif-08-2-domu-5.14.dcinco-debug  domain1-kernel  lock_acquire
39275     1.2475  xen-syms-4.1-unstable    domain1-xen              do_xen_version

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

