
3rd Request Re: 2nd request -- Re: [Xen-users] QueryPerformanceCounter and TSC return bad results with Windows HVM




James Miller wrote:
Hi everyone,

We have a server with 2 sockets, 4 cores each, running CentOS release 5 (Final) and Xen 3.0.3. On that server we have 7 Windows 2003 DomUs; we've locked each one to a specific core and each has a single-CPU HAL. We have an application which uses QueryPerformanceCounter. On very rare occasions a call to QPC returns a bad result, causing our processes to disconnect from the server.
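
For reference, the usual pattern for measuring an interval with QPC looks roughly like this (just a sketch of the general idea, not our actual code); the point is that a single bogus counter read turns the computed interval into garbage:

/* Sketch of the usual QueryPerformanceCounter timing pattern
 * (illustrative only, not our actual application code). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;
    double elapsed_ms;

    QueryPerformanceFrequency(&freq);   /* ticks per second */
    QueryPerformanceCounter(&start);

    Sleep(100);                         /* stand-in for the work being timed */

    QueryPerformanceCounter(&end);

    /* One bad counter read makes this delta nonsense. */
    elapsed_ms = (double)(end.QuadPart - start.QuadPart)
                 * 1000.0 / (double)freq.QuadPart;
    printf("elapsed = %.3f ms\n", elapsed_ms);
    return 0;
}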
I've read about some bugs with QPC:
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1291
I've also read that not using ACPI somewhat mitigates this issue.

Could someone tell me if this issue is being, or has been, resolved in later versions of Xen?

Below is a sample configuration of a Windows HVM DomU as well as the output of xm vcpu-list. Please let me know if there is any additional information I can provide to help resolve this issue; I greatly appreciate all your help.


Sample config
import os, re
arch = os.uname()[4]
if re.search('64', arch):
   arch_libdir = 'lib64'
else:
   arch_libdir = 'lib'
name = 'evalvm01.vm'
kernel = '/usr/lib/xen/boot/hvmloader'
builder='hvm'
boot='c'
memory = 2048
vif = [ 'type=ioemu, vifname=vifevalvm01, mac=aa:00:ef:39:a0:08, bridge=xenbr1' ]
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
vnc=1
sdl=0
vcpus = 1
cpus ="1"
vnclisten='0.0.0.0'
vncpasswd='xxxxxxxxxx'
disk = [ 'phy:/dev/vg01/evalvm01,ioemu:hda,w']
acpi=1
vncunused=1
vncdisplay=1

xm vcpu-list
Name                                ID  VCPUs  CPU  State    Time(s)  CPU Affinity
Domain-0                           0     0     0   r--  2794806.2 0
Domain-0                           0     1     -   --p       2.0 any cpu
Domain-0                           0     2     -   --p       1.1 any cpu
Domain-0                           0     3     -   --p       1.4 any cpu
Domain-0                           0     4     -   --p       0.9 any cpu
Domain-0                           0     5     -   --p       0.9 any cpu
Domain-0                           0     6     -   --p       0.8 any cpu
Domain-0                           0     7     -   --p       0.9 any cpu
evalvm01.vm                      118     0     1   -b-   27903.4 1
evalvm02.vm                      112     0     2   -b-   10507.8 2
evalvm03.vm                      113     0     3   -b-    2445.3 3
evalvm04.vm                      116     0     4   -b-   79872.1 4
evalvm05.vm                      119     0     5   -b-   38717.1 5
evalvm06.vm                      115     0     6   -b-   98750.3 6
evalvm07.vm                      117     0     7   r--   65222.2 7


Jim



Hi everyone,

I hate to be a pest, but this is becoming a _REAL_ show stopper for us going forward with Xen. I would appreciate any thoughts; if I'm asking in the wrong way or not providing enough info, _PLEASE_ let me know.

About the only other thing I can offer is that the problem happens very infrequently, but when it does the process will report, for example, that it went 2,451,046 ms without talking to the server. That's roughly 40 minutes, which means QPC gave an erroneous response, almost as if a 64-bit counter (I think it's a 64-bit counter) was rolling over.
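
To sanity-check the roll-over idea, here is a quick back-of-the-envelope calculation (just a sketch; it assumes QPC is backed by the 3,579,545 Hz ACPI PM timer, which matches the FREQUENCY value in the test output below):

/* Back-of-the-envelope check of the "counter rolled over" suspicion.
 * Assumption: QPC ticks at 3,579,545 Hz (the ACPI PM timer rate,
 * matching the FREQUENCY our test program reports). */
#include <stdio.h>

int main(void)
{
    const double freq = 3579545.0;           /* ticks per second */
    double wrap32_s = 4294967296.0 / freq;   /* 2^32 ticks */
    double jump33_s = 8589934592.0 / freq;   /* 2^33 ticks */

    printf("2^32 ticks = %.1f s (~%.1f min)\n", wrap32_s, wrap32_s / 60.0);
    printf("2^33 ticks = %.1f s (~%.1f min)\n", jump33_s, jump33_s / 60.0);
    printf("gap we saw: 2451046 ms = ~%.1f min\n", 2451046.0 / 60000.0);
    return 0;
}

That works out to roughly 20 minutes for a 32-bit wrap and roughly 40 minutes for a 2^33-tick jump, so the ~40 minute gap the process reported is suspiciously close to a single high bit flipping in the counter.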

Anyway, thanks in advance for any suggestions or assistance.

Jim





Soooo one of our programmers wrote a little program to test the issue, and here are the results:

C:\RS>qpc_test.exe
FREQUENCY = 3579545 cycles per second (369E99)
COUNTER = 26206348760 cycles (61A0525D8)
THRESHOLD exceeded: diff= 8589970453, threshold = 500, threshold * freq / 1000 = 1789772
THRESHOLD exceeded: last_counter = 749471789887, this_Counter = 758061760340
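
For context, the test loop is roughly along these lines (a sketch of the general shape; the actual qpc_test.exe may differ in details):

/* Rough sketch of a QPC watchdog loop in the spirit of qpc_test.exe
 * (a reconstruction of the idea, not the actual program). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, last, now;
    const LONGLONG threshold_ms = 500;
    LONGLONG max_diff, diff;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&last);

    printf("FREQUENCY = %lld cycles per second (%llX)\n",
           (long long)freq.QuadPart, (unsigned long long)freq.QuadPart);
    printf("COUNTER = %lld cycles (%llX)\n",
           (long long)last.QuadPart, (unsigned long long)last.QuadPart);

    /* Allow at most 'threshold_ms' worth of ticks between successive reads. */
    max_diff = threshold_ms * freq.QuadPart / 1000;

    for (;;) {                 /* runs until killed */
        Sleep(1);
        QueryPerformanceCounter(&now);
        diff = now.QuadPart - last.QuadPart;

        if (diff < 0 || diff > max_diff) {
            printf("THRESHOLD exceeded: diff= %lld, threshold = %lld, "
                   "threshold * freq / 1000 = %lld\n",
                   (long long)diff, (long long)threshold_ms, (long long)max_diff);
            printf("THRESHOLD exceeded: last_counter = %lld, this_Counter = %lld\n",
                   (long long)last.QuadPart, (long long)now.QuadPart);
        }
        last = now;
    }
}

For what it's worth, the diff printed above, 8,589,970,453 ticks, is within about 36,000 ticks (roughly 10 ms at this frequency) of exactly 2^33 = 8,589,934,592, which fits the roll-over suspicion from my earlier message.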



CAN ANYONE PLEASE COMMENT ON THIS ISSUE?


 

