Re: [Xen-devel] Re: Fix for get_s_time()
Yes. It's an AMD Warthog. I'll try some other platforms when I get a chance.

-Dave

Keir Fraser wrote:

So does the below indicate that 92529 of your HPET accesses took between
0 and 63 cycles? That seems rather short for an access off-chip and to the
southbridge.

 -- Keir

On 28/4/08 19:40, "Dave Winchell" <dwinchell@xxxxxxxxxxxxxxx> wrote:

read_64_main_counter() on stime:

(VMM) cycles per bucket 64
(VMM)
(VMM)  0:      0  78795 148271  21173  15902  47704  89195 121962
(VMM)  8:  83632  51848  17531  12987  10976   8816   9120   8608
(VMM) 16:   5685   3972   3783   2518   1052    710    608    469
(VMM) 24:    277    159     83     46     34     23     19     16
(VMM) 32:      9      6      7      3      4      8      5      6
(VMM) 40:      9      7     14     13     17     25     22     29
(VMM) 48:     25     19     35     27     30     26     21     23
(VMM) 56:     17     24     12     27     22     18     10     22
(VMM) 64:     19     16     16     16     28     18     23     16
(VMM) 72:     22     22     12     14     21     19     17     19
(VMM) 80:     18     14     10     14     11     12      8     18
(VMM) 88:     16     10     17     14     10      8     11     11
(VMM) 96:     10     10      0    175

read_64_main_counter() going to the hardware:

(VMM) cycles per bucket 64
(VMM)
(VMM)  0:  92529 148423  27850  12532  28042  43336  60516  59011
(VMM)  8:  36895  14043   8162   6857   7794   7401   5099   2986
(VMM) 16:   1636   1066    796    592    459    409    314    248
(VMM) 24:    206    195    138     97     71     45     35     34
(VMM) 32:     33     36     40     40     25     26     25     26
(VMM) 40:     37     23     18     30     27     30     34     44
(VMM) 48:     38     19     25     23     23     25     21     27
(VMM) 56:     28     24     43     80    220    324    568    599
(VMM) 64:    610    565    611    699    690    846    874    788
(VMM) 72:    703    542    556    613    605    603    559    500
(VMM) 80:    485    493    512    578    561    594    575    614
(VMM) 88:    759    851    895    856    807    770    719    958
(VMM) 96:   1127   1263      0  18219

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
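[Editor's note] For context on how such numbers might be gathered, below is a minimal sketch of bucketed cycle accounting around an HPET main-counter read. It is an illustration under stated assumptions, not the actual Xen instrumentation: hpet_read_main_counter(), read_tsc() and the hist[] array are hypothetical names; only the 64-cycle bucket width is taken from the data above.

    /*
     * Illustrative sketch only (not the actual Xen code): time each HPET
     * main-counter read with the TSC and count the result in 64-cycle
     * buckets, matching the "cycles per bucket 64" layout above.
     */
    #include <stdint.h>

    #define BUCKET_CYCLES 64   /* bucket width, as in the histograms above */
    #define NR_BUCKETS    100  /* regular buckets; one extra overflow slot */

    static uint64_t hist[NR_BUCKETS + 1];

    /* Assumed stand-in for the MMIO read of the HPET main counter. */
    extern uint64_t hpet_read_main_counter(void);

    static inline uint64_t read_tsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    uint64_t timed_hpet_read(void)
    {
        uint64_t t0 = read_tsc();
        uint64_t value = hpet_read_main_counter();
        uint64_t delta = read_tsc() - t0;

        if (delta / BUCKET_CYCLES < NR_BUCKETS)
            hist[delta / BUCKET_CYCLES]++;   /* e.g. 0-63 cycles -> hist[0] */
        else
            hist[NR_BUCKETS]++;              /* out-of-range overflow slot  */

        return value;
    }

Read this way, each printed row covers eight consecutive buckets, so the first column of the "0:" row would be the 0-63 cycle bucket that Keir's question refers to.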