Re: [Xen-devel] Windows SMP
On Mon, Dec 29, 2008 at 8:14 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
> On 29/12/2008 02:59, "Venefax" <venefax@xxxxxxxxx> wrote:
>
>> I had to disable both, and PAE. Only APIC=0 would not make any
>> difference. I will do some further testing with Citrix XenServer 5,
>> using the same virtual machine and another copy with their PV drivers.
>> I bet that there is no difference in performance. It seems to be a Xen
>> architectural issue. Any ideas?
>
> The problem is almost certainly APIC related. APIC=0 actually has no
> effect for a multi-processor HVM guest, since APICs are architecturally
> absolutely required in x86 multi-processor systems.
>
> The problem is most likely lots of emulated APIC TPR writes slowing
> things down. Possible fixes:
> 1. Run a Windows guest with the 'lazy TPR' optimisation -- w2k3sp2+,
> w2k8, vista. Or run 64-bit Windows, which writes the TPR in a different
> way that most Intel/AMD CPUs can virtualise efficiently.
> 2. Run a new enough Intel processor which has automatic TPR handling
> even for 32-bit Windows guests.

Does "TPR handling" have a CPU feature flag I can grep for in
/proc/cpuinfo, or is there a list of CPUs that support the feature? (One
way to check is sketched after this message.) I run Xen on two systems, a
dual Xeon 5420 and a Core 2 Quad. Neither seems to suffer from bad
performance with SMP Windows HVMs, but most of the Windows HVMs I run are
not heavily loaded, so I would like to know whether my systems are likely
to hit this problem, as it would probably become noticeable once load
increases.

thx

Andy

> 3. Run the Citrix drivers, which patch Windows to avoid TPR writes.
>
> -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
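A minimal sketch of the /proc/cpuinfo check Andy asks about, assuming GNU
grep and a Linux kernel new enough (roughly 2.6.24+) to expose the
synthetic "flexpriority" flag. Intel's "automatic TPR handling" for
32-bit guests is the VMX FlexPriority feature (a hardware TPR shadow);
Linux derives the flag from the VMX capability MSRs rather than from a
raw CPUID bit:

  # Is VMX advertised at all? Note: a PV dom0 under Xen may see a
  # filtered CPUID in which vmx is hidden; if so, run this from a
  # kernel booted on bare metal.
  grep -qm1 -w vmx /proc/cpuinfo && echo "VMX present"

  # FlexPriority, i.e. hardware TPR handling for 32-bit guests:
  grep -qm1 -w flexpriority /proc/cpuinfo \
      && echo "flexpriority: TPR writes handled in hardware" \
      || echo "no flexpriority: 32-bit guest TPR writes will be emulated"

On AMD there is, as far as I know, no equivalent flag to look for here:
32-bit guests still take emulated TPR writes, while 64-bit guests use
CR8, which SVM hardware can virtualise efficiently, matching fix 1 above.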