
Re: [Xen-users] Poor Windows 2003 + GPLPV performance compared to VMWare



On Fri, 2012-09-14 at 14:11 +0100, Adam Goryachev wrote:
> On 14/09/12 18:04, Ian Campbell wrote:
> > On Thu, 2012-09-13 at 13:25 +0100, Adam Goryachev wrote:
> >> Then, the user ran the above process, and consistently got results of
> >> approx 2500 transactions per second
> > 
> > Are you certain the GPLPV drivers have taken hold and you aren't using
> > emulated devices?
> 
> Within Windows, Device Manager shows the Disk Drives as "XEN PV DISK
> SCSI Disk Device", this is the newest one which it detected and
> installed after I changed the config from hda to xvda.
> 
> > I don't know how you can tell from within Windows but from dom0 you can
> > look in the output of "xenstore-ls -fp" for the "state" node associated
> > with each device frontend -- they should be in state 4 (connected).
> 
> root@pm08:~# xenstore-ls -fp|grep state|grep vbd
> /local/domain/0/backend/vbd/8/51712/state = "4"   (n0,r8)
> /local/domain/8/device/vbd/51712/state = "4"   (n8,r0)
> 
> I assume dom id 8 is the VM, and dom0 is the first line above.
> 
> > [...]
> >> memory        = 4096
> >> shadow_memory    = 12
> > 
> > This seems low to me. The default is 1M per CPU, plus 8K per M of RAM,
> > which is 4M + 8*4096K = 4M+32M = 36M. Do you have any reason to second
> > guess this? (Usually this option is used to increase shadow RAM where
> > the workload demands it).
> 
> OK, I must admit I have no idea, I copied this value from an example a
> long time ago, and I've just copied it into each new vm as I go.
> 
> From here:
> http://wiki.prgmr.com/mediawiki/index.php/Chapter_12:_HVM:_Beyond_Paravirtualization
> It says:
> The shadow_memory directive specifies the amount of memory to use for
> shadow page tables. (Shadow page tables, of course, are the
> aforementioned copies of the tables that map process-virtual memory to
> physical memory.) Xen advises allocating at least 2KB per MB of domain
> memory, and "a few" MB per virtual CPU. Note that this memory is in
> addition to the domU's allocation specified in the memory line.
> 
> I'm not really sure where to find definitive documentation on all the
> config file options within xen....

http://xenbits.xen.org/docs/4.2-testing/ has man pages for the config
files. These are also installed on the host as part of the build.

If you are using xend then the xm ones are a bit lacking. However, xl is
mostly compatible with xm, so the xl manpages largely apply. There's also
a bunch of stuff on http://wiki.xen.org/wiki.
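
For example, assuming the manpages were installed along with your Xen
build (names as in 4.2; xmdomain.cfg may or may not be present depending
on how it was packaged):

        man xl.cfg          # guest configuration file options
        man xl              # the xl toolstack commands
        man xmdomain.cfg    # the older xm config format, if installed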

> (XEN) HVM: ASIDs enabled.
> (XEN) SVM: Supported advanced features:
> (XEN)  - Nested Page Tables (NPT)
> (XEN)  - Last Branch Record (LBR) Virtualisation
> (XEN)  - Next-RIP Saved on #VMEXIT
> (XEN)  - Pause-Intercept Filter
> (XEN) HVM: SVM enabled
> (XEN) HVM: Hardware Assisted Paging (HAP) detected
> (XEN) HVM: HAP page sizes: 4kB, 2MB, 1GB
> 
> I'm guessing that is a yes to HAP and NPT but no for EPT....
> 
> This is an AMD Phenom(tm) II X6 1100T Processor

EPT is the Intel equivalent of NPT so you wouldn't have that one.
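
If you want to re-check later what the hypervisor detected, those lines
can be pulled back out of the Xen boot log from dom0 (xm dmesg under
xend, xl dmesg under xl), e.g.:

        xm dmesg | grep -iE 'hap|npt|svm'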

> >> device_model    = '/usr/lib/xen-default/bin/qemu-dm'
> >> localtime    = 1
> >> name        = "vm1"
> >> cpus        = "2,3,4,5"    # Which physical CPU's to allow
> > 
> > Have you pinned dom0 to use pCPU 1 and/or pCPUs > 6?
> 
> No, how should I pin dom0 to cpu0 ?

dom0_vcpus_pin as described in
http://xenbits.xen.org/docs/4.2-testing/misc/xen-command-line.html
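
For example (just a sketch -- on a Debian-style GRUB2 setup the
hypervisor options usually go via GRUB_CMDLINE_XEN in /etc/default/grub,
followed by update-grub; adjust for your bootloader):

        # /etc/default/grub
        GRUB_CMDLINE_XEN="dom0_vcpus_pin"

With that, each dom0 vcpu is pinned to the pcpu with the same number.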

> Also, xm vcpu-list shows this:
> xm vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0     0   r--   34093.4 any cpu
> Domain-0                             0     1     5   -b-    1239.3 any cpu
> Domain-0                             0     2     1   -b-    1134.4 any cpu
> Domain-0                             0     3     3   -b-    1049.9 any cpu
> Domain-0                             0     4     0   -b-    1340.5 any cpu
> Domain-0                             0     5     2   -b-    1123.2 any cpu
> vm1                                  9     0     2   -b-      20.5 2-5
> vm1                                  9     1     4   -b-      15.2 2-5
> vm1                                  9     2     3   -b-      14.9 2-5
> vm1                                  9     3     4   -b-      15.1 2-5
> 
> I've set the vm to use cpus 2,3,4,5 but how do I force it so:
> vcpu 0 = 2
> vcpu 1 = 3
> vcpu 2 = 4
> vcpu 3 = 5
> 
> Without running:
> xm vcpu-pin vm1 0 2
> xm vcpu-pin vm1 1 3
> xm vcpu-pin vm1 2 4
> xm vcpu-pin vm1 3 5

You have:
        cpus = "2,3,4,5"
which means "let all the guest's VCPUs run on any of pCPUs 2-5".

It sounds like what you are asking for above is:
        cpus = [2,3,4,5]
which forces guest vcpu0=>pcpu2, vcpu1=>pcpu3, vcpu2=>pcpu4 and vcpu3=>pcpu5.

Subtle, I agree.
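
Spelled out side by side (you would use one form or the other in the
config file, not both):

        cpus = "2,3,4,5"     # any guest vcpu may run on any of pcpus 2-5
        cpus = [2,3,4,5]     # vcpu0=>pcpu2, vcpu1=>pcpu3, vcpu2=>pcpu4, vcpu3=>pcpu5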

Do you have a specific reason for pinning? I'd be tempted to just let
the scheduler do its thing unless/until you determine that it is causing
problems.

> > How many dom0 vcpus have you configured?
> 
> I assume by default it takes all of them...

Correct. dom0_max_vcpus will adjust this for you.
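
For example, extending the command line sketch above (the value 2 is
only an illustration, pick whatever suits your dom0 workload):

        GRUB_CMDLINE_XEN="dom0_max_vcpus=2 dom0_vcpus_pin"

That gives dom0 two vcpus pinned to pcpus 0 and 1, leaving 2-5 free for
the guest.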

> > Does your system have any NUMA properties?
> 
> I don't really understand this question.... is there a simple method to
> check? It is an AMD Phenom(tm) II X6 1100T Processor on a reasonable
> desktop motherboard, nothing fancy....
> 
> > And as James suggests it would also be useful to benchmark iSCSI running
> > in dom0 and perhaps even running on the same system without Xen (just
> > Linux) using the same kernel. I'm not sure if VMware offers something
> > similar which could be used for comparison.
> 
> Well, that is where things start to get complicated rather quickly...
> There are a lot of layers here, but I'd prefer to look at the issues
> closer to xen first, since vmware was working from an identically
> configured san/etc, so nothing at all has changed there. Ultimately, the
> san is using 3 x SSD in RAID5. I have done various testing in the past
> from plain linux (with older kernel 2.6.32 from debian stable) and
> achieved reasonable figures (I don't recall exactly).

I was worried about the Linux side rather than the SAN itself, but it
sounds like you've got that covered.
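
On the NUMA question further up: a quick way to check from dom0 (a
sketch; both xm and xl report this field) is:

        xm info | grep nr_nodes

A single-socket Phenom II should report nr_nodes = 1, i.e. no NUMA
layout to worry about.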

> Thank you for your responses, if there is any further information I can
> provide, or additional suggestions you are able to make, I'd be really
> appreciative.
> 
> Regards,
> Adam
> 



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

