
Re: [Xen-users] Poor Windows 2003 + GPLPV performance compared to VMWare



Hello Adam,

On Sep 14, 2012, at 10:57 AM, Adam Goryachev
<mailinglists@xxxxxxxxxxxxxxxxxxxxxx> wrote:

<snip>
>>>> And as James suggests it would also be useful to benchmark iSCSI running
>>>> in dom0 and perhaps even running on the same system without Xen (just
>>>> Linux) using the same kernel. I'm not sure if VMware offers something
>>>> similar which could be used for comparison.
>>>
>>> Well, that is where things start to get complicated rather quickly...
>>> There are a lot of layers here, but I'd prefer to look at the issues
>>> closer to xen first, since vmware was working from an identically
>>> configured san/etc, so nothing at all has changed there. Ultimately, the
>>> san is using 3 x SSD in RAID5. I have done various testing in the past
>>> from plain linux (with older kernel 2.6.32 from debian stable) and
>>> achieved reasonable figures (I don't recall exactly).
>>
>> I was worried about the Linux side rather than the SAN itself, but it
>> sounds like you've got that covered.
>
> At this stage, the limiting performance should be the single gig
> ethernet for the physical machine to connect to the network. (The san
> side has 4 x gig ethernet).
</snip>

I'm finding myself fascinated with this thread you've started; there are a
lot of details going on, and I'm really hopeful you figure this out.
However, in case you don't, I may have a suggestion:

Is it an option for you to connect this DomU to your iSCSI LUN
directly?  That would bypass the initiator in Dom0, and the uncertainty
of your disk assignment to the DomU.  With Windows versions prior to
NT6, you of course need to download Microsoft's iSCSI Software
Initiator and install it, but with that in place you could create a
dedicated LUN on your SAN and use that device as the backing store for
your application's data.  If you like the idea, try installing the
initiator and connecting to a small RAM disk on your SAN (or something
else where you know storage IOPS won't be the limiting factor),
benchmark the disk with IOMeter or CrystalDiskMark, and compare that to
the performance of the Xen-mapped disk to see whether it yields the
throughput you need.
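
If it helps, here's a rough sketch of that test connection using the
initiator's iscsicli command-line interface.  The portal address and
target IQN below are made-up examples; substitute whatever your SAN
actually presents:

```
rem Inside the Windows DomU, after installing the iSCSI Software Initiator.
rem 192.168.1.10 and the IQN are hypothetical; use your SAN's values.

rem Register the SAN's portal and list the targets it exposes
iscsicli QAddTargetPortal 192.168.1.10
iscsicli ListTargets

rem Log in to the benchmark LUN's target; the disk then appears in
rem Disk Management like any local disk, ready for IOMeter/CrystalDiskMark
iscsicli QLoginTarget iqn.2012-09.com.example:ramdisk-test
```

You can do the same thing through the initiator's GUI control panel if
you prefer; the CLI just makes it easy to repeat between benchmark runs.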

If you want to go deeper down the rabbit hole (so to speak), you could
also try booting the DomU directly from your SAN, since Xen bundles iPXE
as its HVM network boot ROM.  With your DomU already existing as raw
data on an iSCSI LUN, you could basically install the initiator and the
sanbootconf package, configure a DHCP reservation (or the ROM itself, if
NVRAM storage works with the Xen NIC), and boot right up.
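
For the DHCP-reservation route, the key is handing iPXE a root-path
pointing at the LUN.  A sketch for ISC dhcpd follows; the MAC, addresses,
and IQN are all hypothetical placeholders for your own values:

```
# Hypothetical reservation for the DomU's vif (Xen vif MACs start 00:16:3e)
host win2003-domu {
  hardware ethernet 00:16:3e:aa:bb:cc;
  fixed-address 192.168.1.50;
  # iPXE root-path format is iscsi:<server>:<protocol>:<port>:<LUN>:<target-iqn>;
  # empty fields take iPXE's defaults
  option root-path "iscsi:192.168.1.10::::iqn.2012-09.com.example:win2003";
}
```

Windows is picky about the SAN connection staying up across the
boot hand-off, which is exactly what sanbootconf is there to smooth over.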

And finally, the deepest down the rabbit hole that I'd suggest going:
if your host supports PCI passthrough *and* you have a "spare" NIC
available, you could assign that NIC directly to your DomU and use my
first suggestion.  The DomU will be "tied" to that host at that point,
though, so if you're looking to leverage migration or failover, it's
not a good idea :P
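
For reference, the passthrough setup with the xl toolstack looks roughly
like this; the PCI address is a hypothetical example, so find your spare
NIC's real address with lspci first:

```
# In Dom0: detach the NIC from its Dom0 driver and mark it assignable
# (03:00.0 is a made-up address; check lspci for yours)
xl pci-assignable-add 03:00.0
```

and then in the DomU's config file:

```
pci = [ '03:00.0' ]
```

After that, the DomU sees the NIC as its own PCI device and the Windows
driver for that card takes over, with the iSCSI traffic never touching
Dom0's network stack at all.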

Best of luck to you, and, while I hope you don't need my suggestions,
I'd be glad to be of any assistance if you have some questions!

Cheers,
Andrew Bobulsky

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

