
Re: [Xen-users] Planning - question about storage

Fajar A. Nugraha [11.06.2009 15:55]:
> 2009/6/11 Werner Flamme <werner.flamme@xxxxxx>:
>> Hi everyone,
>> I am very new to Xen but will migrate 3 SAP ERP servers plus 1 SolMan
>> into a Xen environment next year, probably running Novell's SLES (as we
>> use SLES right now).
> Wow, you're the first person I know that will be running SAP on top of
> Xen. Let us know how it goes.

*gulp* :-)

For us, it is "power consumption": 3 boxes plus 1 fat RAID draw more
power from the wall outlets than two boxes without RAID ;-)

> AFAIK since SAP license cost is much more expensive compared to
> hardware cost, hardware savings (thru virtualization, for example)
> will be insignificant compared to license cost, so we focus saving
> efforts on license.

Our company decided to give SAP access to everyone who has a contract
for at least one year. So we have about 700 users on average, since most
scientists stay for 3 to 7 years...

>> The machines will be app servers only, no DB - the DB is "outsourced" to
>> a common Oracle RAC this year (maintained by a colleague).
>> So I will get 2 boxes and now fiddle around with possible configuration
>> of storage. Due to SAP notes the SAP binaries must be stored on the RAC
>> and made accessible from there, for example via NFS or SAN. In our
>> company, we are used to SANing ;-), so my boxes will have some FC boards
>> to connect to the RAC's 2 FC switches. Am I right that I can access the
>> block device from SAN (will be one big thing with OCFS2) inside dom0 and
>> pass them via phy: to the domUs?
> And use OCFS2 on domU? Should be possible, although it'd be simpler
> to import it via NFS on the domU.

Yes, OCFS2 on domU. I hate NFS ;-) I used it once and it was terribly
slow. Maybe it has improved in the meantime; at least it remains a fallback.

>> Next: the OS of the domU. I plan to have some local disk space (about
>> 300 GB per box). I think on something like a drbd drive (RAID-1) that is
>> separated into chunks for each domU. The dom0 forwards the respective
>> part to the domU. The domUs use this disk space for their operating
>> system. Somewhere (like in /bigthing/) the big shared storage is mounted
>> and the necessary symlinks for SAP are set.
> I still have my doubts about drbd, but another user has reported that
> it gives acceptable performance, so you may want to search the list
> archive for his particular setup.

Ah, if it is only about performance... ;-) The data on this drbd disk is
used for the OS only, so it will mainly be read at startup - and if that
adds ten seconds or so, who cares? :-) I prefer stability over
performance, and if drbd is reliable it fulfills its purpose.
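A minimal DRBD resource for one domU's OS disk could look like the
sketch below (hostnames, backing partitions and addresses are
assumptions for illustration, not a recommendation):

```
# /etc/drbd.d/domu-os.res -- hypothetical DRBD resource for a domU OS disk
resource domu-os {
  protocol C;                 # synchronous replication: stability over speed
  device    /dev/drbd0;
  disk      /dev/sda5;        # local partition backing this domU's OS
  meta-disk internal;
  on xenhost1 {
    address 192.168.1.1:7788;
  }
  on xenhost2 {
    address 192.168.1.2:7788;
  }
}
```

The dom0 would then hand /dev/drbd0 to the domU via phy:, the same way
as any other block device.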

> If it were me I'd simply put domU's storage on SAN if I need migration
> (you already have SAN anyway, right?) or on mirrored local disk (for
> maximum I/O throughput)

Yes, I have SAN - a dedicated SAN for SAP now, not shared with the RAC.
For the DB migration I have to copy all data files from my storage to
the RAC's storage. Except for the OS, everything is on the SAN already;
the drbd part is for the OS only, to boot from.

Hm, I'm still thinking too hardware-bound ;-) Of course I could put some
files containing the OS on the SAN - it is just that SAP warns against
doing so. In SAP note 962334, you find:
SAP strongly advises to place Xen virtual machines in raw devices or
partitions. Do not use a file as virtual device for a virtual machine.
If you run an SAP database instance on such a file based virtual machine
the I/O performance will drop dramatically compared to a raw setup.
Nevertheless, you may also use logical volumes instead of raw devices.

We also advise you, performance-wise, to use external storage for the
virtual machines when running an SAP database instance. For SAP
application servers, local disks are sufficient.

Since I will be dealing with SAP application servers only... :-)

Thanks for your input, Fajar!


Xen-users mailing list


