
Re: [Xen-users] Need some assistance in layout

Thanks for the input. Please see replies inline below.
Donny B.

On 10/26/2011 3:34 PM, Florian Heigl wrote:
If you can invest a week or one and a half in the re-design to make it
all fancy:

- Upgrade from CentOS 5 for better overall performance
Upgrade from CentOS 5.5 to what? The only direct upgrade path is to CentOS 5.7. I would actually like to move to something newer like CentOS 6 or Fedora 15 (or 16).
- Use OpenNebula in a small "private cloud" setup; this covers the GUI
bit very, very well.
I had never even heard of OpenNebula until now. Looking at it, it appears that it will suit our needs very well, and the fact that it can use Xen, KVM, and VMware as backends is a plus.
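For anyone curious what driving a Xen guest from OpenNebula looks like, a minimal VM template is sketched below. This is illustrative only: the VM name, kernel paths, LVM device, and bridge name are all hypothetical placeholders, and the exact attributes vary by OpenNebula version, so check the template reference for your release.

```
NAME   = domu-test          # hypothetical VM name
CPU    = 1
MEMORY = 512

# Boot a paravirtualized Xen guest from a dom0-side kernel (paths are placeholders)
OS     = [ KERNEL = "/boot/vmlinuz-xen", INITRD = "/boot/initrd-xen" ]

# Attach an existing LVM-backed disk (device path is a placeholder)
DISK   = [ SOURCE = "/dev/vg_domu/domu-test", TARGET = "xvda", READONLY = "no" ]

NIC    = [ BRIDGE = "xenbr0" ]
```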
- Big "partitions" might waste a raid arrays performance (but that
depends on too many factors. I like many small bits, others do not,
and in general cache sorts it out better than my preferences would ;)
All of our domU disks are in an LVM setup. The only reason for the big 6 TB or 8 TB array is the way we have to keep backups. BackupPC, which we use for our backups, deduplicates and compresses files, so that 6 TB currently holds approximately:
  • 856 full backups totalling 107,743.45 GB (prior to pooling and compression),
  • 655 incremental backups totalling 1,311.85 GB (prior to pooling and compression).
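As a back-of-the-envelope sketch, those numbers imply a substantial pooling/compression ratio (assuming the roughly 6 TB pool volume mentioned above):

```python
# Back-of-the-envelope: effective BackupPC pooling/compression ratio.
full_gb = 107743.45   # full backups, pre-pooling size (from the stats above)
incr_gb = 1311.85     # incremental backups, pre-pooling size
pool_tb = 6.0         # approximate size of the pool volume

raw_tb = (full_gb + incr_gb) / 1024   # total pre-pool data in TB
ratio = raw_tb / pool_tb              # how much the pool shrinks it

print(f"raw data: {raw_tb:.1f} TB, pool: {pool_tb} TB, ratio: {ratio:.1f}:1")
```

So roughly 106 TB of logical backup data fits in the 6 TB pool, which is why the array is sized the way it is.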

- Raid6 definitely kills performance. Consider a Raid10.
Understandable. The Raid6 was chosen for space and resilience; originally we only had Xen1 and needed the extra space. If we can get a few add-on modules for the SAN, then we can migrate to a Raid10 for performance.
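To make the space-versus-performance trade-off concrete, here is a quick sketch comparing usable capacity and the classic small-write I/O penalty for the two layouts (assuming eight 1 TB disks purely as a hypothetical example; real throughput also depends heavily on the controller and its cache):

```python
def raid_summary(level, disks, disk_tb=1.0):
    """Usable capacity (TB) and per-small-write I/O cost for RAID6 vs RAID10."""
    if level == "raid6":
        usable = (disks - 2) * disk_tb   # two disks' worth of parity
        write_ios = 6                    # read data + 2 parities, write data + 2 parities
    elif level == "raid10":
        usable = disks / 2 * disk_tb     # everything is mirrored
        write_ios = 2                    # write both mirror halves
    else:
        raise ValueError(level)
    return usable, write_ios

for level in ("raid6", "raid10"):
    usable, cost = raid_summary(level, disks=8)
    print(f"{level}: {usable:.0f} TB usable, {cost} I/Os per small random write")
```

The 6-vs-2 write cost is the usual rule-of-thumb reason RAID6 hurts random-write workloads, at the price of RAID10 giving up more raw capacity.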
- Live migration can come along easily with i.e. OpenNebula,
- the same goes for load balancing via live migration between hosts,
for which there are nice scripts these days.
It does appear as so. I am looking into this further.
- I have spent about half a year deploying an HA setup on Xen:
old-style Heartbeat + DRBD in domUs. There were some caveats,
   mostly that you cannot see a link-down on the host from the point of view of the domU.
   Yes, you can bond in dom0.
   No, that does NOT solve the problem: if you have a double failure, or if
the bridge in dom0 has an
   issue, then your domU will NOT notice it.
   No, using ARP monitoring with bonding in the domU was not a solution either,
since the ARP ping did NOT
   work in CentOS 5.4.
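For reference, the bonding ARP monitor that was attempted is configured roughly like this on CentOS 5 (a sketch only; `bond0` and the target IP are placeholders, and as noted above it did not behave in 5.4):

```
# /etc/modprobe.conf (CentOS 5) -- ARP-monitored bonding, illustrative only
alias bond0 bonding
options bond0 mode=active-backup arp_interval=1000 arp_ip_target=192.168.1.1
```

`arp_interval` is in milliseconds; `arp_ip_target` is the peer the driver ARP-pings to decide whether a slave link is alive, in place of MII link monitoring.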

   It doesn't get better with DRBD, which is more latency-sensitive than
usual when run inside a domU.
   Oh, and did I mention on-the-wire corruption going unnoticed until you
finally tracked it down to a NIC whose hardware
   offloading didn't work?

   Heartbeat v1 showed a general lack of gracefulness when dealing
with such issues.

I have looked at DRBD before and liked what I saw, but did not care to basically lose half my disk space. Using the SAN as a shared storage medium should help with that, though. My reasoning for bonding the interfaces was not solely failover but rather speed, although I cannot say for a fact that the speed has shown an improvement over a single link.

- My personal recommendation would be to get working Xen-Ready NICs
from Solarflare
if you want to do anything that goes into clustering inside domUs, or
need high LAN performance.

Alternatively, I wonder if Remus is not the one-and-best-ever
solution. But so far I don't have it working :)

As for the redo-everything with a GUI factor, you could also give
Oracle VM3 a test ride.
Hmm, yeah, over the last 6 or 7 years in Xen GUI land, I'd say the thing
sticking out has been OpenNebula (oh, OK, and Eucalyptus when it was
still vaporware with screenshots), and dom0-wise I haven't seen
anything that is remotely as good as OVM. Sadly that is being locked
down into an appliance now :)

I have also looked into Enomaly and their offerings, but it seems geared more toward KVM now. I do think I will investigate OpenNebula more. Thanks for all the input.

Xen-users mailing list


