[Xen-users] IO intensive guests - how to design for best performance
Hi,

I am trying to engineer an HA Xen solution for a specific application workload. I will use:

*) 2 multicore systems (maybe 32 or 48 cores) with lots of RAM (256 GB)
*) dom0 OS will be RHEL 5.5
*) I would prefer to use Xen as bundled by the distribution, but if required features are only found in later releases then this can be considered
*) the servers are connected to the SAN
*) I have about 10 TB of shared storage, and will run around 20-25 RHEL paravirt guests
*) the HA I will manage with heartbeat, and probably use clvmd for the shared storage

My concern is to get the most out of the system in terms of I/O. The guests will have a range of vCPUs assigned, say from 1 to 8, and their workload varies over time. When they are doing work it is both I/O and CPU intensive. Only in unlikely use cases are all or most guests very busy at the same time.

The current solution to this workload is a cluster of nodes with either GFS (using shared SAN storage) or local disks; both approaches have some merits. However, I am not tied to that architecture at all. There seem to be a lot of (too many!) options here:

*) create a large LUN / LVM volume on my SAN, pass it to the guests, and use GFS/GFS2
*) same thing, except use OCFS2
*) split my SAN storage into many LUNs / LVM volumes, and export one chunk per VM via phys: or tap:... interfaces
*) more complex PCI-passthrough configurations giving guests direct (?) access to storage
*) create a big ext3/xfs/... file system on dom0 and export it to the guests using NFS (a kind of loopback?)
*) others ...

I ask really for any advice and experiences of list members faced with similar problems and what they found best.

Thanks
KM
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
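To make the "one chunk per VM" option concrete, a minimal sketch of what it could look like follows. This assumes clvmd is already running so LV metadata changes are seen by both dom0s; the volume group name "sanvg", the guest name "guest01", and the sizes are hypothetical, not from any real setup:

```
# On dom0: carve one logical volume per guest out of the shared,
# clvmd-managed volume group on the SAN LUN ("sanvg" is a placeholder name)
lvcreate -n guest01-disk -L 400G sanvg

# In the guest's config file (e.g. /etc/xen/guest01) - export the LV
# to the paravirt guest as its xvda block device via the phys: backend:
disk = [ 'phys:/dev/sanvg/guest01-disk,xvda,w' ]
```

With this layout each guest gets a plain block device through blkback, so there is no cluster-filesystem locking overhead in the I/O path; the trade-off is that clvmd only coordinates LVM metadata, so each LV must be attached writable to one guest at a time.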