Re: [Xen-users] disk access best practice
On Tue, Jan 6, 2009 at 12:58 PM, Brian Krusic <brian@xxxxxxxxxx> wrote:
> Hi all,
>
> While I've read some faqs, forums and Professional Xen Virtualization, I
> would like your take on this.
>

You should read Running Xen ;)

> I've 2 paravirtualized domUs running, each using a tap:aio disk image
> located on a local 500GB raid.
>
> While performance seems fine both interactively and using benchmarks, is
> there a practical limit to the image size before I should start breaking
> it up?
>

If there is a hard limit on the file system for maximum file size, that
could be an issue. Otherwise, it really depends on usage, backup
considerations, etc.

> I plan to build another dom0 box with a 24TB raid on it, hosting 2
> paravirtualized domUs, one of which will need 20TB.
>
> Should I break up the domU into 2 images, 1 for the OS and the other for
> storage needs?
>

This can be beneficial in general, both for backups and for performance,
just as a non-virtualized system benefits from writing to different
physical disks (a config sketch follows below).

> So my questions are;
>
> 1 - What's a practical single disk image size?

Others may have experience with very large disks....

> 2 - Should I pre-allocate all image space during domU creation or have it
> dynamically grow?
>

It depends on the performance you need. Dynamically grown images (aka
sparse files) come with some performance degradation, but they save a lot
of space, so it is a trade-off. In practice, if you break things up, you
can mix the two: pre-allocate the performance-critical disks and let the
less critical, less used ones grow dynamically (an example of creating
each type follows below).

Hope that helps some.

Cheers,
Todd

--
Todd Deshane
http://todddeshane.net
http://runningxen.com
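
As an illustration of the split-disk layout described above, a domU could be
given a small OS disk and a separate large data disk, each backed by its own
tap:aio image. The file paths, device names and sizes here are invented for
the example, not taken from the thread:

    # hypothetical xm/xend-style domU config, e.g. /etc/xen/bigdata.cfg
    name       = "bigdata"
    memory     = 2048
    vcpus      = 2
    bootloader = "/usr/bin/pygrub"
    # OS disk and bulk-storage disk kept as separate tap:aio images
    disk       = [ 'tap:aio:/srv/xen/bigdata-os.img,xvda,w',
                   'tap:aio:/srv/xen/bigdata-data.img,xvdb,w' ]
    vif        = [ 'bridge=xenbr0' ]

Keeping the OS on xvda and the bulk storage on xvdb also lets you back up or
resize the data image without touching the guest's root file system.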
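
For the pre-allocated vs. sparse trade-off, a common approach with plain file
images is to create them with dd: write zeros for a fully allocated image, or
seek past the end with count=0 for a sparse one. Again, the paths and sizes
are only illustrative, and the backing file system must of course support
files this large:

    # fully pre-allocated 10 GiB OS image (writes every block up front)
    dd if=/dev/zero of=/srv/xen/bigdata-os.img bs=1M count=10240

    # sparse 20 TiB data image (blocks are allocated only as they are written)
    dd if=/dev/zero of=/srv/xen/bigdata-data.img bs=1M count=0 seek=20971520

    # compare the apparent size with the actual disk usage of the sparse file
    du -h --apparent-size /srv/xen/bigdata-data.img
    du -h /srv/xen/bigdata-data.img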