
Re: [Xen-users] What is the "fastest" way to access disk storage from DomU?



On Thursday, 24 January, Mark Williamson wrote the following:

> > I've read some threads about storage speed here but didn't really get
> > a clue about what's the "best" or fastest way to set it up.

> > At the moment all the virtual disks are configured as

> >   disk = [ "phy:/dev/vg_xen1/<LV>,xvda1,w", ... ]

> > The volumes residing on the SAN storage are configured via EVMS, and
> > I get about 200 MB/s write speed from Dom0 (measured with dd if=/dev/zero
> > of=/mnt/file) and "only" around 150 MB/s when doing the same from the DomU.

> When you do the tests of writing speed from dom0, are you writing to the 
> domU's filesystem LV?  Otherwise you're not testing like-for-like since 
> you're using a different part of the storage.  I'm not sure if this makes a 
> difference in your case, but different parts of a physical disk can have 
> surprisingly big differences in bandwidth (outer edge of the disk moves 
> faster, so better bandwidth).

Sure, I used the same EVMS volume.
Anything else would have been pointless :)
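
For reference, the comparison was basically a plain dd run like this on
both sides (block size and count are just placeholders here; adding
conv=fdatasync would keep the page cache from inflating the numbers):

  # in Dom0, writing a file on the shared EVMS volume
  dd if=/dev/zero of=/mnt/file bs=1M count=4096 conv=fdatasync
  # the same inside the DomU, on the filesystem mounted from xvda1
  dd if=/dev/zero of=/mnt/file bs=1M count=4096 conv=fdatasync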

> I'm not too familiar with EVMS, so maybe there's some bottleneck there that
> I'm missing.  Does EVMS do cluster volume management?
>  
> I guess it does, as you're using it on a SAN ;-)

Paired with heartbeat (necessary for EVMS) there is a Cluster Volume
Manager plugin/module (the exact buzzword may differ), so it's
possible to have the volumes shared among the hosts.

> > Is this expected speed loss or is there any other way to give the DomU
> > access to the devices?

> You can only give domUs direct access to whole PCI devices at the moment, so 
> unless you gave each a separate SAN adaptor, you can't really give them any 
> more direct access.
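
(Just to make that concrete: whole-device passthrough would mean hiding the
HBA from Dom0 via pciback and handing it to a single guest, roughly like the
sketch below, with a made-up BDF. Not really an option here, as all the
DomUs share the one adaptor.)

  # Dom0 kernel command line (pciback built in): hide the HBA from Dom0
  #   pciback.hide=(0000:03:00.0)     <- the BDF is just an example
  # DomU config file: hand the whole PCI device to that guest
  pci = [ '0000:03:00.0' ]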

> There's some work on SCSI passthrough being done by various people, so maybe 
> at some point that'll let you pass individual LUNs through from the SAN.

Hmm.
That would most probably not be very helpful in my case, as I'm not
using the /dev/sd* devices directly (the SAN presents each LUN about
4 ways: a dual-port HBA connected to a SAN with two SPs), but the
/dev/mapper/<foo> device handled by multipathd.

OK, I could push all the corresponding SCSI devices through to the DomU and
run multipath inside (if that's possible), but as far as I know it's not a
simple task to figure out which sd* belongs to which LUN.
(OK, multipath manages to do it, so there has to be a way...)
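
(Something along these lines should do it; the exact scsi_id invocation
varies with the udev version:)

  # each multipath map is listed together with its member sd* paths
  multipath -ll
  # or read the WWID of a single path device directly
  scsi_id -g -u -s /block/sda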

> For really high performance SAN access from domUs, the solution will 
> eventually (one fine day, in the future) be to use SAN adaptors with 
> virtualization support that can natively give shared direct access to 
> multiple domUs.  We're not quite there yet though!

So let's hope :)

Thanks
Ciao
Max
-- 
        Follow the white penguin.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

