
RE: [Xen-devel] high I/O cost for drbd and xen?


  • To: "Tom Hibbert" <tom@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxxx>
  • From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
  • Date: Tue, 18 Jan 2005 23:34:29 -0000
  • Delivery-date: Wed, 19 Jan 2005 02:33:03 +0000
  • List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
  • Thread-index: AcT9Bw+7eU9GjA6pSZ+2/caN3x5yOwArwJNg
  • Thread-topic: [Xen-devel] high I/O cost for drbd and xen?

> Just ran some benchmarks on my new failover cluster and found some
> alarming results. I've previously observed that xen typically introduces
> almost zero I/O overhead. I ran some tests with bonnie -x 10 on my drbd
> mirrors and compared the averages between dom0 and dom1 to see the
> virtualisation cost. I was surprised (and alarmed) to see that there was
> a fairly high cost for read operations (high being >10%). I was very
> concerned to see the 80% cost for get_block on the failover node.
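
For reference, I'm taking the "cost" figures above to be the straight
percentage slowdown of the dom1 average relative to dom0, i.e. roughly
the following (the throughput numbers are placeholders, not the actual
results):

    # percentage slowdown of the dom1 figure relative to dom0
    def cost_pct(dom0_avg, dom1_avg):
        return (dom0_avg - dom1_avg) / dom0_avg * 100.0

    # e.g. averaged bonnie block-read throughput in K/sec over the 10 runs
    print("read cost: %.1f%%" % cost_pct(45000.0, 40000.0))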

Not sure what's going on here as I've never used drbd.

I'd be inclined to run drbd in domain 0 and export the device to the
guest rather than run drbd in the guest.
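
Something along these lines in the domU config file should do it - an
untested sketch with placeholder names and paths, assuming the mirror
shows up as /dev/drbd0 in dom0:

    # minimal domU config fragment: hand the dom0 drbd device to the guest
    kernel = '/boot/vmlinuz-2.6-xenU'   # placeholder kernel path
    memory = 128
    name   = 'guest1'
    disk   = [ 'phy:drbd0,sda1,w' ]     # /dev/drbd0 in dom0 -> /dev/sda1 in guest
    root   = '/dev/sda1 ro'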

Ian
  
> Could this be a scheduler issue? drbd likes to be high priority - it's
> a high-availability service after all. Perhaps the hypervisor is
> blocking it or not allowing it its normal share of cpu time? Perhaps
> it's giving CPU time from drbd on dom0 to bonnie on dom1, causing the
> reported slowdown?
>
> Tom

