Re: [Xen-devel] [PATCH 04/10] xen/blkfront: separate ring information into a new struct



On 03/17/2015 10:52 PM, Felipe Franciosi wrote:
> Hi Bob,
> 
> I've put the hardware back together and am sorting out the software for 
> testing. Things are not moving as fast as I wanted due to other commitments. 
> I'll keep this thread updated as I progress. Malcolm is OOO and I'm trying to 
> get his patches to work on a newer Xen.
> 

Thank you!

> The evaluation will compare:
> 1) bare metal i/o (for baseline)
> 2) tapdisk3 (currently using grant copy, which is what scales best in my 
> experience)
> 3) blkback w/ persistent grants
> 4) blkback w/o persistent grants (I will just comment out the handshake bits 
> in blkback/blkfront)
> 5) blkback w/o persistent grants + Malcolm's grant map patches
> 
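On (4): for reference, the frontend side of that hack could look
roughly like the sketch below (against drivers/block/xen-blkfront.c;
exact function names vary across kernel versions, so treat this as an
untested illustration rather than a patch). Since the feature is
negotiated, forcing it off in the frontend should be enough to disable
it for the whole device:

	/* In talk_to_blkback(): stop advertising the feature, so the
	 * backend will never enable it for this device. */
	err = xenbus_printf(xbt, dev->nodename,
			    "feature-persistent", "%u", 0 /* was 1 */);

	/* In blkfront_connect(): ignore whatever the backend wrote and
	 * force the negotiated result off. */
	info->feature_persistent = 0;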

I think you also need to add the patches from Christoph Egger titled
"[PATCH v5 0/2] gnttab: Improve scaleability" to this comparison:
http://lists.xen.org/archives/html/xen-devel/2015-02/msg01188.html


> To my knowledge, blkback (w/ or w/o persistent grants) is always faster than 
> user space alternatives (e.g. tapdisk, qemu-qdisk) as latency is much lower. 
> However, tapdisk with grant copy has been shown to produce (much) better 
> aggregate throughput figures as it avoids any issues with grant (un)mapping.
> 
> I'm hoping to show that (5) above scales better than (3) and (4) in a 
> representative scenario. If it does, I will recommend that we get rid of 
> persistent grants in favour of a better and more scalable grant (un)mapping 
> implementation.
> 
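For anyone following along: the trade-off described above comes down to
which grant-table operation the backend issues per request. Here is a
simplified sketch using the Linux grant-table helpers -- not actual
blkback or tapdisk code; gref, vaddr, otherend_id, local_gfn and len
are placeholders:

	/* (a) map/unmap: I/O runs directly on the foreign page, but each
	 * unmap can force a TLB flush, which is what limits aggregate
	 * throughput as rings and vCPUs are added. */
	struct gnttab_map_grant_ref map;
	gnttab_set_map_op(&map, vaddr, GNTMAP_host_map, gref, otherend_id);
	/* HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, ...), do the
	 * I/O, then GNTTABOP_unmap_grant_ref plus the flush. */

	/* (b) grant copy: the hypervisor copies the data instead; nothing
	 * is mapped, so there is nothing to unmap or flush afterwards. */
	struct gnttab_copy copy = {
		.source.u.ref = gref,
		.source.domid = otherend_id,
		.dest.u.gmfn  = local_gfn,
		.dest.domid   = DOMID_SELF,
		.len          = len,
		.flags        = GNTCOPY_source_gref,
	};
	gnttab_batch_copy(&copy, 1);

That is why (2) can scale well despite paying for an extra data copy.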

Right, but even if (5) performs best, we have to make sure that a new
Linux kernel running on an older hypervisor won't be hurt once
persistent grants are removed.
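
Functionally that case should already be safe: persistent grants are
negotiated per device through xenstore, so if the feature goes away the
frontend simply falls back to plain grants -- roughly what
blkfront_connect() does today (sketch below; names may differ by kernel
version). The thing to measure is whether performance on older
hypervisors regresses without it:

	/* Absence of the key means the backend does not support
	 * persistent grants for this device. */
	unsigned int persistent;
	int err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
				"feature-persistent", "%u",
				&persistent, NULL);
	info->feature_persistent = err ? 0 : persistent;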

-- 
Regards,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
