
Re: [Xen-devel] [PATCH 04/10] xen/blkfront: separate ring information to an new struct



Hi Bob,

> -----Original Message-----
> From: Bob Liu [mailto:bob.liu@xxxxxxxxxx]
> Sent: 17 March 2015 07:00
> To: Felipe Franciosi
> Cc: Konrad Rzeszutek Wilk; Roger Pau Monne; David Vrabel; xen-
> devel@xxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; axboe@xxxxxx;
> hch@xxxxxxxxxxxxx; avanzini.arianna@xxxxxxxxx; chegger@xxxxxxxxx
> Subject: Re: [PATCH 04/10] xen/blkfront: separate ring information to an new
> struct
> 
> Hi Felipe,
> 
> On 03/06/2015 06:30 PM, Felipe Franciosi wrote:
> >> -----Original Message-----
> >> From: Bob Liu [mailto:bob.liu@xxxxxxxxxx]
> >> Sent: 05 March 2015 00:47
> >> To: Konrad Rzeszutek Wilk
> >> Cc: Roger Pau Monne; Felipe Franciosi; David Vrabel;
> >> xen-devel@xxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; axboe@xxxxxx;
> >> hch@xxxxxxxxxxxxx; avanzini.arianna@xxxxxxxxx; chegger@xxxxxxxxx
> >> Subject: Re: [PATCH 04/10] xen/blkfront: separate ring information to
> >> an new struct
> >>
> >>
> >> ...snip...
> >>>
> >>> Meaning you weren't able to do the same test?
> >>>
> >>
> >> I can if there are more details about how to set up the 5- and
> >> 10-guest environment and which test pattern was used.
> >> I just think it might save time if somebody still has a similar
> >> environment at hand.
> >> Roger and Felipe, if you still have the environment, could you please
> >> do a quick comparison of feature-persistent performance with patch
> >> [PATCH v5 0/2]
> >> gnttab: Improve scaleability?
> >
> > I've been meaning to do that. I don't have the environment up, but it isn't 
> > too
> hard to put it back together. A bit swamped at the moment, but will try (very
> hard) to do it next week.
> >
> 
> Have you gotten any testing results?

I've put the hardware back together and am sorting out the software for 
testing. Things are not moving as fast as I'd like due to other commitments. 
I'll keep this thread updated as I progress. Malcolm is OOO and I'm trying to 
get his patches to work on a newer Xen.

The evaluation will compare:
1) bare metal I/O (for a baseline)
2) tapdisk3 (currently using grant copy, which is what scales best in my 
experience)
3) blkback w/ persistent grants
4) blkback w/o persistent grants (I will just comment out the handshake bits in 
blkback/blkfront)
5) blkback w/o persistent grants + Malcolm's grant map patches
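For comparing the five configurations, the figure of merit is aggregate throughput across all guests. As a minimal sketch of the post-processing step, suppose each guest's benchmark run appends one "guest,bandwidth_KBs" line to a results file; summing column 2 then gives the aggregate for one configuration. The file name and CSV layout here are my assumptions for illustration, not the actual harness used in this thread:

```shell
# Hypothetical post-processing: each guest's benchmark run is assumed to
# append one "guest,bandwidth_KBs" line to results.csv (layout is an
# assumption, not the real harness). Summing column 2 yields the aggregate
# throughput for one of the five configurations above.
printf 'vm1,52000\nvm2,48500\n' > results.csv   # sample per-guest figures
awk -F, '{ total += $2 } END { printf "%d KB/s aggregate\n", total }' results.csv
```

Repeating this per configuration makes the (3) vs (4) vs (5) scaling comparison a straightforward diff of the aggregate numbers.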

To my knowledge, blkback (w/ or w/o persistent grants) is always faster than 
user space alternatives (e.g. tapdisk, qemu-qdisk) as latency is much lower. 
However, tapdisk with grant copy has been shown to produce (much) better 
aggregate throughput figures as it avoids any issues with grant (un)mapping.

I'm hoping to show that (5) above scales better than (3) and (4) in a 
representative scenario. If it does, I will recommend that we get rid of 
persistent grants in favour of a better and more scalable grant (un)mapping 
implementation.

Comments welcome.

Cheers,
F.

> 
> --
> Regards,
> -Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

