
Re: [Publicity] Blog Czar update, week of July 22



On Tue, 2013-07-30 at 17:50 +0200, Roger Pau Monné wrote:
> > Hi Roger!
> > 
> > It's truly wonderful to see you deliver this on time, especially
> > considering it's quite a nice post too! :-)
> 
> Hehehe, the most painful part was uploading the article to the blog
> using my crappy mobile phone Internet connection; I think I've exhausted
> all my monthly bandwidth :).
> 
I see... Thanks once more then! :-)

> > "consists on fitting more segments in a requests" --> Not a super strong
> > opinion, but I'd probably spend a few words on what a segment is here. I
> > know it'll become evident from the code below, but still...
> 
> What about replacing segments with data?
> 
That would be fine with me.

> > 
> > "and some counters to the last produced requests and responses" -->
> > perhaps "some counters pointing to the", or just "some counters for the"
> 
> I've added the "pointing" to make it clearer, but I would like to avoid
> going into much further detail here.
> 
Agreed, that's more than enough, sorry for being a bit picky. :-P

> > 
> > Also, if it's not too complicated, I would add a sentence explaining why
> > the indirect descriptors approach is the best of all.
> 
> The next paragraph explains why I think indirect descriptors are
> probably the best approach, I'm not sure it's a good idea to add the
> same to this one.
> 
I saw that, and I'm not asking you to move it or to repeat it here.
The point is that, about indirect descriptors, you say "it allows us to
scale the maximum amount of data a request can contain, and modern
storage devices tend to like big requests", but, AFAICT, the two
previous proposals also achieved something similar (or at least were
meant to), weren't they? So, I was wondering whether it was worth
mentioning quickly why indirect descriptors were the best of the three
at improving request size/throughput.

> > "in order to provide a good balance between memory usage and disk
> > throughput" --> Mmm... sorry, can you help me understand what the issue
> > is here: is the problem that you may occupy 512MB of Dom0/driver
> > domain's memory for each guest (or is it even more)? Also, is that
> > memory freed after the spike of disk activity that got the ring filled,
> > or does it just stay there forever?
> 
> All the memory used by blkfront is allocated during boot; if we had to
> allocate it when processing requests we would have to use GFP_ATOMIC
> (we are holding the queue spinlock while processing the request), which
> can easily fail because it uses the emergency memory pool, which is
> quite small. So yes, the memory is never freed.
> 
Ok, I thought it was something like that but couldn't tell for sure.
Well, I now wonder (it's actually the reason why I asked) whether it
would be nice to say something about this in the post. I don't have a
strong opinion on this, though... I guess I'll let you decide: if you
think it's already clear enough, or that it's not important enough to
spell out in greater detail, I'm fine with that. :-)

> > I don't recall whether or not you have any performance measurements
> > for this thing... Do you? How much data is that? Just asking because I
> > was wondering whether it would make a useful addition but, even if you
> > have them, it's probably fine to leave the post alone for now and use
> > them for a future one. :-)
> 
> I don't have any real performance measurements that show a huge
> throughput increase when using indirect descriptors, I may be able to
> get some but I would prefer to leave that for another blog post if
> necessary.
> 
Perfect, that's even better! :-D

> Thanks for the thorough review.
>
My pleasure.

I was about to suggest publishing this tomorrow or on Thursday, but then
I saw Lars' e-mail... Let's talk about this tomorrow.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Publicity mailing list
Publicity@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/publicity

 

