
Re: [Publicity] Blog Czar update, week of July 22



On 30/07/13 15:59, Dario Faggioli wrote:
> On lun, 2013-07-29 at 19:45 +0200, Roger Pau Monné wrote:
>> On 26/07/13 12:57, Dario Faggioli wrote:
>>> Ok, Tuesday morning (even if it's noon or something like that)
>>> definitely works!
>>
>> Here is the draft:
>>
>> http://blog.xen.org/?p=7534&preview=true
>>
> Hi Roger!
> 
> It's truly wonderful to see you delivering this in time, considering
> it's also quite nice a post too! :-)

Hehehe, the most painful part was uploading the article to the blog
over my crappy mobile phone Internet connection; I think I've exhausted
my entire monthly bandwidth :).

> Some small suggestions:
> 
> "This idea is borrowed from the VirtIO protocol" --> is there a page
> somewhere that you can hyperlink this to?

There's a small section about indirect descriptors in
http://ozlabs.org/~rusty/virtio-spec/virtio-0.9.5.pdf so I'm going to
link to that.

> 
> "consists on fitting more segments in a requests" --> Not a super strong
> opinion, but I'd probably spend a few words on what a segment is here. I
> know it'll become evident from the code below, but still...

What about replacing "segments" with "data"?

> 
> "fitting more segments in a requests, I am going to expand how is
> this" --> make the ',' a '.'.
> 
> "shared memory page between the guest and the driver domain" --> can we
> link to somewhere (Wiki?) talking about driver domains and add something
> like this ("or, usually, between the guest and Dom0"). Don't get me
> wrong, I really agree with your use of driver domain, since we really
> should start pushing on the unique features we provide much more than
> what we're doing right now. However, it's still true that probably, most
> people still have blkback in Dom0, and might even hardly know that
> there's an alternative to that! :-P

Done.

> 
> "and some counters to the last produced requests and responses" -->
> perhaps "some counters pointing to the", or just "some counters for the"

I've added "pointing" to make it clearer, but I would like to avoid
going into much more detail here.

> "size of a request and then punning the result to" --> is that
> 'rounding' ?

Yes, replaced it with rounding.

> 
> "In order to overcome this limitation there were several approaches" -->
> it's not terribly important but, when you cite the two previous
> approaches, you can link to the proper xen-devel thread (if any).

Hehehe, sure.

> 
> Also, if it's not too complicated, I would add a sentence explaining why
> the indirect descriptors approach is the best of all.

The next paragraph explains why I think indirect descriptors are
probably the best approach; I'm not sure it's a good idea to add the
same to this one.

> 
> "array with an array of grant references that in turn are filled with an
> array of the following structures" --> avoid re-using array and, also,
> I'd make it more clear that the "grant references" you're talking about
> here is the same thing than the "shared memory page" you have in the
> following paragraph. How about something like: "..array with an array of
> grant references, each of which is a shared memory page filled by a
> bunch of structures looking as follows:" 
> 
> "inside each shared memory page, and then we can fit 8 shared memory
> pages in a request, which gives us a maximum total of 4096 segments per
> request," --> "inside each shared page. Thus, considering that we can
> fit references to 8 shared memory pages, this gives us a maximum total
> of 4096 segments per request."
> 
> ", then taking into account that each segment" --> ". Tacking into
> account that each segment"

Done :)

> "in order to provide a good balance between memory usage and disk
> throughput" --> Mmm... sorry, can you help me understand what the issue
> is here: is the problem that you may occupy 512MB of Dom0/driver
> domain's memory per each guest (or is that even more)? Also, is that
> memory freed after the spike of disk activity that got the ring filled,
> or it just stay there forever?

All the memory used by blkfront is allocated during boot; if we had to
allocate it while processing requests we would have to use GFP_ATOMIC
(we are holding the queue spinlock while processing the request), which
can easily fail because it draws from the emergency memory pool, which
is quite small. So yes, the memory is never freed.

> 
> The rest is fine, and again, the whole post was really quite great
> actually, thanks for the good work.
> 
> I don't recall whether or not you have any performance measurements
> about this thing... Do you? How much data is that? Just asking because I
> was wondering whether it would make a useful addition but, even if you
> have them, it's probably fine to leave the post alone for now, and use
> them for a future one. :-)

I don't have any real performance measurements that show a huge
throughput increase when using indirect descriptors; I may be able to
get some, but I would prefer to leave that for another blog post if
necessary.

Thanks for the thorough review.

_______________________________________________
Publicity mailing list
Publicity@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/publicity


 

