
Re: [MirageOS-devel] magpie reference

  • To: Thomas Leonard <talex5@xxxxxxxxx>
  • From: Richard Mortier <Richard.Mortier@xxxxxxxxxxxxxxxx>
  • Date: Wed, 15 Oct 2014 23:11:13 +0100
  • Accept-language: en-US, en-GB
  • Cc: mirageos-devel <mirageos-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 15 Oct 2014 22:11:33 +0000
  • List-id: Developer list for MirageOS <mirageos-devel.lists.xenproject.org>
  • Thread-index: Ac/oxO9GpNFiZlAJSICYRvly8oLzIw==
  • Thread-topic: [MirageOS-devel] magpie reference

On 15 Oct 2014, at 12:08, Thomas Leonard <talex5@xxxxxxxxx> wrote:

> On 14 October 2014 16:48, Richard Mortier
> <Richard.Mortier@xxxxxxxxxxxxxxxx> wrote:
>> from call:
>> http://www.cs.nott.ac.uk/~rmm/papers/pdf/osdi04-magpie.pdf
>> http://dl.acm.org/citation.cfm?id=1251272
> Thanks for the reference. Being able to highlight all threads related
> to a particular input event could be useful, indeed.

yes; both to see system structure and also to understand performance in detail. 
e.g., could your Lwt monitoring changes also sample a cycle counter (or whatever 
counter would be appropriate in a domU?) so as to annotate segments with the 
resources they consume?
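to sketch what i mean (names are illustrative, not the Lwt tracing API; `sample` stands in for whatever counter is right in a domU -- e.g. the TSC via a C stub -- and uses Sys.time only so the sketch is self-contained):

```ocaml
(* Annotate trace segments with a resource counter sampled at each
   thread switch; the closed-out segment's cost is the counter delta. *)
type segment = { label : string; start : float; mutable cost : float }

let sample () = Sys.time ()   (* stand-in for a cycle counter read *)

let current : segment option ref = ref None

let switch_to label =
  (match !current with
   | Some seg -> seg.cost <- sample () -. seg.start  (* close old segment *)
   | None -> ());
  let seg = { label; start = sample (); cost = 0.0 } in
  current := Some seg;
  seg

let () =
  let a = switch_to "blkfront.poll" in
  (* ... work attributed to this segment ... *)
  ignore (switch_to "main");
  Printf.printf "%s cost: %f\n" a.label a.cost
```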

> In the current
> system, we can see that e.g. the blkfront.poll thread gets woken up
> for each read response and notifies the main thread waiting for the
> data:
> http://test.roscidus.com/static/html_viewer.html?t_min=8249.586333&t_max=8249.588562
> But we don't link it back to the original request. In this case just
> marking the request on the diagram would make it obvious what's
> happening, but in more complicated cases some visual indication of the
> original source could be useful.

note that one of the key issues we had when parsing events was the brittleness 
of the parser to events being reordered or dropped -- making it remarkably easy 
to end up in a state where nearly all events were either assigned to the same 
request or to no request.
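the failure mode is easy to reproduce with even the simplest state-machine parser (hypothetical event types, not our actual schema):

```ocaml
(* A parser that attributes each event to the "current" request.
   Drop a single boundary event and attribution silently goes wrong. *)
type event = Start of int | End | Work of string

let attribute events =
  let current = ref None in
  List.filter_map
    (fun ev ->
       match ev with
       | Start id -> current := Some id; None
       | End -> current := None; None
       | Work w -> Some (w, !current))   (* attribute to current request *)
    events

let () =
  (* well-formed stream: "write" correctly lands in request 2 *)
  let ok = attribute [Start 1; Work "read"; End; Start 2; Work "write"; End] in
  assert (List.assoc "write" ok = Some 2);
  (* same stream with [Start 2] dropped: "write" is assigned to no request *)
  let dropped = attribute [Start 1; Work "read"; End; Work "write"] in
  assert (List.assoc "write" dropped = None);
  (* [End] of request 1 dropped: "write" is wrongly merged into request 1 *)
  let merged = attribute [Start 1; Work "read"; Work "write"; End] in
  assert (List.assoc "write" merged = Some 1);
  print_endline "misattribution demonstrated"
```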

> You could probably do your clustering analysis on these traces if you
> wanted to. Instrumenting Lwt gets you a lot of information
> automatically that you would otherwise have to write schemas for, but
> you do still need to handle the multiplexing problem.

not sure what you mean by the multiplexing problem? if you simply mean the need 
to trace the impact of input requests, the two solutions followed at the time 
were to assign each request a unique id, or to maintain mapping tables at every 
"module" boundary so things could be stitched together afterwards. we strongly 
believed (and i still strongly believe) that the latter is preferable -- it 
makes the tracing infrastructure much more general and usable with little extra 
overhead, and it obviates the need to generate unique ids for inputs (which 
becomes fiddly in a distributed system).
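roughly the shape of a per-boundary mapping table (names are illustrative, not a Mirage API): each boundary records which local handle belongs to which upstream context, and the stitcher joins across the tables afterwards, so no global id ever has to flow through the modules themselves:

```ocaml
(* One mapping table at one "module" boundary: local block-ring slots
   are mapped to whatever context identifier the layer above uses. *)
module Blk_boundary = struct
  let table : (int, string) Hashtbl.t = Hashtbl.create 16

  (* record the association when a request crosses into this module *)
  let submit ~slot ~upstream = Hashtbl.replace table slot upstream

  (* look it up (and drop it) when the completion crosses back out *)
  let complete ~slot =
    let upstream = Hashtbl.find_opt table slot in
    Hashtbl.remove table slot;
    upstream
end

let () =
  Blk_boundary.submit ~slot:3 ~upstream:"http-req-42";
  match Blk_boundary.complete ~slot:3 with
  | Some up -> Printf.printf "slot 3 belongs to %s\n" up
  | None -> print_endline "unmatched completion"
```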



Attachment: signature.asc
Description: Message signed with OpenPGP using GPGMail

MirageOS-devel mailing list


