
Re: [MirageOS-devel] Mirage tracing status



On 12 November 2014 21:03, Mindy <mindy@xxxxxxxxxxxxxxxxxxx> wrote:
>
> On 11/11/2014 10:53 AM, Thomas Leonard wrote:
>
> On 11 November 2014 16:40, Mindy <mindy@xxxxxxxxxxxxxxxxxxx> wrote:
>
> On 11/11/2014 09:56 AM, Thomas Leonard wrote:
>
> [snip]
>
>
> If anyone has had any success using the tracing themselves (or got
> stuck), let me know!
>
> Is this:
>
> ```
>
> let () =
>   let trace_pages = MProf_xen.make_shared_buffer ~size:1000000 in
>   let buffer = trace_pages |> Io_page.to_cstruct |> Cstruct.to_bigarray in
>   let trace_config = MProf.Trace.Control.make buffer MProf_xen.timestamper in
>   MProf.Trace.Control.start trace_config
> ```
>
> really sufficient to get a shared buffer?  Running a unikernel with the
> above code (+ some calls to MProf.Counter.increase for data),
> `xenstore-list` doesn't show anything in /local/domain/domid_number/data,
> but it looks like that's where the `collect` code expects to find memory
> to dump.
>
> Please let me know what extremely obvious thing I have overlooked :)

Oops, you're right! It should be:

    let trace_pages = MProf_xen.make_shared_buffer ~size:1000000
    let () =
      let buffer = trace_pages |> Io_page.to_cstruct |> Cstruct.to_bigarray in
      let trace_config = MProf.Trace.Control.make buffer MProf_xen.timestamper in
      MProf.Trace.Control.start trace_config

Then, in your start function:

    lwt () = MProf_xen.share_with (module Gnt.Gntshr) (module OS.Xs)
      ~domid:0 trace_pages in
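
Putting the two snippets together, a minimal sketch of a traced unikernel
might look like this (assuming the MProf_xen / MProf.Trace.Control API as
used above; the `Main` functor and console logging are illustrative, not
part of this thread):

```ocaml
(* Sketch only: combines the snippets above into one module.
   Assumes mirage-profile's MProf_xen helpers and the V1_LWT
   signatures from this era of Mirage; anything outside the
   quoted snippets is an assumption. *)

(* Allocate the shared buffer at module-init time, so tracing
   starts before any unikernel code runs. *)
let trace_pages = MProf_xen.make_shared_buffer ~size:1000000

let () =
  let buffer = trace_pages |> Io_page.to_cstruct |> Cstruct.to_bigarray in
  let trace_config = MProf.Trace.Control.make buffer MProf_xen.timestamper in
  MProf.Trace.Control.start trace_config

module Main (C : V1_LWT.CONSOLE) = struct
  let start c =
    (* Grant the trace pages to dom0 and advertise them in xenstore
       so the collector can find them under /local/domain/<domid>/data. *)
    lwt () = MProf_xen.share_with (module Gnt.Gntshr) (module OS.Xs)
      ~domid:0 trace_pages in
    C.log_s c "tracing enabled"
end
```

The key point is that `make_shared_buffer` must run at module
initialisation (not inside `start`), so that `trace_pages` is available
both to the tracing setup and to the later `share_with` call.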

I need to get this automated with "mirage configure"...


-- 
Dr Thomas Leonard        http://0install.net/
GPG: 9242 9807 C985 3C07 44A6  8B9A AE07 8280 59A5 3CC1
GPG: DA98 25AE CAD0 8975 7CDA  BD8E 0713 3F96 CA74 D8BA

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel


 

