
Re: [Xen-devel] LTTng-Xen Buffer shared between the hypervisor and a dom0 process


  • To: Mathieu Desnoyers <compudj@xxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxxxxxxxx>
  • Date: Sat, 10 Mar 2007 17:18:25 +0000
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Sat, 10 Mar 2007 09:17:47 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcdjOBxzWzawA88rEduDAAAX8io7RQ==
  • Thread-topic: [Xen-devel] LTTng-Xen Buffer shared between the hypervisor and a dom0 process

On 10/3/07 03:02, "Mathieu Desnoyers" <compudj@xxxxxxxxxxxxxxxxxx> wrote:

> I see your idea: the other way around would be to have lttctl-xen
> return an error if the buffers are actually mapped. That would,
> however, require some changes to the buffer scheme, since I support
> multiple start/stop tracing cycles while keeping the same buffers and
> the same lttd-xen daemon. I would have to create a new ltt
> sub-hypercall to finalize the buffers, which would make lttd-xen
> write them to disk and exit.
> 
> Controlling tracing from within a guest kernel or within the
> hypervisor would become a tricky business, as you would have to
> explicitly keep track of lttd-xen's presence before freeing the
> buffers.
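
A minimal sketch of the finalize handshake such a sub-hypercall could
drive; the struct, fields and function names below are hypothetical
illustrations, not the actual LTTng-Xen patch:

    /* Sketch only: per-buffer finalize protocol between the hypervisor
     * (producer) and the lttd-xen daemon (consumer). */
    #include <stdatomic.h>
    #include <stdbool.h>

    struct ltt_buf {
        atomic_ulong produced;   /* bytes written by the hypervisor   */
        atomic_ulong consumed;   /* bytes flushed to disk by lttd-xen */
        atomic_bool  finalized;  /* set by the finalize sub-hypercall */
    };

    /* Hypervisor side, from the new finalize sub-hypercall: mark the
     * buffer complete, then wake the consumer (in Xen this would be an
     * event-channel notification rather than a direct call). */
    static void ltt_buf_finalize(struct ltt_buf *buf)
    {
        atomic_store(&buf->finalized, true);
    }

    /* lttd-xen side: the daemon's drain loop may unmap the buffers and
     * exit only once every buffer is both finalized and fully flushed. */
    static bool ltt_buf_done(struct ltt_buf *buf)
    {
        return atomic_load(&buf->finalized) &&
               atomic_load(&buf->consumed) == atomic_load(&buf->produced);
    }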

Are the buffer pages only ever shared with domain0? If so, we don't need
to worry about a Xen-held reference on the pages preventing the domain
from ever being destroyed (since dom0 is never destroyed).

If that's the case, I think you can simply take an extra count_info
reference in Xen and drop it on 'lttctl-xen -r'. You'll also need an
extra page flag so that the IS_XEN_HEAP_FRAME case in
free_domheap_pages() actually frees the page rather than leaving that
job for later.
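
A rough sketch of that scheme; PGC_ltt_buf and the helper names below
are hypothetical, and the logic is simplified from Xen's real page
allocator (real code would manipulate count_info atomically):

    /* Hypothetical flag marking pages that back LTTng-Xen trace buffers. */
    #define PGC_ltt_buf (1UL << 30)

    /* Buffer setup: take the extra reference held by Xen and mark the
     * page, so the allocator knows it must really be freed once the
     * last reference is dropped. */
    static void ltt_buf_pin(struct page_info *pg)
    {
        pg->count_info++;                 /* extra reference (sketch;  */
        pg->count_info |= PGC_ltt_buf;    /* real code must be atomic) */
    }

    /* 'lttctl-xen -r': drop the extra reference. If dom0 has already
     * unmapped the buffers this was the last reference and the page is
     * freed now; otherwise it is freed when dom0 finally unmaps it. */
    static void ltt_buf_unpin(struct page_info *pg)
    {
        put_page(pg);
    }

    /* The IS_XEN_HEAP_FRAME branch of free_domheap_pages() would then
     * really free flagged pages instead of deferring the job: */
    static void free_xen_heap_frame(struct page_info *pg)
    {
        if ( !(pg->count_info & PGC_ltt_buf) )
            return;                       /* old behaviour: freed later */
        free_xenheap_page(page_to_virt(pg));
    }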

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel