
Re: [Xen-devel] LTTng-Xen Buffer shared between the hypervisor and a dom0 process

* Keir Fraser (Keir.Fraser@xxxxxxxxxxxx) wrote:
> On 7/3/07 19:24, "Mathieu Desnoyers" <compudj@xxxxxxxxxxxxxxxxxx> wrote:
> > Then, I would like to release some kind of reference count of this
> > mapping from the hypervisor. I do the following which results in page
> > faults (probably because it tries to free memory still accessed by
> > lttd-xen) :
> What's the shutdown order you're looking for? It sounds like you want Xen to
> tell lttd-xen when it should quit, which seems to me the wrong way round.

Not exactly: when Xen wants to stop writing to its buffers, it disables
writing, does a subbuffer switch, and sets the buffer's "finalize" flag
to 1. It then sends a VIRQ to lttd-xen. lttd-xen reads the last
subbuffers (using the get_cpu/put_cpu and get_facilities/put_facilities
commands of the ltt hypercall to select the offset to read, then reading
the buffers) and is then ready to release them. At that specific point,
I would like all the trace information (xmalloc'd and xenheap shared) to
be freed. But I would also like it to be freed if lttd is killed (i.e.
when its file descriptors and memory maps are released).


>  -- Keir
> >             free_xenheap_pages(rawbuf,
> >                 get_order_from_bytes(chan->alloc_size * num_possible_cpus()));
> > 
> > 
> > And then, when we are sure that no more data can be written in the
> > buffer, lttd-xen is ready to exit. It unmaps the buffer just before exit :
> > 
> >             err_ret = munmap(pair->mmap, pair->subbuf_size * pair->n_subbufs);
> > 
> > Do you know any proper way to achieve what I am looking for?

Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68

Xen-devel mailing list
