
[Xen-tools] RE: [Xen-devel] Hi, something about the xentrace tool



> > Once again, there is no explicit copying of the data between kernel and
> > user space, so nobody should be worried about it.
> 
> There's no copying from the HV to the xentrace process.  But there is
> copying from xentrace to the dom0 kernel for the output file.  Some
> copying is necessary right now, because rather than writing out the
> pages verbatim, xentrace writes out the pcpu before writing out each
> record:

We have the records in huge per-cpu blocks in memory, then write them
out individually?
That's nuts.

We should keep the I/O page-aligned, reserving the first record entry of
each block to be filled in at write-out time with the cpu and the number
of records in the batch (see the sketch below).

I'd say this fix is less important than logging the number of dropped
records, but if we ever want to reduce the capture overhead in the
future, we'll have to fix this.

Ian

> /* Write one trace record, prefixed by the pcpu it came from.  The
>  * per-record cpu prefix is what forces the copy discussed above. */
> void write_rec(unsigned int cpu, struct t_rec *rec, FILE *out)
> {
>     size_t written = 0;
>     written += fwrite(&cpu, sizeof(cpu), 1, out); /* cpu id prefix   */
>     written += fwrite(rec, sizeof(*rec), 1, out); /* the record body */
>     if ( written != 2 )  /* fwrite returns items written: 1 + 1 */
>     {
>         PERROR("Failed to write trace record");
>         exit(EXIT_FAILURE);
>     }
> }
> 
> If we wanted to make it zero-copy all the way from the HV to the disk,
> we could have the xentrace process write one stream per cpu, and do
> whatever's necessary to use DMA.  (Does anyone know if O_DIRECT will
> do direct DMA, or if one would have to use a raw disk?)
> 
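For what it's worth, my understanding (worth double-checking) is that on
Linux O_DIRECT does DMA straight from user buffers, provided the buffer
address, file offset, and transfer length all meet the filesystem's
alignment requirement (typically 512 bytes or the block size), so no raw
disk should be needed.  A minimal sketch, with illustrative file names:

/* Sketch: per-cpu output stream opened with O_DIRECT so writes DMA
 * straight from the (page-aligned) trace buffer. */
#define _GNU_SOURCE         /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int open_cpu_stream(unsigned int cpu)
{
    char path[32];
    int fd;

    snprintf(path, sizeof(path), "trace.cpu%u.bin", cpu);
    fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if ( fd < 0 )
    {
        perror("open trace stream");
        exit(EXIT_FAILURE);
    }
    return fd;
}

/* len and the buffer must both satisfy the O_DIRECT alignment rules
 * (page alignment is always enough), so full trace blocks qualify. */
void write_direct(int fd, const void *block, size_t len)
{
    if ( write(fd, block, len) != (ssize_t)len )
    {
        perror("write trace block");
        exit(EXIT_FAILURE);
    }
}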
> But I think we all seem to agree, this is not a high priority. :-)
> 
>  -George
> 

_______________________________________________
Xen-tools mailing list
Xen-tools@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-tools
