[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-devel] xc_map_foreign_{range,batch}


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
  • Date: Wed, 16 Dec 2009 18:50:16 +0000
  • Delivery-date: Wed, 16 Dec 2009 10:50:37 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Why does xc_map_foreign_range() take an argument named mfn of type
unsigned long, while xc_map_foreign_batch() takes an array of type
xen_pfn_t?  Is there any significance to that difference -- i.e.,
is one truly a guest pfn (which will be translated to an mfn using the
guest p2m table), and the other truly an mfn (which will have no
translation)?  Or are they both mfns?
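For reference, here are the two prototypes as I recall them from
tools/libxc/xenctrl.h of this era -- the xen_pfn_t typedef below is a
stand-in so the fragment is self-contained; check the tree for the
authoritative per-arch definition:

```c
#include <stdint.h>

/* Stand-in for the per-arch definition; see xen/include/public/. */
typedef unsigned long xen_pfn_t;

/* Maps a single physically contiguous range of 'size' bytes starting
   at the given frame number. */
void *xc_map_foreign_range(int xc_handle, uint32_t dom, int size,
                           int prot, unsigned long mfn);

/* Maps an arbitrary array of 'num' frames into one contiguous
   virtual address range. */
void *xc_map_foreign_batch(int xc_handle, uint32_t dom, int prot,
                           xen_pfn_t *arr, int num);
```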

I've looked around the source code in obvious places, but it's not
well documented, and I have no inclination to dig into the Linux ioctl
implementation to see if there's a difference between
IOCTL_PRIVCMD_MMAP and IOCTL_PRIVCMD_MMAPBATCH.

I'm trying to rework the xentrace interface.  The current interface
only passes back a single mfn; this requires all buffers for all cpus
to be allocated in a contiguous chunk in the xen heap.  The size of an
available contiguous allocation is limited, and shrinks as the heap
becomes fragmented. As the number of cpus grows, this means smaller
and smaller buffers per cpu, and thus more lost records during periods
of high trace record generation (when trace records are usually the
most important).

Ideally we'd just allocate the memory we want from the heap without
requiring contiguity, and pass back lists of mfns to be mapped by
xentrace.  Mapping each cpu's buffer in one virtual address range
would be ideal.
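A rough sketch of what the xentrace side might then look like,
assuming xc_map_foreign_batch() accepts mfns directly (exactly the
question above) -- the helper name and the way the mfn list arrives
are hypothetical:

```c
#include <stdint.h>
#include <sys/mman.h>

typedef unsigned long xen_pfn_t;

/* Prototype from xenctrl.h, repeated so the sketch is self-contained. */
void *xc_map_foreign_batch(int xc_handle, uint32_t dom, int prot,
                           xen_pfn_t *arr, int num);

/* Hypothetical helper: map one cpu's trace buffer, given the list of
   (not necessarily machine-contiguous) mfns the hypervisor handed
   back.  All nr_pages frames land in a single contiguous virtual
   address range. */
static void *map_cpu_trace_buffer(int xc_handle, uint32_t dom,
                                  xen_pfn_t *mfns, int nr_pages)
{
    return xc_map_foreign_batch(xc_handle, dom,
                                PROT_READ | PROT_WRITE,
                                mfns, nr_pages);
}
```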

Any suggestions / clarifications?

Thanks,
 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

