
RE: [Xen-devel] comment request: dom0 dma on large memory systems


  • To: "Scott Parish" <srparish@xxxxxxxxxx>
  • From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • Date: Sat, 4 Jun 2005 13:44:39 +0800
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Sat, 04 Jun 2005 05:44:01 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcVov43dhx4k0QTcTvu7htyHGR8rHAAAcM2g
  • Thread-topic: [Xen-devel] comment request: dom0 dma on large memory systems

>-----Original Message-----
>From: Scott Parish [mailto:srparish@xxxxxxxxxx]
>Sent: Saturday, June 04, 2005 12:00 PM
>To: Tian, Kevin
>
>>
>> IIRC, 2 or 3 months ago, Keir said that the default memory allocation
>> for Dom0 is all available memory, and that the CP then has to shrink it
>> via the balloon interface before creating other domains. If this still
>> holds true, I'm not sure whether the above problem still exists, since
>> all available memory, both <4G and >4G, would belong to Dom0 (Xen
>> itself only consumes a small chunk). However, after looking at your
>> patch and then the source, it seems that only the maximum available
>> order, i.e. a contiguous chunk, is currently allocated to Dom0. Did I
>> misunderstand this concept? If it really means only the maximum
>> contiguous chunk, then your patch shoots straight at the real problem
>> on all 64-bit platforms. ;-)
>
>Right, there are several hacks around this problem, a couple i've
>thought of are:
>
>  + enforce dom0 take all memory

Just a rough thought: if dom0 can take all the memory, one alternative
is to rely on dom0 for DMA-related allocation. Dom0 can then satisfy all
of its internal requests itself, without the HV's intervention, and a
component similar to the balloon driver resides within dom0 to handle
requests from other driver domains. A new event channel would be created
to pass zone information about each request between Dom0 and DomN. When
a driver domain wants to allocate DMA-able pages, the request goes to
dom0 instead of the HV; the balloon-like driver then allocates DMA-able
pages from Dom0's memory allocator and updates the driver domain's
mapping table. Yes, this adds some overhead from the extra context
switches, but the HV could pre-request a DMA pool from dom0 to
accelerate the process.

NUMA is somewhat different, and I have no clear picture yet whether this
direction applies to it. ;-P

>  + drop the max order size for MEMZONEs to 18 (in which case
>    alloc_largest should always allocate from the lower memory)
>  + prealloc X amount of low memory (128M for instance) and add
>    it into the dom0 allocation

IMO, adding zone info as your patch does is better than those simpler hacks.

>
>You nailed it when you mentioned driver domains (next email); the long
>term goal is to make sure we're able to support them and hopefully
>avoid hogging that memory unnecessarily for non-dma uses. Thanks for
>noticing ;^)
>
>(i was also glad Ian brought up numa, i had forgotten about it and this
>is probably a good time to think about that while i'm tearing up this
>code)
>
>sRp

Yep. ;v)

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

