
RE: [Xen-devel] Xen 3.0 Status update


  • To: "Scott Parish" <srparish@xxxxxxxxxx>
  • From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
  • Date: Fri, 29 Jul 2005 00:31:42 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 28 Jul 2005 23:30:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcWTyYChZ36XP8+7RmmFSAc1qLgBagAAa61g
  • Thread-topic: [Xen-devel] Xen 3.0 Status update

 
> > > I have a patch that introduces zones into xen, and a hypercall to 
> > > request dmaable memory, which i've made
> > > xen_contig_memory() use.
> > 
> > The hypercall should probably pass in the 'order' of the address
> > limit required for the allocation. There are a few stupid devices
> > that require memory below 2GB etc. (e.g. aacraid)
> 
> This is with the MEMOP_decrease_reservation hypercall, which 
> is already using up all of its allotted arguments. It's been a 
> while, but it didn't look like it was going to be easy 
> to raise the limit of 6 arguments on x86_32.

extent_order only needs to be a byte parameter, so it would be
reasonable to have the next byte of the word be the addr_limit_order.
(We might want a separate alignment order in future too).

> > > Unfortunately, there still seems to be some places where kmallocs 
> > > are done for dma buffers. (i tried putting all linux memory into 
> > > ZONE_NORMAL and caught a couple of these places)
> > 
> > Can you give examples? What size are the allocations? Do you know
> > what the official position is, i.e. is using kmalloc with ZONE_DMA
> > deprecated?
> 
> I have no idea about official positions of the Linux kernel.

I guess it's probably allowed for sub-page allocations. 

Hopefully the s/w iommu can take care of these at map time.

Ian 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

