
Re: [Xen-devel] NUMA TODO-list for xen-devel



On Thu, 2012-08-02 at 01:04 +0000, Zhang, Yang Z wrote:
> >     - Automatic placement at guest creation time. Basics are there and
> >       will be shipping with 4.2. However, a lot of other things are
> >       missing and/or can be improved, for instance:
> > [D]    * automated verification and testing of the placement;
> >        * benchmarks and improvements of the placement heuristic;
> > [D]    * choosing/building up some measure of node load (more accurate
> >          than just counting vcpus) onto which to rely during placement;
> >        * consider IONUMA during placement;
> We should consider two things:
> 1. Dom0 IONUMA: devices used by dom0 should get their DMA buffers from
> the node on which they reside. Currently, dom0 allocates DMA buffers
> without providing the node info to the hypercall.
> 2. Guest IONUMA: when a guest boots up with a pass-through device, we
> need to allocate its memory from the node where the device resides, so
> that subsequent DMA buffer allocations come from that node, and we need
> to let the guest know the IONUMA topology. This relies on guest NUMA
> support.
> This topic was mentioned in xen summit 2011:
> http://xen.org/files/xensummit_seoul11/nov2/5_XSAsia11_KTian_IO_Scalability_in_Xen.pdf
> 
Seems fine. I was aware of that presentation, and I have now added these
details to the Wiki page (sorry for the delay). Are you (or someone in
your group) perhaps working, or planning to work, on this? To make the
discussion a bit more concrete, I have put a few rough sketches below,
covering the node load measure and the two IONUMA points.
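On the node load measure: here is a very rough, purely illustrative
sketch of a per-node score combining vcpu pressure and memory pressure,
instead of a plain vcpu count. None of the struct fields, weights or
function names below exist in libxl; they are assumptions for the sake
of discussion, and the 60/40 weighting in particular would have to be
tuned with benchmarks.

/*
 * Hypothetical node "load" metric, going beyond a plain vcpu count.
 * Struct fields and weights are assumptions, not existing Xen/libxl code.
 */
#include <stdio.h>

struct node_stats {
    unsigned int vcpus;       /* vcpus with affinity to this node */
    unsigned int pcpus;       /* physical cpus in this node */
    unsigned long free_mkb;   /* free memory on this node, in KiB */
    unsigned long total_mkb;  /* total memory on this node, in KiB */
};

/* Lower score == better candidate for placing a new guest. */
static double node_load_score(const struct node_stats *n)
{
    double cpu_pressure = (double)n->vcpus / (double)n->pcpus;
    double mem_pressure = 1.0 - (double)n->free_mkb / (double)n->total_mkb;

    /* Arbitrary weighting, to be tuned by benchmarking. */
    return 0.6 * cpu_pressure + 0.4 * mem_pressure;
}

int main(void)
{
    struct node_stats nodes[] = {
        { .vcpus = 6, .pcpus = 8, .free_mkb = 2u << 20, .total_mkb = 8u << 20 },
        { .vcpus = 2, .pcpus = 8, .free_mkb = 6u << 20, .total_mkb = 8u << 20 },
    };

    for (unsigned int i = 0; i < 2; i++)
        printf("node %u: load %.3f\n", i, node_load_score(&nodes[i]));

    return 0;
}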
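On Dom0 IONUMA: the dom0-side sketch below reads the device's node from
Linux sysfs (/sys/bus/pci/devices/<BDF>/numa_node) and shows where a node
hint could be handed to the allocation path. xc_alloc_dma_on_node() is a
made-up name: no such node-aware interface exists today, which is exactly
the gap you are pointing at.

/*
 * Find which NUMA node a PCI device sits on, so a DMA buffer allocation
 * request could carry that node as a hint.
 */
#include <stdio.h>

/* Returns the device's NUMA node, or -1 if unknown (sysfs reports -1 too). */
static int pci_device_node(const char *bdf)
{
    char path[128];
    FILE *f;
    int node = -1;

    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/numa_node", bdf);
    f = fopen(path, "r");
    if (!f)
        return -1;
    if (fscanf(f, "%d", &node) != 1)
        node = -1;
    fclose(f);

    return node;
}

/* Placeholder: a node-aware DMA allocation path does not exist yet. */
static void *xc_alloc_dma_on_node(size_t size, int node)
{
    fprintf(stderr, "would allocate %zu bytes of DMA memory on node %d\n",
            size, node);
    return NULL;
}

int main(void)
{
    const char *bdf = "0000:03:00.0";   /* example device */
    int node = pci_device_node(bdf);

    xc_alloc_dma_on_node(1 << 20, node);
    return 0;
}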
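On Guest IONUMA: on the toolstack side, a first step could be to make the
placement logic prefer the node(s) the pass-through devices reside on.
The sketch below just takes a majority vote among the devices' nodes;
the real heuristic would have to combine this with the load metric above,
and the chosen node would also have to be reflected in the virtual
topology exposed to the guest (which is the part that depends on guest
NUMA support).

/*
 * Pick the host node the placement logic should prefer for a guest,
 * given the nodes of its pass-through devices. Majority vote only,
 * purely for illustration.
 */
#include <stdio.h>

#define MAX_NODES 64

/* dev_nodes[i] is the host node of the i-th pass-through device. */
static int preferred_node(const int *dev_nodes, unsigned int ndevs)
{
    unsigned int count[MAX_NODES] = { 0 };
    unsigned int best_count = 0;
    int best = -1;

    for (unsigned int i = 0; i < ndevs; i++) {
        int n = dev_nodes[i];

        if (n < 0 || n >= MAX_NODES)
            continue;           /* node unknown: ignore this device */
        if (++count[n] > best_count) {
            best_count = count[n];
            best = n;
        }
    }

    return best;                /* -1 means "no preference" */
}

int main(void)
{
    int dev_nodes[] = { 1, 1, 0 };  /* e.g. two NICs on node 1, one HBA on node 0 */

    printf("preferred node: %d\n", preferred_node(dev_nodes, 3));
    return 0;
}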

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



 

