Re: [Xen-devel] [PATCH v3 0/4] sched: credit2: introduce per-vcpu hard and soft affinity
On Wed, 2015-03-25 at 23:48 -1000, Justin T. Weaver wrote:
> Here are the results I gathered from testing. Each guest had 2 vcpus and 1GB
> of memory.
>
Hey, thanks for doing the benchmarking as well! :-)

> The hardware consisted of two quad core Intel Xeon X5570 processors
> and 8GB of RAM per node. The sysbench memory test was run with the num-threads
> option set to four, and was run simultaneously on two, then six, then ten VMs.
> Each result below is an average of three runs.
>
> -------------------------------------------------------
> | Sysbench memory, throughput MB/s (higher is better) |
> -------------------------------------------------------
> | #VMs | No affinity | Pinning | NUMA scheduling      |
> |   2  |   417.01    | 406.16  |     428.83           |
> |   6  |   389.31    | 407.07  |     402.90           |
> |  10  |   317.91    | 320.53  |     321.98           |
> -------------------------------------------------------
>
> Despite the overhead added, NUMA scheduling performed best in both the two and
> ten VM tests.
>
Nice.

Just to be sure, is my understanding of the column labels accurate?
 - 'No affinity' == neither hard nor soft affinity set for any VM
 - 'Pinning' == hard affinity used to pin VMs to NUMA nodes (evenly, I
   guess?); soft affinity untouched
 - 'NUMA scheduling' == soft affinity used to associate VMs to NUMA nodes
   (evenly, I guess?); hard affinity untouched

Also, can you confirm that all the hard and soft affinity settings were
made at VM creation time, i.e., that they were effectively influencing
where the memory of the VMs was being allocated? (It looks like so, from
the numbers, but I wanted to be sure...)

Thanks again and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
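
For reference, the two configurations Dario is distinguishing map onto the
xl guest configuration roughly as sketched below. This is a minimal
illustration, not taken from the thread: the option spellings follow the
xl.cfg hard/soft affinity keys, and the choice of node 0 and the exact
sysbench invocation are assumptions.

    # "Pinning" (hard affinity): the guest's vcpus may only run on the
    # pcpus of NUMA node 0. Set in the config at creation time, so the
    # guest's memory is allocated from that node as well.
    cpus = "node:0"

    # "NUMA scheduling" (soft affinity): the guest's vcpus *prefer* node
    # 0's pcpus but may run anywhere; this is the per-vcpu soft affinity
    # that the credit2 series under discussion adds support for.
    cpus_soft = "node:0"

    # Benchmark as described in the quoted message (exact flags assumed,
    # using the older sysbench option syntax):
    #   sysbench --test=memory --num-threads=4 run

Per Dario's question above, both settings only influence memory placement
if they are present at domain creation time; changing affinity on an
already-running guest does not move its memory.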