[Xen-changelog] [linux-2.6.18-xen] [IA64] improve response time in dom 0 at creating a guest domain
# HG changeset patch
# User Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
# Date 1217906378 -32400
# Node ID 678ad99920c897f247c37d3de14827c547e664c5
# Parent  0deba952a6d3d2a58c404feefea2bccb3071089a
[IA64] improve response time in dom 0 at creating a guest domain

The hypercall used to take several hundred milliseconds; with this
patch it takes around 5 milliseconds. The time for one hypercall
should be smaller than a vcpu time slice.

Signed-off-by: Akio Takebe <takebe_akio@xxxxxxxxxxxxxx>
---
 arch/ia64/xen/xcom_privcmd.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff -r 0deba952a6d3 -r 678ad99920c8 arch/ia64/xen/xcom_privcmd.c
--- a/arch/ia64/xen/xcom_privcmd.c	Mon Jul 28 17:24:40 2008 +0900
+++ b/arch/ia64/xen/xcom_privcmd.c	Tue Aug 05 12:19:38 2008 +0900
@@ -437,15 +437,18 @@ xencomm_privcmd_memory_reservation_op(pr
	 * may cause the soft lockup warning.
	 * In order to avoid the warning, we limit
	 * the number of extents and repeat the hypercall.
-	 * The following value is determined by experimentation.
-	 * If the following limit causes soft lockup warning,
+	 * The following value is determined by evaluation.
+	 * Time of one hypercall should be smaller than
+	 * a vcpu time slice. The time with current
+	 * MEMORYOP_MAX_EXTENTS is around 5 msec.
+	 * If the following limit causes some issues,
	 * we should decrease this value.
	 *
	 * Another way would be that start with small value and
	 * increase adoptively measuring hypercall time.
	 * It might be over-kill.
	 */
-#define MEMORYOP_MAX_EXTENTS	(MEMORYOP_XENCOMM_LIMIT / 4)
+#define MEMORYOP_MAX_EXTENTS	(MEMORYOP_XENCOMM_LIMIT / 512)
	while (nr_extents > 0) {
		xen_ulong_t nr_tmp = nr_extents;
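For context, below is a minimal, self-contained sketch of the batching
pattern this patch tunes. It is not the real kernel code: the value of
MEMORYOP_XENCOMM_LIMIT and the do_memory_op_stub() function are
illustrative stand-ins, and only the loop shape mirrors the
"while (nr_extents > 0)" loop visible in the hunk above. The idea is
that instead of issuing one hypercall covering all extents, which can
run for hundreds of milliseconds and trip the soft lockup watchdog,
the work is split into batches of at most MEMORYOP_MAX_EXTENTS so that
each individual hypercall stays within a vcpu time slice.

    #include <stdio.h>

    typedef unsigned long xen_ulong_t;

    /* Hypothetical stand-ins; the real definitions live in
     * arch/ia64/xen/xcom_privcmd.c. The limit value here is
     * purely illustrative. */
    #define MEMORYOP_XENCOMM_LIMIT  (1UL << 20)
    #define MEMORYOP_MAX_EXTENTS    (MEMORYOP_XENCOMM_LIMIT / 512)

    /* Stub standing in for the actual memory-op hypercall;
     * it just reports how many extents one call would cover. */
    static void do_memory_op_stub(xen_ulong_t nr_extents)
    {
        printf("hypercall covering %lu extents\n", nr_extents);
    }

    int main(void)
    {
        xen_ulong_t nr_extents = 3 * MEMORYOP_MAX_EXTENTS + 7;

        /* Batch the request so that no single hypercall exceeds
         * a vcpu time slice (around 5 msec with the new divisor). */
        while (nr_extents > 0) {
            xen_ulong_t nr_tmp = nr_extents;

            if (nr_tmp > MEMORYOP_MAX_EXTENTS)
                nr_tmp = MEMORYOP_MAX_EXTENTS;

            do_memory_op_stub(nr_tmp);
            nr_extents -= nr_tmp;
        }
        return 0;
    }

The trade-off the comment in the patch mentions is between this fixed
cap, chosen by measurement, and an adaptive scheme that would start
with a small batch and grow it while timing each hypercall; the author
judges the adaptive approach to be overkill here.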