
[Xen-changelog] [xen-unstable] x86-64: don't use xmalloc_array() for allocation of the (per-CPU) IDTs



# HG changeset patch
# User Jan Beulich <jbeulich@xxxxxxxx>
# Date 1318492954 -7200
# Node ID 46ca8ea42d4c674e0792e792300710afec3f6e24
# Parent  39df1692395884e4bf0fc45f720c12e37072a47b
x86-64: don't use xmalloc_array() for allocation of the (per-CPU) IDTs

The IDT being exactly a page in size on x86-64 (256 gate descriptors
of 16 bytes each), using xmalloc() here is rather inefficient: the
allocator's bookkeeping header pushes the request past a page, so
double the amount has to be allocated and almost an entire page is
wasted. For hot-plugged CPUs, switching to the page allocator also
eliminates one more non-order-zero runtime allocation (which is more
likely to fail once memory has become fragmented).
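
As a sanity check of the arithmetic above, here is a minimal
stand-alone C sketch (not Xen code; the 4096-byte page size and the
16-byte gate-descriptor width are written out by hand) showing that
the x86-64 IDT exactly fills one page, and why any xmalloc() header
overhead spills the allocation into a second page:

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE   4096u
#define IDT_ENTRIES 256u              /* vectors 0..255 */

int main(void)
{
    /* On x86-64, each IDT gate descriptor is 16 bytes wide. */
    unsigned int idt_bytes = IDT_ENTRIES * 16u;

    assert(idt_bytes == PAGE_SIZE);   /* the IDT exactly fills one page */

    /* Any bookkeeping header xmalloc() prepends makes the footprint
     * exceed PAGE_SIZE, so the underlying allocation becomes order 1,
     * i.e. two contiguous pages, wasting almost all of the second. */
    printf("IDT: %u bytes, page: %u bytes -> xmalloc needs two pages\n",
           idt_bytes, PAGE_SIZE);
    return 0;
}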

For x86-32, however, the IDT is exactly half a page (256 gate
descriptors of 8 bytes each, i.e. 2048 bytes), so allocating a full
page would be wasteful; that subarch therefore continues to use
xmalloc() as before.

With most of the affected functions' bodies now being inside #ifdef-s,
it might be reasonable to split those parts out into subarch-specific
code...

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Acked-by: Keir Fraser <keir@xxxxxxx>
---


diff -r 39df16923958 -r 46ca8ea42d4c xen/arch/x86/smpboot.c
--- a/xen/arch/x86/smpboot.c    Thu Oct 13 10:00:13 2011 +0200
+++ b/xen/arch/x86/smpboot.c    Thu Oct 13 10:02:34 2011 +0200
@@ -639,9 +639,6 @@
 {
     unsigned int order;
 
-    xfree(idt_tables[cpu]);
-    idt_tables[cpu] = NULL;
-
     order = get_order_from_pages(NR_RESERVED_GDT_PAGES);
 #ifdef __x86_64__
     if ( per_cpu(compat_gdt_table, cpu) )
@@ -650,10 +647,15 @@
         free_domheap_pages(virt_to_page(per_cpu(compat_gdt_table, cpu)),
                            order);
     per_cpu(compat_gdt_table, cpu) = NULL;
+    order = get_order_from_bytes(IDT_ENTRIES * sizeof(**idt_tables));
+    if ( idt_tables[cpu] )
+        free_domheap_pages(virt_to_page(idt_tables[cpu]), order);
 #else
     free_xenheap_pages(per_cpu(gdt_table, cpu), order);
+    xfree(idt_tables[cpu]);
 #endif
     per_cpu(gdt_table, cpu) = NULL;
+    idt_tables[cpu] = NULL;
 
     if ( stack_base[cpu] != NULL )
     {
@@ -691,19 +693,25 @@
     if ( !page )
         goto oom;
     per_cpu(gdt_table, cpu) = gdt = page_to_virt(page);
+    order = get_order_from_bytes(IDT_ENTRIES * sizeof(**idt_tables));
+    page = alloc_domheap_pages(NULL, order,
+                               MEMF_node(cpu_to_node(cpu)));
+    if ( !page )
+        goto oom;
+    idt_tables[cpu] = page_to_virt(page);
 #else
     per_cpu(gdt_table, cpu) = gdt = alloc_xenheap_pages(order, 0);
     if ( !gdt )
         goto oom;
+    idt_tables[cpu] = xmalloc_array(idt_entry_t, IDT_ENTRIES);
+    if ( idt_tables[cpu] == NULL )
+        goto oom;
 #endif
     memcpy(gdt, boot_cpu_gdt_table,
            NR_RESERVED_GDT_PAGES * PAGE_SIZE);
     BUILD_BUG_ON(NR_CPUS > 0x10000);
     gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
 
-    idt_tables[cpu] = xmalloc_array(idt_entry_t, IDT_ENTRIES);
-    if ( idt_tables[cpu] == NULL )
-        goto oom;
     memcpy(idt_tables[cpu], idt_table,
            IDT_ENTRIES*sizeof(idt_entry_t));
 
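
For reference, get_order_from_bytes() as used in the hunks above
returns the smallest order such that (PAGE_SIZE << order) covers the
request, so the exactly page-sized x86-64 IDT yields order 0 and
nothing is wasted. The following is a minimal stand-alone restatement
(illustrative only; order_from_bytes() is a hypothetical local helper,
not Xen's actual implementation, and PAGE_SHIFT = 12 is assumed):

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ul << PAGE_SHIFT)

/* Smallest order with (PAGE_SIZE << order) >= nr_bytes. */
static unsigned int order_from_bytes(unsigned long nr_bytes)
{
    unsigned int order = 0;

    while ( (PAGE_SIZE << order) < nr_bytes )
        order++;
    return order;
}

int main(void)
{
    printf("order(256 * 16) = %u\n", order_from_bytes(256 * 16)); /* 0: x86-64 IDT */
    printf("order(256 * 8)  = %u\n", order_from_bytes(256 * 8));  /* 0, but half the page is wasted */
    return 0;
}

Note also that the new allocation passes MEMF_node(cpu_to_node(cpu)),
so a hot-plugged CPU's IDT is taken from its own NUMA node, matching
the placement already used for the per-CPU GDT a few lines earlier.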

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog